{"version":"https://jsonfeed.org/version/1","title":"Jim Nielsen’s Blog","home_page_url":"https://blog.jim-nielsen.com","feed_url":"https://blog.jim-nielsen.com/feed.json","author":{"name":"Jim Nielsen","url":"https://jim-nielsen.com/"},"items":[{"id":"/2024/making-films-and-making-websites/","date_published":"2024-03-19T19:00:00Z","title":"Making Films and Making Websites","url":"https://blog.jim-nielsen.com/2024/making-films-and-making-websites/","tags":[],"content_html":"
I recently listened to an episode of the Scriptnotes podcast interviewing Christopher Nolan, director of films such as The Dark Knight, Inception, and Oppenheimer.
\nGenerally, it’s a fascinating look at the creative process. More specifically, I couldn’t help but see the parallels between making websites and making films.
\nCoincidentally, I recently read a post from Baldur Bjarnason where he makes this observation:
\n\n\nSoftware is a creative industry with more in common with media production industries than housebuilding.
\nAs such, a baseline intrinsically-motivated curiosity about the form is one of the most powerful assets you can have when doing your job.
\n
You definitely hear Nolan time and again express his fascination and curiosity with the form of film making.
\nAs someone fascinated with the form of making websites, I wanted to jot down what stuck out to me.
\nHere’s Nolan talking about the tension between what a film starts as (a script, i.e. words on paper) and what the film ends up as (a series of images on screen).
\n\n\nEveryone’s struggling against, “Okay, how do I make a film on the page?” I’m fascinated by that...I enjoy the screenplay format very much…but there are these endless conundrums. Do you portray the intentionality of the character? Do you portray a character opens a drawer looking for a corkscrew?
\n
There’s a delicate balance the screenplay form must strike: what needs to be decided upon and communicated up front and what is left up to the interpretation of the people involved in making the film once the process starts?
\n\n\nThe problem is you have to show the script to a lot of people who aren’t reading your screenplay as a movie. They’re reading it as a screenplay. They’re reading it for information about what character they’re playing or what costumes are going to be in the film or whatever that is. Over the years, it varied project to project, but you try to find a middle ground where you’re giving people the information they need, but you’re not violating what you consider your basic principles as a writer.
\n
However, as much as you want the screenplay to be great and useful, moviegoers aren’t paying to read your screenplay. They’re paying to watch your film. Nolan notes how he always re-centers himself on this idea, regardless of what is written in the screenplay.
\n\n\nI always try to view the screenplay first and foremost as a movie that I’m watching. I’m seeing it as a series of images. I’m imagining watching it with an audience.
\n
Interestingly, Nolan notes that the screenplay is a medium that inherently wants the editing process to be intertwined in its form. If you don’t leverage that, you’re not taking advantage of the screenplay as a tool.
\n\n\n[movies are] a medium that enjoys this great privilege of Shot A plus Shot B gives you Thought C...that’s what movies do. That’s what’s unique to the medium.
\n
A script is words on paper. A film is an interpretive realization of those words as a series of images.
\nBut it’s even more than that. Just think of what it takes for words on paper to become a film.
\nIt may seem obvious, but a screenplay is not a film. It’s a tool in service of making a film.
\nIn other words, what you use to make a website is not the website itself.
\nWhen a movie is released in theaters, it would be silly to think of its screenplay as the “source of truth”. At that point, the finished film is the “source of truth”. Anything left in the screenplay is merely a reflection of previous intention.
\nSo do people take the time to go back and retroactively update the screenplay to accurately reflect a finished film?
\nNo, that would be silly. The finished film is what people pay to see and experience. It is the source of truth.
\nSimilarly, in making websites, the only source of truth is the website people access and use. Everything else — from design system components to Figma mocks to Miro boards to research data et al. — is merely a tool in service of the final form.
\nThat’s not to say there’s no value in keeping things in sync. Does the on-set improvisation of an actor or director require backporting their improvisations to the screenplay? Does cutting a sequence in the editing process mean going back to the screenplay to make new edits? Only when viewed through the lens of the screenplay as a working tool in service of a group of people making a film.
\nThe screenplay is an evolving document. A screenplay is not a film, but a tool that allows disparate groups of talented individuals to get what they need to do their job in service of making a film.
\nNolan emphasizes this a few times, noting that the screenplay is not what moviegoers ultimately experience. They come to watch a film, not read a script.
\nAs individual artisans involved in the process of making websites, it’s easy to lose sight of this fact. Often more care is poured into the deliverable of your specialized discipline than into the final product, and blame for quality gets depersonalized — “It’s not my fault, my mocks were pixel perfect!”
\nToo often websites suffer from the situation where everyone is responsible for their own little part in making the website but nobody’s responsible for the experience of the person who has to use it.
\nNolan: writing words on paper (screenplay) in service of making a series of images people experience (a film).
\nMe: designing visuals in Figma (mocks) in service of making interactive software people experience (a website).
\n\n Comment? Reply via:\n \n Email, Mastodon, or\n Twitter.\n
\n \n \n\nI loved this post from Chris Enns (via Robb Knight) where he outlines the rabbit hole of links he ventured down in writing that post.
\nIt felt fun and familiar, as that’s how my own browsing goes, e.g.
\n“I saw X and I clicked it. Then I saw Y, so I clicked that. But then I went back, which led me to seeing Z. I clicked on that, which led me to an interesting article which contained a link to this other interesting piece. From there I clicked on…”
\nBrowsing the web via hyperlinks is fun! That’s surfing!
\nDiscovering things via links is way more fun than most algorithmically-driven discovery — in my humble opinion.
\nAs an analogy, it’s kind of like going on vacation to a new place and staying/living amongst the locals vs. staying at a manicured 5-star hotel that gives you no reason to leave. Can you really say you visited the location if you never left the hotel?
\nI suppose both exist for a reason and can be enjoyed on their own merits. But personally, I think you’re missing out on something if you stay isolated in the walled garden of the 5-star hotel.
\nSimilarly, if you never venture outside a social media platform for creation or consumption — or automated AI browsing and summaries — it’s worth asking what you’re missing.
\nHave you ever ventured out via links and explored the internet?
\n \n \n\nJohan Halse has a post called “Care” where he talks about having to provide web tech support to his parents:
\n\n\nMy father called me in exasperation last night after trying and failing to book a plane ticket. I find myself having to go over to their house and do things like switch browsers, open private windows, occasionally even open up the Web Inspector to fiddle with the markup, and I hate every second of it.
\n
Yup. Been there, done that.
\nWhy is making websites so hard?
\n\n\nthe number one cause of jank and breakage is another developer having messed with the browser’s default way of doing things
\n
So in other words, making websites isn’t hard. We make making websites hard. But why?
\nIn my experience, using default web mechanics to build websites — especially on behalf of for-profit businesses — takes an incredible amount of discipline.
\nSelf-discipline on behalf of the developer to not reach for a JavaScript re-implementation of a browser default.
\nBut also organizational discipline on behalf of a business to say, “It’s ok if our implementation is ‘basic’ but functional.” (And being an advocate for this approach, internally, can be tiring if not futile.)
\nYou think people will judge you if your website doesn’t look and feel like a “modern” website.
\nBut you know what they’ll judge you even more for? If it doesn’t even work — on the flip side, they’ll appreciate you even more for building something that “just works”.
\nAt least that’s my opinion. But then again, I’ve never built a business. So what do I know.
\n \n \n\nHere’s the link: https://shoptalkshow.com/605/
\nI sat down (again) with Chris and Dave to talk all things web.
\nThe conversation was fun and casual, mostly around topics I’ve written about recently — which is good, since those are topics I should (presumably) be able to speak on at least somewhat knowledgeably.
\nBig thanks to Chris and Dave for having me on the show!
\nAfter recording, I actually started to think more about this idea of “mouth-blogging”. And, should they ever decide to have me back on the show, I have a pitch for Chris and Dave for the next episode (or, really, any episode with a future guest).
\nUntil then, go check out episode 605.
\n \n \n\nThat’s something I’ve heard before — ChatGPT Is a Blurry JPEG of the Web — and it kind of made sense when I read it. But Paul Ford, writing in the Aboard Newsletter, helped it make even more sense in my brain.
\n\n\n[AI tools] compress lots and lots of information—text, image, more—in a very lossy way, like a super-squeezed JPEG. Except instead of a single image, it’s “The Web” or “five million images.”
\n
The nice thing about lossy compression in a JPEG is that it’s obvious. You can see the compression artifacts. But with AI? Not so much:
\n\n\nbecause of the way AI works, constantly guessing and filling in blanks, you can’t see the artifacts. It just keeps going until people have twelve fingers, stereotypes get reaffirmed, utter nonsense gets spewed, and so forth. You can see the forest, but the trees are all weird.
\n
What you end up with is text that looks like knowledge, but like a lossy JPEG, upon closer inspection you will find a lack of clarity. As Paul notes, you end up seeing the forest but zoom in to the details of any tree and stuff doesn’t looks right.
\n\n\n
AI is that: lossy compression, but on the level of knowledge not pixels.
\n\n\n
It follows that, as Paul notes, you end up with not only a tool whose output is akin to the lossy, visual artifacts of a JPEG, but a tool whose output introduces into the world the cognitive and social equivalent of those big blocky compression artifacts of a JPEG.
\nAs more and more people create, consume, and communicate with AI, more and more people will begin to understand themselves through a lens of lossiness — a lack of clarity. As Marshall McLuhan said: we shape our tools and then our tools shape us.
\nPaul raises one last parallel: AI is like a “slightly high intern”:
\n\n\nYou can’t really trust their output, but they do help you move things along. They’re good at using the web to gather stuff. They’re bright [but their] teen brains can’t quite figure out why you want this stuff, just that they have to do it...So they do what you ask, but they fill in the blanks with whatever comes to mind and hope you don’t get too annoyed about it.
\nAI is basically that—a perpetually cotton-mouthed undergrad who doesn’t really need the job—but, thank God, many hundreds of times faster. We wanted a smart robot that does our laundry and maintains our jetpacks, but we got a 19-year-old accelerated hyperstoner with no respect for copyright. But as always, we’ll work with what shows up.
\n
Indeed. We work with what shows up.
\nBut when people start saying these “slightly high interns” should and will replace us all (and our best systems) in the immediate future — I take pause.
\n \n \n\nI’m a fan of what Ink & Switch is doing in regards to local-first web development. I’ve got a few harebrained ideas myself I want to build. And I’ve written notes from a talk by Peter before.
\nWhich is all a preface for this set of notes from another talk by Peter. Here’s the talk on Vimeo and here are my notes (and opinions) from the talk.
\n\n\nIncreasing scale changes everything about a system.
\n
Tools that work for one magnitude of scale break down at other magnitudes of scale — in both directions!
\nPeter’s advice? Plan to rebuild systems, with new tools, as you hit new orders of magnitude. There is no way around this.
\nIndependent dimensions multiply problems. For example, building a web application entails accounting for (at minimum) browsers, browser versions, platforms, platform versions, screen sizes, and network speeds.
\nThe variability in and across these factors multiplies:
\n 3 browsers
× 4 latest versions of each
× 4 platforms
× 4 latest versions of each
× 3 screen sizes
× 4 network speeds
= 2,304 combinations
And that’s after drastically oversimplifying these factors (for example, there’s way more than just “desktop”, “tablet”, and “mobile” for screen sizes).
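The multiplication above can be sketched in a few lines of JavaScript — the counts are the drastically simplified ones from the talk, so treat this as a back-of-the-envelope illustration, not a real test matrix:

```javascript
// Back-of-the-envelope sketch of how independent dimensions multiply.
// These counts are the (drastically simplified) ones from the talk.
const dimensions = {
  browsers: 3,
  browserVersions: 4,
  platforms: 4,
  platformVersions: 4,
  screenSizes: 3,
  networkSpeeds: 4,
};

// Multiply the size of every independent dimension together.
const combinations = Object.values(dimensions).reduce(
  (product, count) => product * count,
  1
);

console.log(combinations); // 2304
```

Add one more dimension — say, four locales — and the number quadruples. That’s the point: each dimension multiplies, never adds.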
\nPeter notes that this is why Electron apps are a popular choice for building across platforms: they allow you to make an insanely unmanageable task just plain unmanageable.
\n\n\nThis is why you see ostensibly lazy electron apps from big companies because even worse than having to support [multiple native apps] is having to coordinate all the different people and teams to build features...if you have an iOS and an Android team, how do you get them to ship the same feature in the same quarter?
\n
It’s kind of funny when you think about it: what’s worse than the technical problem of building and maintaining multiple native applications? The people problem of building and maintaining multiple native applications.
\nTo belabor the point, I am reminded of Dave’s post about states. He enumerates a large (but not comprehensive) list of the many dimensions that affect your application. I’m not good at math, but to Peter’s point about how these compound, imagine the math on Dave’s list of states. That’s got to be a very, very large number.
\nHow can you know that you have correctly functioning code?
\nThis is what kills you with trying to control complexity. All the small variances play off of each other to create an unknowably complicated environment.
\nSometimes, you have to learn to let go of control.
\nThere’s this phenomenon people talk about where everything around computing has gotten insanely faster and yet things are still slow.
\nFor example: CPUs have increased in speed dramatically (remember those ~600MHz beige towers?) and yet computers are still slow. Also: bandwidth has increased dramatically (remember dial-up internet?) and yet websites are still slow.
\nHere’s Peter on this phenomenon (which is also called the rebound effect):
\n\n\nThe degree of complexity of a system is tied to who we are and what we’re doing over time. When we buy back some complexity by using better tools or picking a simpler environment we’re going to spend that out again eventually.
\n
It’s a funny conundrum.
\nWhy do we refactor? Because the code got too complicated. So we simplify it. And why do we simplify it? So we can add more to it over time and make it more complex again.
\nMake it simpler, so we can make it more complicated.
\nIt’s an inevitable journey. Systems grow to meet growing needs.
\nRefactors and rewrites are merely a way to reset the clock on the complexity that forever encroaches on your codebase and product.
\nPeter’s final advice:
\n\n\n\nWe can’t beat complexity, but we can be beat by it.
\n
\n \n\n Tagged in:\n #generalNotes\n
\n \n \n\nScott Jenson has a great article called “The future needs files”.
\n\n\nThe power of files comes from them being powerful nouns. They are temporary holding blocks that are used as a form of exchange between applications. A range of apps can edit a single file in a single location.
\n
Files as a medium of exchange between applications — I like that. It’s akin to the usefulness of currency.
\n\n\nThe most powerful aspect of files is that they liberate your data. Any app can see it and do something useful to it.
\n
Files represent a “data first vs app first organization”. If you’re planning a wedding, you put everything wedding related into a folder. All your data is now in one place vs. strewn across various apps.
\nDocuments — like a Notion doc — are today’s folders: they contain a list of links to “files” that will open in bespoke applications.
\nBut there are drawbacks, like interoperability. Do we want to trust our data to the success or failure of a single company?
\n\n\nFiles encapsulate a ‘chunk’ of your work and allow that chunk to be seen, moved, acted on, and accessed by multiple people and more importantly external 3rd party processes.
\n
Can you imagine working on a codebase — which is a set of files — but the files were locked to a particular IDE? Craziness.
\nPersonally, I’m a file guy. I love files. And I wish more products worked in the currency of exchange of files.
\n \n \n\nThe web has a superpower: permission-less link sharing.
\nI send you a link and as long as you have an agent, i.e. a browser (or a mere HTTP client), you can access the content at that link.
\nThis ability to create and disseminate links is almost radical against the backdrop of today’s platforms.
\nTo some, the hyperlink is dangerous and must be controlled.
\nAnd yet, we keep on linking.
\nWhy? Because it’s a web. Interconnectedness is the whole point.
\nLinks form the whole. Without links, there is no whole. No links means no web, only silos. Isolation. The absence of connection.
\nSubvert the status quo. Own a website. Make and share links.
\n \n\n Tagged in:\n #openWeb\n
\n \n \n\nI’ve been working with the latest Remix-ification of React Router and there are two things I wish I had known when I started.
\nSo I’m writing them down in case anyone else is about to start a React Router app.
\nIf you’re submitting JSON, e.g.
\nsubmit(\n { key: "value" },\n { method: "post", encType: "application/json" }\n);\n
\nIt’s good to keep your data flat because it gives you the ease and flexibility to submit the same data as a native <form>
submission later (without JavaScript).
For example: let’s say you have some data for an action in your application. You can submit this as JSON with a structure where you separate the intent of the action from the payload itself, e.g.
\n{\n "intent": "update-person",\n "payload": {\n "name": "Jim Nielsen",\n "email": "jim@example.com"\n }\n}\n
\nHowever, once you have a structure like this, it becomes more convoluted to submit and parse that same data using a native <form>
element.
But if you use a flat structure, like this:
\n{\n "intent": "update-person",\n "name": "Jim Nielsen",\n "email": "jim@example.com"\n}\n
\nThen it’s simpler to represent that same action as a form in HTML which doesn’t require JavaScript to submit:
\n<form>\n <input type="hidden" name="intent" value="update-person" />\n <input type="text" name="name" value="..." />\n <input type="text" name="email" value="..." />\n</form>\n
\nAnd getting/parsing that data from formData to a JavaScript object becomes simpler and more straightforward.
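As a sketch of that parsing step — using the hypothetical update-person fields from the example above — a flat shape converts from native form data in a single call, with no recursive un-nesting:

```javascript
// Sketch: the same flat shape parses trivially from native form data.
// Field names are the hypothetical ones from the example above.
// (FormData is available in modern browsers and in Node 18+.)
const formData = new FormData();
formData.set("intent", "update-person");
formData.set("name", "Jim Nielsen");
formData.set("email", "jim@example.com");

// One call and you're done — every field is a top-level key:
const data = Object.fromEntries(formData);
// data = { intent: "update-person", name: "Jim Nielsen", email: "jim@example.com" }
```

With the nested intent/payload shape, you’d instead have to invent a serialization convention (dotted names, JSON-in-a-hidden-input, etc.) just to round-trip the same data through a plain `<form>`.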
\nSo keep your data as flat as possible because it gives you the flexibility to write your mutations declaratively in HTML or imperatively in JavaScript, depending on the context and needs of each use case (trust me, you’ll thank yourself one day if you ever migrate to Remix and introduce a server).
\nIt’s ok to type a little more, I promise.
\nWhen coming up with your URL params, you could have patterns like this:
\n/files/:id\n/teams/:id\n
\nWhich you can access in nested routes using hooks:
\n// At the route: `/files/1234`\nconst { id } = useParams();\n// id = 1234\n
\nBut you can easily outgrow those generic param names as your features become more rich and the entities in your system develop more relationships which themselves also have IDs.
\nThen you end up with param conflicts:
\n/files/:id/users/:id\n/teams/:id/users/:id\n
\nOne solution to this might be to name your IDs for the kind of ID that they are. For example, maybe one of your IDs is a UUID (e.g. 550e8400-e29b-41d4-a716-446655440000
) whereas the other ID is just an int (e.g. 351
).
/files/:uuid/users/:id\n/teams/:uuid/users/:id\n
\nBut you can end up in the same place if, later on, you need to switch from one identifier to another (or you get another entity in your system that necessitates using the same kind of identifier).
\n/files/:uuid/users/:id\n/files/:uuid/invites/:uuid\n/teams/:uuid/users/:id\n
\nSo, given my experience, I would say: be specific in naming your params. Then the risk of namespace collisions in your params is decreased drastically and you won’t have to refactor your code as you add new entities. Plus the code is — IMO — just flat out clearer.
\n/files/:fileUuid/users/:userId\n/files/:fileUuid/invites/:inviteUuid\n/teams/:teamUuid/users/:userId\n
\nAccessing those params is now super easy anywhere in the code. In addition, finding those named params anywhere in your codebase is much easier too (vs. the generic id
).
// At route: `/teams/0001923-02930-123/users/1234`\nconst { teamUuid, userId } = useParams();\n\n// teamUuid = 0001923-02930-123..., userId = 1234\n
\n \n \n\nThe Domino’s “Pizza Tracker” is an intriguing piece of UI.
\nAs an end user, it provides the precision of detail you want in tracking your order.
\nBut think of everything it takes to make that UI possible, where every digital order routes to a local store and every local store has the hardware and software to live track and update the position of your pizza.
\nI’ve worked on a project like this, a “track my claim” in the insurance world. As a designer, it’s nice to sit at a desk and design an ideal scenario.
\nBut when the rubber of your UI hits the road of reality within an organization, people and processes often cannot bend and stretch to the automated expectations of an idealized process.
\nWhat do you end up with in that scenario? A thousand tiny compromises.
\nFor example (going back to pizza), you find out that not every store can track precisely when an order is received and when it goes into the oven, so the UI becomes a facade — dare I say a lie — where the step from “order received” to “pizza in the oven” happens only because of a timer in the UI (and a corporate policy of, “all orders must be in the oven within five minutes of being received”). It’s not a representation of reality, but a facade of it.
\nThese kinds of facades happen all the time in software. “We think this will complete, at the most, within ___ time.” So we make progress indicators that look like they’re a live, up-to-date representation of progress that poll some kind of technological feedback mechanism for live status updates. But in reality they’re merely clocks counting down from a set time of “this is the most time we think it should take”.
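A minimal sketch of that countdown-style indicator — the five-minute budget is an assumption, echoing the pizza policy above, and the function name is hypothetical:

```javascript
// Sketch of a "facade" progress bar: driven by a countdown from an
// expected duration, not by real feedback from the underlying system.
// EXPECTED_MS is an assumed budget ("the most time we think it should take").
const EXPECTED_MS = 5 * 60 * 1000; // five minutes

function facadeProgress(startedAt, now = Date.now()) {
  const elapsed = now - startedAt;
  // Cap below 100% so the bar never "finishes" before the real work does.
  return Math.min(0.95, elapsed / EXPECTED_MS);
}

console.log(facadeProgress(0, 150_000)); // 0.5 — halfway through the assumed budget
```

Notice there’s no polling, no status endpoint, no feedback loop — the only input is the clock. That’s the facade.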
\nThis is why designing UI is designing an organization.
\nYou can only design and make real a UI that matches an organization’s capabilities to deliver on its promise.
\nThis is why startups are best suited to these kinds of radical UIs tailored to reality. Their entire organizational structure, which is small, can orient around a single idea and deliver on it — then build to scale.
\nYour UIs, and the real life experiences they deliver, can only ever be as good as an organization’s capabilities to deliver on them.
\nThis is what I mean by UI=f(org): UI is a function of your organization.
\n \n \n\n