2026-02-18

Built for mobile, now built for AI

A short theory of how products were designed for mobile—and how that's shifting as we build for AI.

How things were built for mobile

Mobile forced a specific design language. Small screens. Touch. Thumbs. Interrupt-driven: notifications, badges, pull-to-refresh. Distribution was app stores and installs. The unit of interaction was the tap; the unit of content was the feed or the card. We optimized for attention in bursts and for actions that could be completed in a few seconds. "Mobile-first" meant: assume one hand, assume interruption, assume the user is somewhere else in 30 seconds.

We learned to think in screens, in flows, in funnels. We built for discovery (search, browse, recommendations) and for retention (push, email, re-engagement). The device was personal, always-on, and location-aware. The product was an app you opened.

Hardware mattered. Battery and network quality shaped what we shipped. Offline modes, skeleton loaders, optimistic UI: all of that was a response to the fact that mobile users are not sitting at a desk with a fiber line and patience. The constraints were visible, and they forced tradeoffs everyone could see.
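The optimistic-UI pattern mentioned above is worth making concrete: apply the change locally first so the interface feels instant, then let the network catch up, and roll back if the server rejects it. A minimal sketch in TypeScript; the `likePost` call and the `Post` shape are hypothetical stand-ins, not any real API.

```typescript
// Optimistic UI sketch: update local state immediately, reconcile with
// the (slow, flaky) network afterwards.
type Post = { id: string; liked: boolean };

// Hypothetical server call: resolves on success, rejects on failure.
async function likePost(id: string): Promise<void> {
  // ...network request would go here...
}

async function toggleLike(post: Post): Promise<void> {
  const previous = post.liked;
  post.liked = !previous; // 1. optimistic local update: the UI feels instant
  try {
    await likePost(post.id); // 2. fire the real request
  } catch {
    post.liked = previous; // 3. roll back on failure and surface an error
  }
}
```

The design choice is the point: the user's action succeeds visually before the network confirms it, which is exactly the tradeoff a bad connection forces.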

What mobile got wrong

Mobile-first also trained us to over-optimize for engagement metrics that do not equal value. Infinite scroll, dark patterns, notification spam: the same tap-centric world made those tactics possible. We got very good at getting people back into the app without always asking whether they left better off.

So when we say "built for mobile," we should mean both the good constraints (clarity, speed, respect for partial attention) and the bad habits we inherited.

How we're building for AI

AI flips several of those assumptions. The interface is often a box or a thread, not a fixed set of screens. The user doesn't have to tap through a flow—they describe intent and the system proposes an outcome. Context is less "where is the user" and more "what are they trying to do and what do we know about that." The unit of interaction is the turn or the task, not the tap. Distribution is less about installs and more about where the model lives and who can invoke it.

Latency shows up differently. On mobile you worried about round trips to the server for a screen. With AI you worry about the time to first token, the length of a completion, and whether the user can steer mid-generation. Errors look like confident hallucinations instead of a 404.
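The shift in what you measure is easy to see in code: with a streamed completion, the number users feel is the gap before the first chunk, not the total time. A sketch in TypeScript, assuming only a generic async-iterable of text chunks rather than any particular model API.

```typescript
// Measure time-to-first-token (TTFT) and total time for a streamed response.
// `stream` stands in for any async-iterable of text chunks; no specific API assumed.
async function measureStream(stream: AsyncIterable<string>) {
  const start = Date.now();
  let firstTokenAt: number | null = null;
  let text = "";
  for await (const chunk of stream) {
    if (firstTokenAt === null) firstTokenAt = Date.now(); // the moment users feel
    text += chunk;
  }
  return {
    ttftMs: (firstTokenAt ?? Date.now()) - start, // time to first token
    totalMs: Date.now() - start, // time to the full completion
    text,
  };
}
```

On mobile you would have instrumented the round trip for the whole screen; here the interesting metric is `ttftMs`, because a fast first token makes a long completion feel responsive.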

We're still figuring out the equivalent of "mobile-first" for AI: assume conversation, assume ambiguity, assume the system can take the next step without a click.

Mobile's theory was: constrain the interface, maximize clarity of action. AI pushes toward a wider range of intent fulfilled in one go, with a shorter path from intent to output. Same appetite for less time and effort; different substrate.

The overlap

The through line is still human. People want outcomes without learning your internal vocabulary. Mobile taught us to reduce taps. AI pushes us to reduce turns: fewer clarifying questions, fewer dead ends, more "I understand what you meant" on the first try. If we only copy mobile patterns (cards, feeds, infinite menus) we will miss what is new. If we only chase the model's capabilities, we will forget that someone still has to feel in control.

The best products in this next phase will borrow discipline from mobile (clarity, speed, respect for context) without pretending the interface is the same. Different substrate, same old job: get people from thought to output with less pain in the middle.