
2026-02-18

Built for mobile, now built for AI

A short theory of how products were designed for mobile—and how that's shifting as we build for AI.

How things were built for mobile

Mobile forced a specific design language. Small screens. Touch. Thumbs. Interrupt-driven: notifications, badges, pull-to-refresh. Distribution was app stores and installs. The unit of interaction was the tap; the unit of content was the feed or the card. We optimized for attention in bursts and for actions that could be completed in a few seconds. "Mobile-first" meant: assume one hand, assume interruption, assume the user is somewhere else in 30 seconds.

We learned to think in screens, in flows, in funnels. We built for discovery (search, browse, recommendations) and for retention (push, email, re-engagement). The device was personal, always-on, and location-aware. The product was an app you opened.

How we're building for AI

AI flips several of those assumptions. The interface is often a box or a thread, not a fixed set of screens. The user doesn't have to tap through a flow—they describe intent and the system proposes an outcome. Context is less "where is the user" and more "what are they trying to do and what do we know about that." The unit of interaction is the turn or the task, not the tap. Distribution is less about installs and more about where the model lives and who can invoke it.

We're still figuring out the equivalent of "mobile-first" for AI: assume conversation, assume ambiguity, assume the system can take the next step without a click. The mobile-era theory was "constrain the interface, maximize clarity of action." For AI it's closer to "expand the range of intent the system can fulfill in one go, and make the path from intent to output as short as possible." Same human desire for less time and effort—different substrate.