Software as a Verb: What AI Might Do to the App as We Know It
Over the last few weeks, I did something that surprised me. I built a couple of applications, each one tailored exactly to how I think, how I work, and what I actually need. Not hacked-together workarounds. Not generic tools I’d bent to my will. Tools that were genuinely mine, without compromise. And the effort it took was a fraction of what it would have been just a couple of years ago.
The process has a name, “vibe coding,” though the term has earned some skepticism, fairly so. It conjures someone blindly clicking “accept” on whatever the AI suggests, producing something that half-works until it doesn’t. For people with some technical footing, that framing undersells what’s actually happening. It’s less vibing and more collaborating: working alongside the AI through conversation and iteration, bringing your own judgment to bear on the output, knowing when to push back. The AI handles the grunt work; you handle the decisions. And it got me thinking less about what I built and more about what it implies.
Because if I can do that — and I have some fluency with software, but I’m not a software engineer — what happens when the tools get just a little bit better? What happens when the person who needs something built doesn’t need to know how to build it at all?
I think we might be standing at the edge of one of the bigger quiet shifts in the history of technology: the end of software as a noun.
Software has always been a thing you have
For roughly forty years, the dominant model of software has been product-shaped. You acquire it, install it, subscribe to it. It lives on your computer or in your browser. It is designed by someone else, for a general version of you, and you adapt yourself to it at least as much as it adapts to you.
This model produced remarkable things. It also meant that every piece of software you’ve ever used was built for someone slightly different from you. Close enough to be useful. Different enough that your specific needs — your workflow, your context, your edge cases — were always a workaround, not a consideration.
What I’m starting to imagine, and what I think is closer than most people realize, is software that works the other way around. Not a thing you have, but something that happens when you need it. Software as a verb.
Imagine a parent planning a child’s birthday party. Rather than juggling a calendar app, a group text thread, a notes doc, and three browser tabs, an AI assembles something purpose-built for this exact situation: it connects to the right contacts, generates invitations, collects RSVPs, tracks dietary restrictions, pulls directions to the venue. When the party’s over, it stores what’s worth keeping and dissolves. No subscription. No settings menu. No features you’ll never use. Just a capability that materialized, did its job, and stepped back.
That’s a different relationship with software than most of us have ever had.
What this opens up
The implications for users, especially those who’ve always felt underserved by mainstream software, are significant.
Accessibility, for one, stops being a feature request and becomes a default. Today, accessibility is often an afterthought: screen reader support patched in late, font sizes adjusted under pressure, workflows redesigned only after someone complains. It gets built for a generalized user and retrofitted for everyone else. In a world of bespoke, AI-assembled software, the app knows you before it builds itself. Your needs aren’t accommodated after the fact, and you don’t have to adapt to what you’re given; your needs are foundational to the software. High contrast, simplified navigation, adjusted reading levels: not an accessibility mode, just the app.
More broadly, personalized software has the potential to finally make sense of the complicated lives most of us actually live. Consider what a healthcare app knows about you today: your appointments, maybe your prescriptions. What it doesn’t know is that you’re also a caregiver for a parent, that you work irregular hours, that you share insurance with a spouse whose plan has different rules. No product team can anticipate that web of context. But an AI assembling something specifically for you, drawing on what it knows about your situation, can start to account for it in ways that no off-the-shelf product ever could.
What’s still in the way
Here’s where I want to be careful, because the honest version of this future is more complicated than the exciting version.
William Gibson, one of my favorite authors, once put it plainly: the future is already here; it’s just not evenly distributed. My vibe coding experience is a version of that. The tools exist. I can use them. But I got there because I already understood software architecture, deployment, and, crucially, how to know what I wanted. That knowledge was the real prerequisite, not the tools themselves. For most people, that gap is real, and it isn’t small.
But here’s what I think matters: that gap doesn’t need to close on the human side. It needs to close on the AI’s side. The expertise the user lacks shouldn’t be a prerequisite they’re expected to acquire. It should be something the AI brings to the table.
Think about how a financial advisor works. You don’t walk in knowing what asset allocation you want or how to optimize your tax exposure. You walk in with a feeling: “I want to be more secure,” “I want to retire earlier,” “I’m worried about the next ten years.” The advisor’s job is to ask the right questions, surface what you actually mean, and translate that into a concrete plan. You provide the goals and the context. They provide the expertise and the execution.
A doctor works the same way. You don’t walk in having diagnosed yourself. You walk in with symptoms: tired, unfocused, something feels off. The doctor asks questions, runs tests, connects dots you didn’t know were connected, and arrives at a recommendation. Your job is to describe your experience. Their job is to know what to do with it.
That’s the model for AI-assembled software at its best — not a tool you operate, but a skilled professional you brief. The intent problem — the challenge of understanding not just what someone asks for but what they actually mean — is real, but it’s increasingly the AI’s problem to solve, not the user’s. The AI needs to learn to ask better questions, notice when stated needs and actual needs diverge, and bring its own knowledge of what makes software work well so the user doesn’t have to. A good AI gets there the same way a good contractor does: you describe what you want the room to feel like, and it handles the building codes.
The other challenge is trust. An app you’ve used for years, built by a known company, carries a certain implicit accountability: there’s a brand on the line, a support channel, a privacy policy someone theoretically read. An ephemeral app assembled in the moment by an AI raises harder questions: What happens when it gets something wrong? Who’s responsible when it mishandles sensitive data? How do you audit something that no longer exists? Trust in this model won’t come from brand familiarity; it’ll have to come from confidence in the AI itself and in the organizations whose data it’s working with. That’s a different kind of trust to build, and it will take time and probably a few public failures before the norms solidify.
What organizations need to do now
This is where I think the strategic stakes are highest, and where most organizations are currently unprepared.
The instinct, understandably, is to think about AI as a way to improve existing products: smarter features, better recommendations, more personalization inside the apps companies already own. That’s reasonable in the short term. But it’s the wrong frame for what’s coming.
If users increasingly generate their own software, purpose-built for their needs, the role of organizations shifts. You stop being the builder of the interface and start being the provider of the underlying capability. Your job is to make your data, services, and logic accessible through clean, reliable APIs, so that a user’s AI can make use of them seamlessly and safely.
Think of it less like building an app and more like laying pipe. The water goes where the user needs it, in the form the user needs it, through channels you maintain but no longer control.
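To make that concrete, here is a minimal sketch of what “providing the capability” might look like, assuming a hypothetical venue-booking service. The endpoint paths, the `/.well-known/capabilities` convention, and the data shapes are illustrative inventions, not a real standard or any particular company’s API; the point is only that the service exposes structured data and a machine-readable description of itself, rather than a human interface.

```typescript
// Minimal sketch: a service exposing one capability ("list open slots")
// plus a machine-readable description of itself, so an AI agent acting
// on a user's behalf can discover and call it. All names are hypothetical.
import { createServer } from "node:http";

// The capability itself: structured data, no UI attached.
const openSlots = [
  { date: "2025-07-12", time: "14:00", capacity: 20 },
  { date: "2025-07-13", time: "10:00", capacity: 12 },
];

// A self-description an agent can read before deciding how to call us.
const description = {
  name: "venue-slots",
  endpoints: [
    {
      path: "/slots",
      method: "GET",
      returns: "array of { date, time, capacity }",
      purpose: "List bookable time slots for the venue",
    },
  ],
};

createServer((req, res) => {
  res.setHeader("Content-Type", "application/json");
  if (req.url === "/.well-known/capabilities") {
    res.end(JSON.stringify(description)); // agent discovers what we offer
  } else if (req.url === "/slots") {
    res.end(JSON.stringify(openSlots)); // agent consumes the capability
  } else {
    res.statusCode = 404;
    res.end(JSON.stringify({ error: "unknown path" }));
  }
}).listen(3000);
```

The interesting part isn’t the endpoint; it’s that the service describes what it can do in a form software can read, so the interface the user actually sees can be assembled somewhere else, by an AI working for them.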
Organizations that understand this will invest now in open, well-documented integrations. They’ll think about data portability and consent not as compliance exercises but as competitive advantages. They’ll recognize that the customer’s AI is going to do the job their app once did, and decide whether to fight that or enable it.
The ones that fight it — that try to wall off their data, lock users into proprietary interfaces, insist on owning the experience — will find themselves on the wrong side of the same shift that made other once-dominant companies obsolete.
The future this describes isn’t science fiction. It’s already here in fragments: in the tools people like me are using today, in the APIs some organizations are already opening up, in the quiet shift happening at the edges of how software gets made.
The organizations that will matter in that future aren’t necessarily the ones with the best app. They’re the ones with clean data, accessible services, and systems that are ready to be worked with by an AI acting on someone else’s behalf. The ingredient, not the dish.
That’s a different thing to build toward. But the time to start is before the shift is obvious to everyone.