There is a quiet contradiction at the center of how the industry talks about AI in product. On one side, you have the pitch: AI is transformative, it changes everything, it creates entirely new ways of doing things. On the other side, you have what I actually observe: most users don't want new ways of doing things. They want the things they already do to work better, faster, and with less effort.
We are in a transition. And from where I stand, the products winning with AI are not necessarily the ones that built a new interface; increasingly, they are the ones that found a way to disappear into an existing one.
That shift is not yet mainstream thinking. But I think it's where things are heading.
The classic product reflex
When a product team gets access to a capable AI model, the reflex is almost always the same: build a new feature. A chat window. A copilot panel. An "AI assistant" button that lives somewhere on the screen. The instinct makes sense — you want to demonstrate the capability, and a new surface makes it legible. You can point to it in a demo. You can track adoption. You can write a changelog entry.
But this approach carries a cost that I don't think gets talked about enough. It asks users to do something new.
It introduces a new mental model, a new workflow decision, a new question the user has to answer in real time: should I use this, or should I just do it myself? That question is friction. And friction — even when the value on the other side is genuine — is one of the most reliable predictors of low adoption I've seen in practice.
The alternative is less intuitive, harder to ship, and almost impossible to screenshot for a product launch post. It also tends to work better.
Intercepting the existing flow
My working definition of invisible AI is simple: it doesn't ask users to come to it. It positions itself inside a behavior the user is already performing, and makes that behavior smarter.
Think about the difference between a standalone "AI email composer" feature and a model that observes how a user edits a draft and quietly adjusts suggestions to match their writing style over time. The first creates a new habit loop the user has to consciously opt into. The second improves something the user was already doing, without asking them to think about it.
The value compounds. The friction is close to zero.
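To make the second pattern concrete, here is a minimal sketch, with entirely hypothetical names and heuristics of my own (no real product's API): a style profile that updates from the save path the user already goes through, and quietly re-ranks suggestions to match, with no new surface to opt into.

```python
from dataclasses import dataclass

@dataclass
class StyleProfile:
    """Running estimate of a user's writing style, learned passively."""
    avg_sentence_len: float = 15.0  # prior, before any edits are observed
    samples: int = 0

    def observe(self, text: str) -> None:
        """Hooked into the existing save path: no new UI, no opt-in."""
        sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                     if s.strip()]
        if not sentences:
            return
        mean_len = sum(len(s.split()) for s in sentences) / len(sentences)
        # Exponential moving average: recent edits count for more.
        self.samples += 1
        alpha = 1.0 / min(self.samples, 10)
        self.avg_sentence_len += alpha * (mean_len - self.avg_sentence_len)

def rank_suggestions(profile: StyleProfile, candidates: list[str]) -> list[str]:
    """Re-rank model suggestions by distance from the learned style."""
    def distance(text: str) -> float:
        return abs(len(text.split()) - profile.avg_sentence_len)
    return sorted(candidates, key=distance)
```

The point of the sketch is where the calls live: observe() runs inside the save handler the product already has, and rank_suggestions() runs inside the existing autocomplete path, so the user never makes a new workflow decision.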
This is not a subtle distinction — at least not in my experience. It's the difference between AI as a destination and AI as infrastructure. The former competes for user attention. The latter earns trust by staying out of the way.
Three tensions worth taking seriously
That said, I don't think invisible AI is obviously the right call in every context. There are real tensions here that I find myself coming back to, and I don't think they resolve cleanly.
The first is visibility versus trust. If users don't see the AI working, do they actually value it? In B2C, this is often manageable — most users don't need to understand the mechanism, they need the outcome. But in B2B, particularly in regulated industries or anywhere a human is held accountable for the output, invisibility can create anxiety. Users want to know what shaped the decision. My intuition here is that the answer isn't to make the AI louder, but to make its reasoning available on demand — present when needed, invisible by default.
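One way to read "reasoning available on demand" in code (a hypothetical sketch, not any particular product's implementation) is an outcome object that always carries its own rationale but only renders it when the user explicitly asks:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssistedResult:
    """An AI-shaped outcome that stays quiet until the user asks why."""
    value: str                  # what the user sees in the normal flow
    rationale: tuple[str, ...]  # what shaped it, surfaced only on demand

    def render(self, explain: bool = False) -> str:
        if not explain:
            return self.value  # default: the mechanism stays invisible
        bullets = "\n".join(f"  - {r}" for r in self.rationale)
        return f"{self.value}\n\nWhy this was suggested:\n{bullets}"
```

The design choice is that the explanation is captured at generation time rather than reconstructed later, so an accountable user in a B2B context can always pull it up, while everyone else sees only the outcome.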
The second tension is more strategic, and it's one I find genuinely difficult. Friction, counterintuitively, can drive short-term adoption. A new AI feature creates a moment — a reason for users to re-engage, for sales to talk about, for the product to feel alive. Invisible improvements don't generate that moment. They show up in retention curves and satisfaction scores, not in launch-day traffic spikes. If your business is measured on short-term activation, invisible AI will underperform on the metrics you're tracking, even if it outperforms on the ones that actually matter.
The third tension is the hardest one for me: who gets to define what "efficient" means? When a product team decides to intercept a user behavior and make it smarter, they are making a judgment about what the user is actually trying to achieve. Sometimes that judgment is right. Sometimes it's a projection — the team's model of efficiency imposed on a user whose real goal is different.
A customer support tool that silently drafts responses faster is efficient by one definition. A support agent who needs to feel ownership over their words — who uses the drafting process to think through the problem — might experience the same feature as a threat rather than an assist. Efficiency is not a neutral variable. I don't have a clean answer to this one.
The case for invisibility, despite all of this
I recently read a piece by Laurel Burton in The Drum from December 2025 that put words to something I had been circling around for a while. Her argument was that AI fails not when it is inaccurate, but when it is noticeable — when it breaks the flow of an experience rather than extending it. 2026, she argued, should be the year AI becomes invisible.
I find myself agreeing with the direction, with one caveat: invisible is not the same as unexplained. The goal isn't to hide AI from users — it's to integrate it well enough that it stops feeling like a separate thing.
When GPS first appeared in cars, people were acutely aware they were using a GPS. Now it's just how you navigate. The technology didn't disappear — it matured into the background of the experience. That's the version of AI I think is most worth building toward.
An open question
The products I find most interesting right now are the ones asking a harder version of this question: not "how do we make AI visible enough to get credit for it?" but "how do we make it valuable enough that users notice its absence?"
I don't think there's a single right answer. The right approach depends on the user, the context, the business model, and frankly on what a given team is capable of building with confidence. But the instinct to make the AI the feature — to give it its own surface and its own moment — is worth questioning more often than it currently is.
The most compelling integrations I've come across are the ones where, when you ask a user what they think of the product, they don't mention AI at all. They just say it works.