Copilot Has Clippy Vibes

(And Microsoft Knows It)

Translation: Nobody is using Copilot.

Not “nobody” as in zero humans on Earth.
“Nobody” as in not enough people to justify how hard Microsoft is pushing it.

And when a trillion-dollar company starts forcing its AI onto your laptop, your Outlook, your browser, and now your TV, that’s usually a sign that organic adoption didn’t go the way the PowerPoint promised.

Because if Copilot were amazing, like really amazing… Microsoft wouldn’t have to staple it to everything.


If Copilot were amazing, Microsoft wouldn’t have to staple it to everything

Here’s a rule of thumb that works shockingly well in tech:

If a product is genuinely indispensable, you don’t need to force it on people.

Slack didn’t need to be pinned to your operating system.
Zoom didn’t need to auto-install on your television.
ChatGPT didn’t need to be welded into your toaster.

People chose them.

They told their coworkers.
They installed them voluntarily.
They asked for them.

Copilot, on the other hand, keeps appearing whether you want it or not.

It’s baked into Windows.
Injected into Microsoft 365.
Stapled into Edge.
And now, somehow, forced onto LG televisions via firmware updates.

Yes. Televisions.

That’s not confidence.
That’s distribution panic.

When a company believes people want a product, it markets it.
When a company isn’t sure, it bundles it.
When a company is nervous, it forces it.

We’re very clearly in phase three.


Microsoft bet big on “AI everywhere”… then reality showed up

On paper, Microsoft did everything right.

They invested early in OpenAI.
They got early access to the most advanced models.
They launched Bing Chat before most competitors had anything coherent.
They rebranded everything as Copilot.
They baked AI pricing into licenses companies already pay for.

From a strategy standpoint, it looked brilliant.

This was supposed to be a slam dunk.

The thinking was simple:
“If AI is inside the tools people already use, adoption will be automatic.”

But that assumption turned out to be wrong.

Reports suggest internal sales targets were quietly lowered. Messaging shifted. The tone changed. Suddenly, the story wasn’t “This will change how everyone works.” It became “Adoption takes time.” Or “Customers are still discovering the value.”

That’s corporate-speak for one thing:

We thought this would sell itself. It didn’t.

And that’s the part Microsoft doesn’t want to talk about publicly.


The dirty secret: Copilot isn’t bad. It’s just not necessary

Here’s the uncomfortable nuance that gets lost in the shouting:

Copilot doesn’t suck.

It’s just… fine.

It can summarize emails.
It can draft documents.
It can rewrite text in a different tone.
It can answer basic questions about files you already have access to.

Sometimes it’s impressive.
Sometimes it’s wrong.
Sometimes it’s helpful.

But it’s rarely indispensable.

And “fine” is a problem when the product:

  • costs real money
  • touches sensitive data
  • raises permission questions
  • requires training
  • introduces governance risk

Turning on Copilot isn’t like installing a calculator app.

It means explaining to leadership why an AI can now read internal documents.
It means revisiting file permissions that haven’t been audited in years.
It means teaching users not to trust generated output blindly.
It means answering uncomfortable compliance questions.

That’s a lot of friction for something that mostly saves a few minutes here and there.

So users do what users always do when value is unclear.

They ignore it.

They close the pane.
They stop clicking the icon.
They forget it exists.

And most people never look back.


“Enabled” is not the same thing as “used”

This is where AI metrics get creative.

Copilot might be:

  • enabled
  • licensed
  • visible

But that doesn’t mean it’s actually being used.

There’s a massive gap between:
“I technically have access to this”
and
“I rely on this every day”

IT departments know this gap well. So do vendors, even if they don’t love talking about it.
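The gap is easy to make concrete with a toy calculation. The numbers below are purely hypothetical, for illustration only, not real Copilot figures: a vendor can report thousands of “enabled” seats while only a sliver of them show up as weekly active users.

```python
def adoption_rate(licensed_seats: int, weekly_active_users: int) -> float:
    """Fraction of licensed ("enabled") seats that translate into weekly active use."""
    if licensed_seats == 0:
        return 0.0
    return weekly_active_users / licensed_seats

# Hypothetical numbers for illustration only -- not real Copilot data.
# 10,000 seats licensed and visible; 900 people actually use it in a given week.
print(adoption_rate(10_000, 900))  # a 9% active-use rate behind a "100% enabled" headline
```

That’s the whole trick: “enabled” counts the denominator, usage counts the numerator, and a press release only needs the first one.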

If usage were truly strong, Microsoft wouldn’t be working so hard to keep Copilot in front of your face.

Which brings us to the TV.


When people won’t use your AI, you stop asking nicely

Recent updates on certain LG smart TVs quietly added a Microsoft Copilot tile to the home screen.

Users didn’t download it.
They didn’t opt in.
In many cases, they can’t remove it.

It just appeared.

Sitting next to Netflix.
Like it belonged there.

This is not what confident adoption looks like.

This is what happens when distribution becomes the strategy.

And to be clear, this isn’t really LG’s fault. Smart TVs have been drifting in this direction for years with ads, tracking, sponsored placements, and “recommended” features. Copilot is just the next thing bolted on.

But it perfectly illustrates the broader problem.

When AI shows up in places people didn’t ask for it, trust evaporates fast.


This is how people learn to hate AI

Most people don’t hate AI because it’s scary.

They hate it because it’s imposed.

AI backlash doesn’t start with fear.
It starts with irritation.

It starts when:

  • features install themselves
  • updates add things you didn’t request
  • you can’t remove them
  • the value isn’t obvious

At that point, users don’t explore.

They disable.
They hide.
They unplug.
They keep the device offline.

Smart TVs are already on thin ice with consumers. Adding a forced AI assistant just teaches people one lesson: don’t trust updates.

That’s not a great outcome for a technology that depends on trust.


The Clippy problem (and why it matters)

Here’s where the comparison to Clippy stops being a joke.

Clippy wasn’t hated because it was useless.
It was hated because it was intrusive.

It popped up uninvited.
It interrupted workflows.
It assumed it knew what you wanted.
And it refused to go away.

Sound familiar?

Copilot isn’t Clippy in terms of capability… it’s far more powerful. But from a user experience perspective, the echoes are there.

When something keeps showing up without being asked for, users don’t think “helpful assistant.”

They think “how do I make this stop?”

That’s a dangerous association for any product, especially one positioned as the future of work.


The real irony Microsoft should worry about

The people who could benefit most from Copilot are often the fastest to tune it out.

Executives buried in meetings.
Analysts summarizing long documents.
Teams drowning in internal communication.

These are the exact users Copilot was built for.

And yet, they’re often the first to say, “I tried it once” and never open it again.

Why?

Because they’ve been through this cycle before.

They recognize the signs of a feature being pushed because it has to be, not because it’s loved. They know the difference between a tool that quietly becomes essential and one that constantly reminds you it exists.

Copilot, right now, is doing a lot of reminding.


AI doesn’t fail because it’s bad. It fails because it’s everywhere

This is the core mistake vendors keep making.

AI works best when:

  • it’s optional
  • it’s targeted
  • it solves one annoying problem extremely well

It fails when:

  • it’s everywhere
  • it’s unavoidable
  • it’s vaguely helpful

Copilot is trying to be the answer to everything. That makes it easy to ignore.

Ironically, the most successful AI tools today do less, not more. They focus on one job. They earn trust. They let users opt in.

Copilot is trying to skip that step.


Copilot isn’t doomed. It’s early, whether Microsoft admits it or not

None of this means Copilot is dead.

It isn’t.

It will improve.
It will get more accurate.
It will integrate better with real workflows.
Some teams will eventually rely on it heavily.

But pretending it’s already indispensable doesn’t make it so.

Right now, Copilot feels less like the future of productivity and more like a feature desperately trying to justify its seat at the table.

And the harder it’s pushed, the more obvious that becomes.


The bottom line

The story Microsoft wants to tell is:

“This is how everyone will work.”

The story users are actually living is:

“I keep seeing this thing I didn’t ask for.”

Those are very different narratives.

Copilot doesn’t need to be everywhere.
It needs to be worth choosing.

Until that happens, forcing it into more devices won’t fix the problem… it’ll just create the next Clippy.

And users can tell the difference.