
🤖 The AI Bubble Is About to Pop (And I'm Weirdly Excited About It)

Hey friend,

I need to tell you something that's going to sound absolutely insane.

(Especially coming from someone who literally writes an AI newsletter for a living and is supposed to be bullish on everything.)

Ready?

We're heading into a correction.

Not the cute kind where tech stocks dip 5% and everyone "buys the dip" while tweeting rocket emojis.

The messy kind. The "wait, that company with zero revenue was worth HOW much?" kind.

The dot-com-crash kind.

And honestly? I'm kind of excited about it.

(Stick with me here, I promise this makes sense.)

🎢 Why This Crash Is Different (And Why You Should Actually Be Bullish)

Look, MIT and Harvard researchers are already calling it. The warning signs are everywhere:

Companies valued at billions with no revenue. Massive infrastructure buildout with unclear ROI. User growth prioritised over actual profitability.

Sound familiar?

It should. We've watched this movie before. It ends with a lot of very smart people losing a lot of money while pretending they "saw it coming."

But here's the thing that's keeping me up at night in a good way:

The technology actually works this time.

That's the difference.

In the dot-com era, "internet strategy" meant slapping your logo on a Geocities page and calling it e-commerce. The infrastructure wasn't there. The use cases were theoretical. Most of it was just... vaporware with venture capital.

AI in 2026?

Bradesco Bank freed up 17% of employee capacity using AI agents. Not "planning to." Actually did it.

Companies using multimodal AI are reporting 29% faster project delivery. Real projects. Real delivery.

This isn't a pitch deck dream. This is working. Right now.

The correction isn't coming because AI doesn't work.

It's coming because Wall Street priced it like every company would become the next OpenAI overnight.

They won't.

Most will fail. The hype will deflate. Funding will dry up. Tech Twitter will declare "I told you AI wasn't ready" while conveniently deleting their old takes.

And that's when the real builders win.

Because while everyone else is panicking about valuations and pivoting to "AI-powered blockchain NFT metaverse" or whatever...

You'll be the one who actually knows how to deploy this stuff.

You'll have customers. Revenue. Proof it works.

You'll have survived the hype cycle.

That's the 2026 playbook I'm about to share with you.

Alright, let's get into the three forces that are actually reshaping everything...

🔥 Force #1: AI Agents Are Becoming as Boring as Websites (And That's Perfect)

Okay, hear me out on this one.

Remember when "having a website" made you cutting-edge?

Like in 1998, if your business had a URL, you were basically a visionary. People would go to conferences just to hear you talk about "your digital strategy."

Then by 2005, NOT having a website made you look like a technophobe living in a cave.

That exact transition is happening with AI agents in 2026.

And the speed of it is actually kind of terrifying.

Check out these numbers:

72% of companies are already testing AI agents. Not "thinking about it." Not "evaluating vendors." Actually testing them in production.

84% are planning to spend MORE money on this throughout 2026.

(Translation: If you're not building with agents yet, you're already behind. Sorry.)

But here's the part that should absolutely terrify traditional software companies...

AI agents don't respect the boundaries we spent 30 years building.

Think about it.

Salesforce built an empire on CRM. Took them two decades.

HubSpot on marketing automation. Years of development.

Asana on project management. Millions in engineering.

They created these neat little boxes where your data lives. And if you want those boxes to talk to each other? That'll be $50K in consulting fees and six months of "integration work."

AI agents just... ignore all of that.

They don't care about your org chart. They don't care about your software categories. They don't live in one app.

They orchestrate across everything.

Let me give you an example that made me go "oh shit" last week:

Customer service agent (one agent, doing all of this):

  • Reads support tickets across Zendesk, email, Slack, wherever

  • Checks order status in your Shopify store

  • Pulls customer history from Salesforce

  • Reviews company policy docs to decide if a refund is warranted

  • Writes a personalized response (not a template, an actual good response)

  • Processes the refund if approved

  • Updates all systems automatically

  • Logs everything for compliance

That used to require:

  • Three different software subscriptions ($400/month)

  • Two Zapier integrations ($50/month)

  • A consultant to set it up ($5,000 one-time)

  • Six months of implementation time

  • Training for your team

  • Ongoing maintenance

Now?

One agent. One afternoon to set up. $200/month to run.

(And if you're still paying for the old way, your competition is eating your lunch while spending 80% less.)

This isn't theory. This is happening right now.
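To make the orchestration pattern concrete, here's a minimal sketch in Python. Every helper below is a stand-in I made up for illustration; in a real deployment each one would be an API call (Zendesk, Shopify, Salesforce) that the agent invokes as a tool:

```python
# Minimal sketch of the one-agent orchestration pattern. All helpers are
# hypothetical stand-ins for real integrations, not actual vendor APIs.

def fetch_ticket(ticket_id):
    # Stand-in for reading a ticket from your support inbox.
    return {"id": ticket_id, "customer": "cust_42", "issue": "damaged item"}

def check_order(customer_id):
    # Stand-in for an order lookup in your store.
    return {"order_id": "A1001", "total": 39.99, "delivered": True}

def pull_history(customer_id):
    # Stand-in for a CRM lookup.
    return {"lifetime_orders": 7, "prior_refunds": 0}

def refund_allowed(issue, history):
    # Stand-in for "review company policy docs": refund damaged items
    # for customers without a pattern of prior refunds.
    return issue == "damaged item" and history["prior_refunds"] < 2

def handle_ticket(ticket_id):
    """One agent run: read ticket, gather context, decide, act, log."""
    ticket = fetch_ticket(ticket_id)
    order = check_order(ticket["customer"])
    history = pull_history(ticket["customer"])
    approved = refund_allowed(ticket["issue"], history)
    reply = (
        f"Hi! Sorry about the {ticket['issue']}. "
        + (f"We've refunded order {order['order_id']}."
           if approved else "We've escalated this to a human agent.")
    )
    return {"ticket": ticket_id, "refund": approved, "reply": reply}

result = handle_ticket("T-1")
```

The point isn't the toy logic; it's that one loop crosses every "box" (tickets, orders, CRM, policy) that used to be a separate product.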

Here's what this means for YOU:

The cost to deploy sophisticated automation has collapsed.

No-code platforms like n8n and Make. Open-source frameworks like LangChain. API-first architecture everywhere.

You don't need an engineering team anymore.

A solopreneur with the right skills can now compete with enterprise software companies.

Not "kind of compete." Actually compete. On features. On reliability. On deployment speed.

That's not hype. That's structural change in how software gets built.

(And if you're not taking advantage of this... what are you waiting for?)

Alright, that's force #1. Agents becoming infrastructure.

Ready for force #2?

This one's going to upset a lot of people who've been obsessing over "bigger is better" for the past three years...

🍕 Force #2: Small Models Are Eating Big Models' Lunch (Finally)

For three years, the AI race was dead simple:

Bigger = better.

GPT-4 had more parameters than GPT-3. Everyone celebrated.

Claude got bigger. Gemini got bigger. Every new model announcement was about scale.

"We trained on more data!" "Ours has more parameters!" "Ours is SO BIG it uses an entire data centre!"

Cool story.

In 2026, that narrative is dying.

And I'm watching it happen in real-time from my API bills.

Let me tell you what happened to me last month.

I had this agent that was using GPT-4 for literally everything. Including dumb stuff like:

  • Sorting my emails

  • Extracting data from invoices

  • Categorizing support tickets

  • Basically any task that needed a yes/no answer

My monthly API bill? $340.

(I know. I'm an idiot. But in my defence, I was being lazy.)

Then I switched all those simple tasks to a small language model.

New API bill? $47.

Same output. Same accuracy. 85% cheaper.

(I'm still mad at myself for not doing this six months earlier.)
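The fix was dead simple: route bounded tasks to the small model, reserve the frontier model for open-ended work. Here's a sketch of that routing logic; the model names and per-1K-token prices are illustrative, not real rates:

```python
# Sketch of the model-routing pattern: cheap, well-bounded tasks go to a
# small local model; open-ended tasks go to a frontier API. Prices and
# model names below are made-up placeholders for the arithmetic.

SMALL_MODEL_TASKS = {"classify", "extract", "sort", "yes_no"}

PRICE_PER_1K_TOKENS = {"small-local": 0.0002, "frontier-api": 0.01}

def route(task_type):
    """Pick a model tier based on the kind of task."""
    return "small-local" if task_type in SMALL_MODEL_TASKS else "frontier-api"

def estimated_cost(task_type, tokens):
    """Return (model, estimated dollar cost) for a task of `tokens` size."""
    model = route(task_type)
    return model, tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Sorting emails and categorising tickets are bounded -> small model.
# Drafting a nuanced reply is open-ended -> frontier model.
assert route("classify") == "small-local"
assert route("draft_reply") == "frontier-api"
```

At these illustrative rates the small model is 50x cheaper per token, which is exactly how a $340 bill turns into a $47 one.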

But here's what really got me:

It wasn't just about money.

See, when you use GPT-4 for everything, your data is constantly leaving your system and going to OpenAI's servers.

Which means:

  • You need an internet connection (no offline capability)

  • There's latency (every API call adds delay)

  • Your proprietary data is being transmitted (compliance nightmare)

  • GDPR becomes a headache. HIPAA becomes impossible.

Small language models?

They run on YOUR infrastructure. Your device. Your servers.

No cloud dependency. No data transmission. No compliance panic attacks.

And for most business tasks—classification, extraction, formatting, basic reasoning—they're just as good as GPT-4.

(Sometimes better, because you can fine-tune them specifically for your use case.)

Here's the opportunity most people are sleeping on:

You can take these small models and specialise them for your exact niche.

Legal document analysis? Fine-tune a small model for that.

Financial forecasting? Train it on your specific data patterns.

Customer service for HVAC companies? Create a model that understands HVAC terminology perfectly.

And once you do that?

You have something OpenAI can't touch.

They're optimising for breadth—being okay at everything.

You're optimising for depth—being exceptional at one thing.

That's a moat. Real defensibility. Higher margins.

While everyone else is paying per-token to OpenAI, you're running inference on your own infrastructure for literal pennies.
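The unglamorous first step of any fine-tune is just formatting your niche examples. Here's a sketch of turning domain Q&A (the HVAC records below are invented for illustration) into the JSONL prompt/completion format most fine-tuning pipelines expect:

```python
# Sketch: preparing fine-tuning data as JSONL (one JSON object per line).
# The HVAC Q&A records are hypothetical examples, not real training data.

import json

examples = [
    {"prompt": "Furnace short-cycles every 3 minutes. Likely cause?",
     "completion": "Often a dirty flame sensor or a clogged filter "
                   "tripping the limit switch."},
    {"prompt": "What does a SEER rating measure?",
     "completion": "Seasonal cooling output divided by energy consumed; "
                   "higher is more efficient."},
]

def to_jsonl(records):
    """Serialize records as one JSON object per line (JSONL)."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```

A few hundred examples like this, from your actual tickets and docs, is the raw material of that moat.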

The numbers backing this up are kind of insane:

The small language model market was $5.3 billion in 2024.

Projected to hit $26.7 billion by 2032.

That's 22% compound annual growth.

(For context: That's faster than the smartphone market grew during the iPhone boom.)

Why the explosion?

Because companies are finally doing the math:

"Wait, we're paying $50K/month to OpenAI for tasks that a $500 specialized model could handle?"

Yeah. That's the light bulb moment happening across the enterprise right now.

Real question for you: Are you still building everything on expensive GPT-4 API calls?

Or have you started exploring small models for your specific use cases?

Hit reply and tell me. I'm genuinely curious what's holding people back here.

(Because if it's "I don't know where to start," I should probably write about that next week.)

Alright, that's force #2. Small models are winning.

Ready for force #3? This one's about why text-only AI is about to feel as outdated as a flip phone...

🎨 Force #3: Multimodal Is No Longer Optional

Quick question: When's the last time you sent someone just a text? No screenshot, no voice note, no video?

Probably never, right?

So why are we building AI that only understands text?

Multimodal AI—systems processing text, images, audio, video—is becoming the baseline in 2026.

The impact is measurable:

  • Companies using multimodal: 29% faster project delivery

  • Companies stuck with text-only: 37% drop in accuracy

Why this matters:

Customer support example:

Text-only agent:

  • "It's broken" → "Can you describe what's broken?" → 30 minutes of back-and-forth

Multimodal agent:

  • Customer sends screenshot + voice note → Agent diagnoses immediately → Problem solved in 2 minutes

See the difference?

One format = limited understanding. All formats = actual intelligence.

And this tech isn't coming. It's here. GPT-4, Claude, and Gemini all do this now.
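In the OpenAI-style chat format, mixing formats is just a richer message payload: one user turn carries both the text and the image. A sketch with placeholder content (the ticket text and URL are made up):

```python
# Sketch of a multimodal request payload in the OpenAI-style chat format:
# one user message mixing text and an image reference. Content values are
# placeholders; the structure is the point.

def multimodal_message(text, image_url):
    """Build one user message combining text and an image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = multimodal_message(
    "Customer says 'it's broken' -- screenshot attached.",
    "https://example.com/screenshot.png",
)
```

You'd pass a message like this straight into a vision-capable model's chat endpoint; the 30 minutes of "can you describe what's broken?" collapses into one request.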

The gap isn't technology. Its execution.

If you're building text-only agents in 2026, you're already behind.

(And I say that as someone who was text-only until three months ago. I'm not judging. I'm warning.)

💰 Where the Money Is Actually Going

Let's talk numbers:

AI spending 2026: $2.02 trillion (up 36% from 2025)

Cool. Big number. Yawn.

But here's what matters:

Two-thirds of that spending is going to inference.

Not training new models. Running existing models on real data.

Why does this matter?

Training is a one-time cost. You train GPT-5 once. Expensive, but one-time.

Inference is recurring. Every ChatGPT query costs money. Forever.
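The arithmetic is worth seeing once. Here's a back-of-envelope sketch; every number is invented purely to show the shape of the curve:

```python
# Back-of-envelope illustration of why recurring inference dominates a
# one-time training cost. All figures are made up for the arithmetic.

TRAINING_COST = 100_000_000   # dollars, paid once
COST_PER_QUERY = 0.002        # dollars, paid on every single request
QUERIES_PER_DAY = 500_000_000

def inference_cost(days):
    """Cumulative inference spend after `days` of serving traffic."""
    return COST_PER_QUERY * QUERIES_PER_DAY * days

# Days until cumulative inference spend equals the whole training bill.
breakeven_days = TRAINING_COST / (COST_PER_QUERY * QUERIES_PER_DAY)
```

At these illustrative rates, serving costs match the entire training bill in about three months, and then keep compounding forever. That's why two-thirds of the spend is flowing to inference.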

This shift changes who wins:

 Winners:

  • Optimization platforms (making inference cheaper)

  • Edge computing (running locally)

  • Small models (85% performance, 15% cost)

 Losers:

  • Companies burning cash on training with no revenue

  • "We're building the next GPT" startups

Translation: If you're building tools that make inference cheaper? You're positioned perfectly.

If you're pitching "we're training a foundation model"? Good luck with that Series B.

The correction is coming.

Expect 30-50% valuation drops in 2026-2027.

But unlike dot-com, the underlying tech delivers real value. It's just not happening at the pace Wall Street priced in.

If you're building real products that save real money? You'll be fine.

If you're chasing valuations and user growth? You're in trouble.

🎯 So What Do You Actually Do Now?

Alright, we've covered a lot.

But here's what I actually want to know:

What are YOU building right now?

Are you:

  • Still just consuming AI content? (Window's closing)

  • Planning to build? (What's the timeline?)

  • Actually building for customers? (Tell me about it)

  • Stuck in analysis paralysis? (Let's unblock you)

And more importantly:

What's holding you back?

Technical skills? Finding customers? Positioning? Confidence? Something else?

I'm asking because most people consuming AI content aren't building anything.

They're just learning. Researching. Planning.

But 2026 is when the window starts closing.

Builders who ship in Q1 2026 will have a 6-12 month advantage.

Not because tech will change. Because they'll have customer feedback, production experience, and proof.

While everyone else is still "researching the best approach."

My ask: Hit reply. Tell me where you're at. One sentence is fine.

  • "Building [X] for [Y]"

  • "Stuck on [Z]"

  • "Still figuring it out"

I read every response. Your answers shape what I write next week.

So... what's your 2026 AI play?

Not subscribed yet? Hit the button below.

Know someone who'd find this useful? Share it with them. They'll thank you when they get their time back.
