StrategyBrief #005

The Supabase Default Trap

Everyone is defaulting to Supabase for AI apps. Convenience might be the wrong reason to choose your database.

Druhin Mukherjee · Apr 26, 2026 · 3 min read



Most builders today don’t choose a database.

They inherit it.

→ “Just use Supabase”
→ “Firebase but open source”
→ “It works with AI agents”

And suddenly, your architecture is decided.


The Signal

Supabase is becoming the default database for AI-first apps.

Not because it’s always the best choice.

Because it’s the easiest starting point.


What’s Actually Happening

We’ve shifted from:

“What database should I use?”

to

“What works fastest with my AI stack?”

Supabase wins here because:

  • Postgres under the hood (trusted, scalable)
  • Built-in auth, storage, APIs
  • Easy integration with tools like:
    • LangChain
    • Vercel AI SDK
    • OpenAI embeddings

It’s not just a database anymore.
It’s a backend-in-a-box.


Why AI builders love it

  • You can store embeddings in Postgres (via pgvector)
  • You get instant APIs
  • You don’t think about infra

This reduces time-to-first-product drastically.
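What pgvector gives you inside Postgres is, conceptually, nearest-neighbour search over stored vectors. A toy in-memory sketch of that idea (plain Python, no Supabase or pgvector required; the documents and vectors are made up for illustration):

```python
import math

# Toy stand-in for what pgvector does inside Postgres: store an embedding
# vector per row, then rank rows by similarity to a query vector.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In Supabase this would be a table with a vector column.
documents = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.0, 1.0, 0.1],
    "doc-3": [0.8, 0.2, 0.1],
}

def nearest(query: list[float], k: int = 2) -> list[str]:
    # Brute-force scan: compare the query against every stored vector.
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(query, documents[d]),
        reverse=True,
    )
    return ranked[:k]

print(nearest([1.0, 0.0, 0.0]))  # → ['doc-1', 'doc-3']
```

Note the shape of the problem: every query touches every row unless you add an index. That linear scan is exactly the cost curve that bites later in this piece.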


But here’s the tradeoff

Convenience is shaping architecture decisions.

Not requirements.


Where this breaks

Supabase (or any default stack) starts to hurt when:

1. Your workload isn’t relational

AI apps often deal with:

  • vectors
  • documents
  • graphs

Postgres can handle this.

But it’s not always optimal.


2. You scale unpredictably

AI workloads are spiky:

  • burst queries
  • heavy reads
  • embedding lookups

You may hit performance ceilings faster than expected.


3. You mix too many concerns

Supabase becomes:

  • DB
  • auth
  • API layer
  • storage

Tight coupling = harder to evolve later.


Alternatives (that people ignore)

Depending on your use case:

  • Vector DBs

    • Pinecone
    • Weaviate
    • Qdrant
  • Document DBs

    • MongoDB
  • Hybrid

    • Postgres + external vector store

The “right” stack is often composed, not default.
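One cheap way to keep a composed stack open is to put retrieval behind an interface from day one. A minimal sketch, assuming nothing about your actual schema (the `VectorStore` protocol, the class names, and the data are all mine; a pgvector-backed or Pinecone-backed class would satisfy the same interface):

```python
from typing import Protocol

class VectorStore(Protocol):
    """What application code is allowed to know about retrieval."""
    def upsert(self, doc_id: str, vector: list[float]) -> None: ...
    def query(self, vector: list[float], k: int) -> list[str]: ...

class InMemoryStore:
    """Stand-in backend. Swap in a PgVectorStore or PineconeStore later
    without touching the code that calls retrieve()."""
    def __init__(self) -> None:
        self._rows: dict[str, list[float]] = {}

    def upsert(self, doc_id: str, vector: list[float]) -> None:
        self._rows[doc_id] = vector

    def query(self, vector: list[float], k: int) -> list[str]:
        def dist(d: str) -> float:  # squared L2 distance to the query
            return sum((a - b) ** 2 for a, b in zip(vector, self._rows[d]))
        return sorted(self._rows, key=dist)[:k]

def retrieve(store: VectorStore, query_vec: list[float]) -> list[str]:
    # Depends only on the interface, not on the vendor behind it.
    return store.query(query_vec, k=1)

store = InMemoryStore()
store.upsert("a", [0.0, 0.0])
store.upsert("b", [1.0, 1.0])
print(retrieve(store, [0.1, 0.0]))  # → ['a']
```

The point isn't the ten lines of code. It's that the migration you may need at scale becomes a new class, not a rewrite.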


Why It Matters

We’re entering a phase where:

The fastest way to build is not always the best way to scale.

Early decisions in AI apps:

  • data structure
  • storage model
  • retrieval strategy

These become hard constraints later.


Most founders aren’t choosing Supabase.

They’re avoiding thinking about the problem.


What I’d Do

Start with speed. But design for separation.

Simple framework:

1. Use Supabase for:

  • auth
  • basic relational data
  • rapid prototyping

2. Externalize early if needed:

  • vector search → dedicated DB
  • heavy reads → caching layer

3. Ask this before committing:

“What will break first if this scales 10x?”

If the answer is unclear → you’re deferring a problem.


Mental model

Defaults are for getting started.
Architecture is for staying alive.


This is the same pattern I see in product thinking.

Founders often solve the wrong layer of the problem.

I wrote about this earlier in retention — where most teams try to fix churn through marketing, when the real issue is product design.

Retention Is a Game Design Problem


If this made you rethink your stack, share it with someone building an AI product.

That’s how The Brief grows.

— Druhin


One idea every week. Free.

Strategy and business through the lens of gaming — for founders who want to think differently.