Two blog posts landed on the Hacker News front page within hours of each other this week, making the exact opposite case about the same question: can Postgres replace your search infrastructure?
The first, a comprehensive guide published on Anyblockers titled "Postgres as a Search Engine," walks through full-text search, trigram indexes, and semantic vector search using pgvector. It argues that for most applications, Postgres already ships everything you need. The second, published on CodeReliant under the title "Why 'Just Use Postgres' Is Not Always Good Advice," pushes back with the kind of specificity that suggests the author has been burned. Together the posts have generated hundreds of comments and exposed an architectural fault line that's been rumbling through engineering teams for years.
If your startup is paying for Elasticsearch and you have fewer than 50 engineers, this debate is about you.
What the "Just Use Postgres" Camp Is Actually Saying
The Anyblockers post isn't the oversimplification its critics want it to be. It methodically covers three distinct search capabilities available in modern Postgres: one built in, one shipped as a contrib extension, and one provided by the third-party pgvector extension.
First: tsvector and tsquery, Postgres's built-in full-text search. You create a search index on your text columns, and Postgres handles tokenization, stemming, and ranking. For a bookstore app where customers search by title or author, this gets you surprisingly far. The post demonstrates how to pair it with GIN indexes for performance that, according to benchmarks cited in the piece, holds steady through millions of rows.
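A minimal sketch of that setup, assuming a hypothetical `books(title, author)` table (the column and index names here are illustrative, and generated columns require Postgres 12+):

```sql
-- Keep a tsvector in sync with the text columns via a generated column.
ALTER TABLE books
  ADD COLUMN search_vec tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(author, ''))
  ) STORED;

-- A GIN index keeps @@ queries fast as the table grows.
CREATE INDEX books_search_idx ON books USING GIN (search_vec);

-- Search with ranking; websearch_to_tsquery accepts Google-style input.
SELECT title, author,
       ts_rank(search_vec, websearch_to_tsquery('english', 'night circus')) AS rank
FROM books
WHERE search_vec @@ websearch_to_tsquery('english', 'night circus')
ORDER BY rank DESC
LIMIT 10;
```

The generated column means you never have to remember to update the search vector in application code; Postgres recomputes it on every write.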
Second: trigram indexes using pg_trgm. This is where fuzzy matching lives: misspelled author names, partial product codes, the kind of messy real-world queries that break exact-match systems. The extension ships with every major Postgres distribution as part of contrib, though it still needs a CREATE EXTENSION to enable.
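Against the same hypothetical `books` table, a trigram index and similarity query might look like this (the misspelled name is deliberate; the threshold behavior is pg_trgm's default):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Index the author column for trigram matching.
CREATE INDEX books_author_trgm_idx ON books USING GIN (author gin_trgm_ops);

-- similarity() returns 0..1; the % operator filters by the configurable
-- pg_trgm.similarity_threshold (0.3 by default).
SELECT title, author, similarity(author, 'Gabriel Garcia Markez') AS score
FROM books
WHERE author % 'Gabriel Garcia Markez'
ORDER BY score DESC
LIMIT 5;
```

The query above would still match "Gabriel García Márquez" despite the typos, which is exactly the failure mode that breaks a plain equality or LIKE search.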
Third, and this is where it gets interesting: semantic search via pgvector. Store embeddings alongside your relational data. Query by meaning, not just keywords. A customer searching "something to read on a long flight" could surface results even if no book description contains those exact words.
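A sketch of the pgvector side, assuming pgvector is installed and your application computes embeddings externally (the 1536 dimension matches common embedding models but is an assumption, not a requirement):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Store one embedding per book, populated by the application.
ALTER TABLE books ADD COLUMN embedding vector(1536);

-- An HNSW index (pgvector 0.5+) makes nearest-neighbor queries fast.
CREATE INDEX books_embedding_idx ON books
  USING hnsw (embedding vector_cosine_ops);

-- $1 is the query embedding your application computed for
-- "something to read on a long flight". <=> is cosine distance.
SELECT title, author
FROM books
ORDER BY embedding <=> $1
LIMIT 10;
```

The key architectural point is the first line of the post's argument: the embeddings live in the same database as the relational data, so a semantic search can be joined, filtered, and access-controlled with ordinary SQL.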
The argument isn't that Postgres beats Elasticsearch at search. It's that the gap is small enough, for most workloads, that maintaining a separate search cluster isn't worth the operational overhead.
The Rebuttal Doesn't Dismiss Postgres; It Respects It
The CodeReliant post opens by acknowledging that Postgres is remarkable. Then it builds a case that treating it as a universal hammer creates problems invisible until your system is under real load.
Operational complexity comes first. Yes, Elasticsearch is expensive to run and maintain. But so is pushing Postgres beyond its design center. When your full-text search queries start competing with transactional writes for the same connection pool, you've traded one operational headache for another, and the new one sits closer to your most critical data.
Then there's the scaling story. Postgres scales vertically. You can throw bigger hardware at it for a long time, and for many startups that's genuinely enough. But Elasticsearch was built to scale horizontally from day one. When your search index swells to hundreds of gigabytes and your query patterns involve faceted aggregation across dozens of fields, Postgres strains in ways that vertical scaling alone can't fix.
The post also raises a subtler point about feature depth. Elasticsearch ships with analyzers for dozens of languages, built-in synonym handling, percolation queries, and a scoring system tuned across millions of production deployments. Postgres's full-text search is good. Elasticsearch's is its entire reason for existing.
As one Hacker News commenter put it: "Postgres can do search the way a Swiss Army knife can cut bread. It works. But if you're opening a bakery, buy a bread knife."
The Real Question Neither Post Names
Here's what both posts circle around without quite saying directly: this isn't a technology debate. It's a team-size debate.
I've watched this pattern play out across half a dozen startups during my time in developer experience roles. A three-person engineering team adopts Elasticsearch because that's what the architecture blog posts recommend. They spend 20% of their time keeping the cluster healthy. The sync pipeline between Postgres and Elasticsearch breaks on a Saturday. Nobody notices until Monday. Customer search results go stale for 48 hours.
That team would have been better off with Postgres and pg_trgm.
But I've also seen a 40-person team try to make Postgres handle product search for an e-commerce platform with 11 million SKUs, faceted filtering across five languages, and real-time inventory weighting in search results. They spent six months building custom infrastructure that Elasticsearch provides out of the box. Then they migrated to Elasticsearch anyway.
That team would have been better off starting with dedicated search.
The decision matrix isn't about which technology is superior. It hinges on three questions: How many engineers do you have to maintain the infrastructure? How complex are your search requirements right now, not on your roadmap but right now? And what's your data volume today?
What the Hacker News Threads Reveal
The comment sections on both posts are, honestly, more instructive than either article alone. A few patterns stand out.
Engineers at early-stage startups overwhelmingly sided with the Postgres-only approach. The phrase "one fewer thing to deploy" appeared in some variation at least a dozen times. Several commenters shared stories of ripping out Elasticsearch and moving to Postgres, reporting simpler operations with acceptable search quality.
Engineers at larger companies or those working on search-heavy products leaned the other way. Multiple commenters from e-commerce and content platforms described scenarios where Postgres full-text search degraded under load in ways that proved difficult to diagnose.
The most upvoted comment across both threads came from an engineer who wrote: "The correct advice is 'just use Postgres until you can't, and you'll know when you can't.'"
That's probably right. But it masks real complexity. The transition from "Postgres is enough" to "Postgres is no longer enough" often happens gradually, and by the time you notice, you're migrating under pressure rather than by design.
What This Means for Your Stack
If you're a startup founder or early engineer reading this, here's the practical takeaway.
Start with Postgres. Use tsvector for basic search. Add pg_trgm when you need fuzzy matching. Explore pgvector if semantic search matters to your users. You'll cover 80% of search use cases without adding a single new service to your infrastructure.
But set tripwires. Monitor search query latency as its own metric. Track how long search-related queries hold connections. If search queries start P99-ing above 500ms while your transactional queries stay fast, that's not a Postgres problem; it's a signal that search is outgrowing its host.
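One way to wire such a tripwire inside Postgres itself is pg_stat_statements, sketched here under the assumption that your search queries touch a recognizable column name (the filter below is hypothetical; note that pg_stat_statements reports means and extremes, not percentiles, so true P99 tracking belongs in application-side metrics):

```sql
-- Requires pg_stat_statements in shared_preload_libraries.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Surface the slowest search-shaped queries (column names per PG 13+).
SELECT query, calls,
       mean_exec_time,
       stddev_exec_time,
       max_exec_time
FROM pg_stat_statements
WHERE query ILIKE '%search_vec%'   -- hypothetical marker for search queries
ORDER BY mean_exec_time DESC
LIMIT 10;
```

Watching this view over time tells you whether search latency is drifting up independently of your transactional workload, which is exactly the divergence the tripwire is meant to catch.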
And when you do migrate, don't try to move everything at once. Run Elasticsearch for search, keep Postgres as your source of truth, and build the sync pipeline with care. The sync layer is where every hybrid architecture lives or dies.
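A common starting point for that sync layer is a trigger that notifies an external indexing worker; this is a sketch against the same hypothetical `books` table, with an invented channel name. LISTEN/NOTIFY delivery is at-most-once (notifications are lost if no listener is connected), so production pipelines typically add an outbox table or use logical replication instead:

```sql
-- Notify an external worker whenever a book row changes so it can
-- re-index that row in Elasticsearch.
CREATE OR REPLACE FUNCTION notify_book_change() RETURNS trigger AS $$
DECLARE
  changed_id text;
BEGIN
  -- OLD is the only row available on DELETE; NEW on INSERT/UPDATE.
  IF TG_OP = 'DELETE' THEN
    changed_id := OLD.id::text;
  ELSE
    changed_id := NEW.id::text;
  END IF;
  PERFORM pg_notify('book_changes', changed_id);
  RETURN NULL;  -- AFTER triggers ignore the return value
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER books_sync_trigger
AFTER INSERT OR UPDATE OR DELETE ON books
FOR EACH ROW EXECUTE FUNCTION notify_book_change();
```

Whatever mechanism you choose, the property to preserve is the one the paragraph above names: Postgres remains the source of truth, and Elasticsearch is always rebuildable from it.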
This debate will resurface in six months. It always does. The tools will be slightly better on both sides, and the right answer will still be the same: it depends on your team, your data, and your users. The worst choice is the one you make based on a blog post title instead of your own measurements.