Postgres’s extensible index AM story doesn’t get enough love, so it’s nice to see someone really lean into it for LIKE. Biscuit is basically saying: “what if we precompute an aggressive amount of bitmap structure (forward/backward char positions, case-insensitive variants, length buckets) so most wildcard patterns become a handful of bitmap ops instead of a heap scan or bitmap heap recheck?” That’s a very different design point from pg_trgm, which optimizes more for fuzzy-ish matching and general text search than for “I run a ton of LIKE '%foo%bar%' on the same columns”.
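To make that concrete, here's a toy Python sketch of the general per-(character, offset) bitmap technique — my own illustration of the approach, not Biscuit's actual on-disk layout (which compresses the bitmaps and adds case-insensitive variants and length buckets on top):

```python
# Toy per-(char, offset) position bitmaps. A prefix pattern becomes one
# intersection per character; a '%needle%' pattern tries every start offset.
from collections import defaultdict

def build_index(rows):
    # bitmap[(ch, off)] = set of row ids whose value has ch at offset off
    bitmap = defaultdict(set)
    for rid, text in enumerate(rows):
        for off, ch in enumerate(text):
            bitmap[(ch, off)].add(rid)
    return bitmap

def candidates_prefix(bitmap, pattern):
    # Rows matching LIKE 'pattern%': intersect one bitmap per character.
    result = None
    for off, ch in enumerate(pattern):
        ids = bitmap.get((ch, off), set())
        result = ids if result is None else result & ids
        if not result:
            break
    return result or set()

def candidates_substring(bitmap, needle, max_len):
    # Rows matching LIKE '%needle%': try every start offset, union the hits.
    out = set()
    for start in range(max_len):
        hit = None
        for i, ch in enumerate(needle):
            ids = bitmap.get((ch, start + i), set())
            hit = ids if hit is None else hit & ids
            if not hit:
                break
        if hit:
            out |= hit
    return out

rows = ["foobar", "barfoo", "baz"]
idx = build_index(rows)
print(candidates_prefix(idx, "foo"))        # {0}
print(candidates_substring(idx, "foo", 6))  # {0, 1}
```

With compressed bitmaps (roaring or similar), each of those set intersections is a cheap bitwise AND, which is presumably where the speedups come from.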
The interesting question in prod is always the other side of that trade: write amplification and index bloat. The docs are pretty up-front that write performance and concurrency haven’t been deeply characterized yet, and they even have a section on when you should stick with pg_trgm or plain B-trees instead. If they can show that Biscuit stays sane under a steady stream of updates on moderately long text fields, it’ll be a really compelling option for the common “poor man’s search” use case where you don’t want to drag in an external search engine but ILIKE '%foo%' is killing your box.
Wouldn't tsvector, tsquery, ts_rank, etc. be Postgres's "poor man's search" solution? With language-aware stemming, they don't need to write to indexes as aggressively as you describe Biscuit doing above.
But if you really need to optimize LIKE instead of providing plain text search, sure.
Looks very interesting. I really like trigram indexes for certain use cases, and those are essentially running an ILIKE '%something%' on various text content in the DB. So that would fit the described limitations of this index type very well.
Usually you're quickly steered towards full-text search (tsvector) in Postgres if you want to do something like that. But depending on what kind of search you actually need, trigram indexes can be a better option. If you're not searching so much for natural language but more for specific keywords, the stemming in full-text search can get in the way.
One piece of information that would be nice here is a comparison of the on-disk index size for both index types.
This is a fairly simple idea: index the characters at each column/offset and compress the bitmaps. Simple is good, as the overhead of more sophisticated ideas (e.g. suffix sorting) is often prohibitive.
One suggestion is to index the end-of-string as a character as well; then you don't need negative offsets. But that turns the suffix search into a wildcard type of thing where you have to try all offsets, which is what the '%pat%' searches do already, so maybe it's OK.
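A minimal sketch of that sentinel idea, reusing the same per-(char, offset) bitmap machinery — my own toy illustration, with an assumed sentinel character that never occurs in the data:

```python
# End-of-string sentinel: append a marker so a suffix search ('%pat')
# becomes a plain substring search for pat + SENTINEL, tried at every
# offset -- the same loop a '%pat%' search already needs.
from collections import defaultdict

SENTINEL = "\x00"  # assumed never to appear in the indexed text

def build(rows):
    bm = defaultdict(set)
    for rid, text in enumerate(rows):
        for off, ch in enumerate(text + SENTINEL):
            bm[(ch, off)].add(rid)
    return bm

def match_at_any_offset(bm, needle, max_len):
    # try all start offsets, union the surviving row-id sets
    out = set()
    for start in range(max_len):
        hit = None
        for i, ch in enumerate(needle):
            ids = bm.get((ch, start + i), set())
            hit = ids if hit is None else hit & ids
            if not hit:
                break
        if hit:
            out |= hit
    return out

rows = ["foobar", "barfoo", "bar"]
bm = build(rows)
# LIKE '%bar' == substring search for "bar" + SENTINEL
print(match_at_any_offset(bm, "bar" + SENTINEL, 7))  # {0, 2}
```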
AFAIK the most common design for these kinds of systems is using trigram posting lists with position information, i.e., where in the string does the trigram occur. (It's the extra position information that means that you don't need to re-check the string itself.) No need for many different bitmaps; you just take an existing GIN-like design, remove deduplication and add some side information.
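A toy version of that positional-posting-list design, to show why the position side information removes the recheck (illustrative only; a real GIN-like design would compress these lists):

```python
# Positional trigram postings: trigram -> {(doc_id, position)}. A query
# intersects postings shifted by each trigram's offset in the needle, so
# every trigram must occur at consistent relative positions -- no recheck
# of the original string is needed.
from collections import defaultdict

def trigrams(s):
    return [(i, s[i:i + 3]) for i in range(len(s) - 2)]

def build(docs):
    post = defaultdict(set)
    for did, text in enumerate(docs):
        for pos, tg in trigrams(text):
            post[tg].add((did, pos))
    return post

def search(post, needle):
    tgs = trigrams(needle)
    if not tgs:
        return set()
    # normalize each posting to the implied match start position
    first_off, first_tg = tgs[0]
    starts = {(d, p - first_off) for d, p in post.get(first_tg, ())}
    for off, tg in tgs[1:]:
        starts &= {(d, p - off) for d, p in post.get(tg, ())}
    return {d for d, _ in starts}

docs = ["foobar", "barfoo", "foxbar"]
idx = build(docs)
print(search(idx, "ooba"))  # {0}: only "foobar" contains "ooba"
```

Plain pg_trgm keeps only the deduplicated trigram set, which is why it needs a recheck pass against the heap.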
How good is the Postgres ecosystem at stating when these kinds of things are ready for adoption? I can think of a use case at work where this might be useful, but I'm hesitant to just start throwing random open-source extensions at our monolith DB.
In my experience, you wait for the next two major PG releases. If an extension is actively maintained, support for new versions comes fast; if not, that's how you can tell it's abandoned…
UPD
> Biscuit is 15.0× faster than B-tree (median) and 5.6× faster than Trigram (median)
> Trade-off: 3.2× larger index than Trigram, but 5.6× faster queries (median)
https://x.com/lemire/status/2000944944832504025
Take the string "Foobario 451" and search it with the string "Foo 4". Is this too much complexity for trigrams? Would Biscuit work for this?