> TCL test harness. C SQLite's test suite is driven by ~90,000+ lines of TCL scripts deeply intertwined with the C API. These cannot be meaningfully ported. Instead, FrankenSQLite uses native Rust #[test] modules, proptest for property-based testing, a conformance harness comparing SQL output against C SQLite golden files, and asupersync's lab reactor for deterministic concurrency tests.
If you're not running against the SQLite test suite, then you haven't written a viable SQLite replacement.
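The quoted approach boils down to differential testing with C SQLite as the oracle, plus golden-file comparison. As a purely illustrative, stdlib-only sketch of that idea (Python's built-in `sqlite3` module wraps C SQLite; a sorted Python list serves as the reference oracle — none of this code is from the FrankenSQLite repo):

```python
import random
import sqlite3

def check_order_by_property(num_cases: int = 100) -> None:
    """Property-style differential test: for random integer inputs,
    SQLite's ORDER BY must agree with Python's sorted() oracle."""
    rng = random.Random(42)  # fixed seed for reproducibility
    for _ in range(num_cases):
        vals = [rng.randint(-1000, 1000) for _ in range(20)]
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t(x INTEGER)")
        conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in vals])
        got = [row[0] for row in conn.execute("SELECT x FROM t ORDER BY x")]
        assert got == sorted(vals), f"mismatch: {got} != {sorted(vals)}"
        conn.close()

check_order_by_property()
```

A real conformance harness would run the same script against the replacement engine and diff the two result sets, but the oracle-comparison shape is the same.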
The TH3 test suite is proprietary, but the TCL test suite that they refer to is public domain.
I'm not sure where they get their 90k CLOC count, though; that seems like it might be an LLM-induced hallucination, given the rest of the project. The public-domain TCL test suite is ~27k CLOC, and the proprietary suite is 1055k CLOC.
This kind of slop spewing into Github feels like the modern equivalent of toxic plumes coming from smoke stacks.
Utterly unmaintainable by any human, likely never to be completed or used, but now deposited into the atmosphere for future trained AI models and humans alike to stumble across and ingest, degrading the environment for everyone around it.
Even though it looks like LLM slop, we are starting to see big projects being translated and refactored with LLMs. It reminds me of the 2023 AI video era: if the pattern holds, error rates will keep dropping until the approach becomes economically viable.
Love the "race" demo on the site, but very curious about how you approached building this. Appreciated the markdown docs for the insight on the prompt, spec, etc
If you can't tell this is LLM slop then I don't really know what to tell you. What gave it away for me was the RaptorQ nonsense & conformance w/ standard sqlite file format. If you actually read the code you'll notice all sorts of half complete implementations of whatever is promised in the marketing materials: https://github.com/Taufiqkemall2/frankensqlite/blob/main/cra...
If you bothered to do any research at all, you'd know the author is an extreme, frontier, avant-garde, eccentric LLM user, and I say that as an LLM enthusiast.
Thanks. Next time I'll do more research on what counts for LLM code artwork before commenting on an incomplete implementation w/ all sorts of logically inconsistent requirements. All I can really do at this point is humbly ask for your & their avant-garde forgiveness b/c I won't make the same mistake again & that's a real promise you can take to the crypto bank.
Great! But note I haven’t said that you should be doing the research. This was more of a warning about today, but it also was a different kind of warning about the next 12-18 months once models catch up to what this guy wants to do with them.
Thank you for your wisdom. I'll make a note & make sure to follow up on this later b/c you obviously know much more about the future than a humble plebeian like myself.
The value of SQLite is how robust it is and that’s because of the rigorous test suite.
> and the proprietary suite is 1055k CLOC.
Why is the code size of the proprietary test suite even public though?
https://github.com/Dicklesworthstone/frankensqlite#current-i...
Although I will admit that even after reading it, I'm not exactly sure what the current implementation status is.
Reed-Solomon over GF(256) is more than adequate. Or just plain LDPC.
[1] <https://www.jeffreyemanuel.com/writing/raptorq>
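To make the parent's point concrete, here is a hypothetical, stdlib-only Python sketch of "RS over GF(256)" in its simplest two-parity (RAID-6-style P/Q) form, recovering two erased data shards. All names and the shard layout are illustrative; nothing here is from the FrankenSQLite repo:

```python
# GF(2^8) arithmetic with the usual Reed-Solomon reducing polynomial 0x11D.
def gmul(a: int, b: int) -> int:
    """Carry-less multiply in GF(2^8), reduced mod x^8+x^4+x^3+x^2+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gpow(a: int, n: int) -> int:
    """Exponentiation by squaring in GF(2^8)."""
    r = 1
    while n:
        if n & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        n >>= 1
    return r

def ginv(a: int) -> int:
    """Multiplicative inverse: a^254, since a^255 == 1 for nonzero a."""
    return gpow(a, 254)

def encode(data: list[int]) -> tuple[int, int]:
    """Two parity bytes: P = XOR of shards, Q = sum of 2^i * d_i in GF(256)."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gmul(gpow(2, i), d)
    return p, q

def recover_two(shards: list, x: int, y: int, p: int, q: int) -> tuple[int, int]:
    """Recover the two erased shards at indices x and y (shards[x] is None).
    Solves the 2x2 linear system left over after folding in the survivors."""
    pp, qq = p, q
    for i, d in enumerate(shards):
        if d is not None:
            pp ^= d
            qq ^= gmul(gpow(2, i), d)
    gx, gy = gpow(2, x), gpow(2, y)
    dx = gmul(gmul(gy, pp) ^ qq, ginv(gx ^ gy))
    return dx, pp ^ dx

# Demo: lose shards 1 and 3, recover both from P and Q.
data = [0x11, 0x22, 0x33, 0x44]
p, q = encode(data)
print(recover_two([0x11, None, 0x33, None], 1, 3, p, q))  # recovers (0x22, 0x44)
```

A production RS codec (or RaptorQ, for that matter) generalizes this to arbitrary parity counts with generator or Vandermonde matrices, but the field arithmetic is exactly this.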
MIT plus a condition designating OpenAI and Anthropic as restricted parties not permitted to use it... or else what?
Impressive piece of work from the AIs here.
A better question is if the implementation was touched by anything other than generative AI.