Seeds data automatically for databases. Its marketing claims it is more automated than fakerJS.
Being written in Rust, it ships as a single binary (lightweight compared to a JS runtime and more cache-efficient for Docker layers)
(via https://www.reddit.com/r/rust/comments/1r1emah/rewrote_my_nodejs_data_generator_in_rust_20x/, which seems to be AI-generated)
There are multiple examples of database schemas
A great summary of database news in 2025
PostgreSQL is the state-of-the-art database, with support for a wide range of features.
If 2023 was the year every DBMS added a vector index, then 2025 was the year that every DBMS added support for Anthropic's Model Context Protocol (MCP)
New file formats emerged. Why not use Parquet?
The main problem with Parquet is not inherent in the format itself. The specification can and has evolved. Nobody expected organizations to rewrite petabytes of legacy files to update them to the latest Parquet version. The problem is that there are so many implementations of reader/writer libraries in different languages, each supporting a distinct subset of the specification. Our analysis of Parquet files in the wild found that 94% of them use only v1 features from 2013, even though their creation timestamps are after 2020.
It may be useful later on if the need arises.
A P2P database
The project: https://github.com/kustomzone/sierra-db
The project is on GitHub: https://github.com/gobackup/gobackup
The more you want to calculate at query time, the more you want views, calculated columns and stored or user routines. The more you want to calculate at normalized base update time, the more you want cascades and triggers. The more you want to calculate at some other (scheduled or ad hoc) time, the more you use snapshots aka materialized views and updated denormalized bases. You can combine these. Any time the database is accessed it can be enabled by and restricted by stored routines or other api.
Until you can show that they are inadequate, views and calculated columns are the simplest.
The whole idea of a DBMS is to store a representation of your application state as the database (which normalization reduces the redundancy of) and then you query and let the DBMS implement and optimize calculation of the answer. You haven't presented a reason for not doing that in the most straightforward way possible.
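A rough PostgreSQL sketch of those options (schema and names invented for illustration): a plain view recomputes at query time, a trigger maintains a denormalized total at update time, and a materialized view is a snapshot refreshed at some other time.

```sql
-- Hypothetical base table: orders(id, customer_id, amount, created_at)

-- Compute at query time: a plain view, always up to date, re-evaluated per query.
CREATE VIEW customer_totals AS
SELECT customer_id, SUM(amount) AS total_spent
FROM orders
GROUP BY customer_id;

-- Compute at update time: a trigger keeping a denormalized total current.
CREATE TABLE customer_stats (
  customer_id BIGINT PRIMARY KEY,
  total_spent NUMERIC NOT NULL DEFAULT 0
);

CREATE FUNCTION add_order_amount() RETURNS trigger AS $$
BEGIN
  INSERT INTO customer_stats (customer_id, total_spent)
  VALUES (NEW.customer_id, NEW.amount)
  ON CONFLICT (customer_id)
  DO UPDATE SET total_spent = customer_stats.total_spent + EXCLUDED.total_spent;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION add_order_amount();

-- Compute at some other time: a materialized view, a stored snapshot
-- that must be refreshed explicitly (e.g. from a scheduled job).
CREATE MATERIALIZED VIEW customer_totals_snapshot AS
SELECT customer_id, SUM(amount) AS total_spent
FROM orders
GROUP BY customer_id;

REFRESH MATERIALIZED VIEW customer_totals_snapshot;
```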
Great for many use cases.
Numeric IDs take up a lot less space though. ULIDs are a bit long, which is inconvenient for URLs, and sometimes it's undesirable to expose when an ID was created.
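A rough PostgreSQL sketch of the trade-off (hypothetical tables): a bigint identity is 8 bytes and naturally sortable, while a ULID is 128 bits, 16 bytes when stored in a uuid column and 26 characters in its text form, with the creation timestamp encoded in its leading bits.

```sql
-- Hypothetical: compact sequential numeric key, 8 bytes on disk.
CREATE TABLE events_numeric (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  payload TEXT
);

-- Hypothetical: ULID generated by the application, stored in a 16-byte
-- uuid column (ULIDs are 128-bit, so they fit). The 26-character Crockford
-- base32 text form is what ends up in URLs, and its leading bits reveal
-- when the ID was created.
CREATE TABLE events_ulid (
  id UUID PRIMARY KEY,
  payload TEXT
);
```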
They released a UI to manipulate the data too.
Can one have a project with a relational database that is deployed early and often, and not have thousands of SQL migration scripts? Seems like it’s difficult to have both. Maybe there’s some way to “roll up” old migration scripts into one nice SQL schema. I guess running them all on a new database and exporting the schema will do that. 🤔
Prisma provides a "baseline" to reset and merge all migration scripts. There will always be many migrations, though.
Jon Gjengset makes great tools. The idea of Noria is to compute reads in advance when an update occurs, which leads to faster reads.
At a high level, Noria takes a set of parameterized SQL queries (think prepared statements), and produces a data-flow program that maintains materialized views for the output of those queries.
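To make that concrete, here is a hypothetical parameterized query of the kind such a dataflow could keep materialized: instead of recomputing the aggregate on every read, the engine updates the stored result incrementally whenever a matching write arrives (table names are made up).

```sql
-- Hypothetical prepared statement: vote count per story.
-- A dataflow engine can keep the GROUP BY result materialized and
-- touch only the affected row whenever a new vote is inserted.
SELECT story_id, COUNT(*) AS votes
FROM votes
WHERE story_id = ?
GROUP BY story_id;
```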
and how to batch DELETEs to optimize them
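One common batching pattern in PostgreSQL looks like the sketch below (table and retention rule invented for illustration): delete bounded chunks in a loop so each statement stays small and releases its locks quickly.

```sql
-- Hypothetical: purge old rows from a large table in chunks of 10,000.
-- Run repeatedly (e.g. from an application loop or a cron job)
-- until it reports 0 rows deleted.
DELETE FROM audit_log
WHERE id IN (
  SELECT id
  FROM audit_log
  WHERE created_at < now() - interval '90 days'
  ORDER BY id
  LIMIT 10000
);
```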
That's very interesting