After a few years of using both (see, for example, Optimizing SQLite for servers), I've found that SQLite particularly shines for internal services, or for public services where a small amount of downtime is tolerable.
So, I choose PostgreSQL (preferably with a managed provider) if the service needs (close to) 100% uptime, if the service needs more than 5 Gbps of bandwidth, or if the database is expected to grow larger than 200 GB. [...] Basically, all the situations where running on a single server is not possible.
It's important to note that with the advent of DuckDB, Parquet and Apache Iceberg, there are fewer and fewer reasons to stuff your main database with timeseries data. Instead, you can keep only some kind of materialized view locally and send the raw data to S3. Thus, there are fewer and fewer reasons for your main database to grow over 200 GB.
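To make the pattern concrete, here is a minimal sketch using only Python's stdlib `sqlite3`: the main database keeps an aggregated table while each raw event is appended to an archive file. The table name, event shape, and the local JSONL file (standing in for Parquet on S3) are all hypothetical; in practice you would write Parquet with DuckDB or a similar tool.

```python
import json
import pathlib
import sqlite3

# In-memory SQLite stands in for the main database; a local JSONL file
# stands in for raw timeseries data shipped to S3 (hypothetical setup).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE daily_requests (day TEXT PRIMARY KEY, total INTEGER)")

raw_events = [
    {"day": "2024-01-01", "requests": 120},
    {"day": "2024-01-01", "requests": 80},
    {"day": "2024-01-02", "requests": 300},
]

archive = pathlib.Path("raw_events.jsonl")
with archive.open("w") as f:
    for event in raw_events:
        # Ship the raw event out of the main database...
        f.write(json.dumps(event) + "\n")
        # ...and keep only the aggregate (a "materialized view") locally.
        db.execute(
            "INSERT INTO daily_requests VALUES (:day, :requests) "
            "ON CONFLICT(day) DO UPDATE SET total = total + :requests",
            event,
        )

totals = dict(db.execute("SELECT day, total FROM daily_requests ORDER BY day"))
print(totals)  # {'2024-01-01': 200, '2024-01-02': 300}
```

The main database stays small no matter how many raw events arrive, and the raw data remains queryable later (e.g. with DuckDB reading straight from S3) if you ever need to rebuild or extend the aggregates.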