A great tool to generate an ERD. It can be generated from an SQL schema, a DB connection, JSON, …
IndexedDB can be used to store a lot of data. It has some caveats though.
Storage:
About deletion: use soft delete to smooth out synchronization when one user deletes a record while another updates it.
About record collections: use unique IDs (UUID v4) or property-derived IDs (UUID v5).
About ordering: it is easier to use fractional indices (see the sketch below)! Read more on https://www.figma.com/blog/realtime-editing-of-ordered-sequences/, or https://www.steveruiz.me/posts/reordering-fractional-indices, or use a dedicated library.
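A minimal TypeScript sketch of the three points above (soft delete, UUID v4 ids, fractional ordering). The naive numeric orderBetween helper is my own illustration rather than anything from the linked articles; a real app should use a dedicated fractional-indexing library.

```ts
interface Todo {
  id: string;               // UUID v4: globally unique across clients
  title: string;
  order: number;            // fractional index: sort by this field
  deletedAt: string | null; // soft delete: keep the row, mark it deleted
}

// New index between two neighbours (undefined means "at the start/end").
function orderBetween(before?: number, after?: number): number {
  if (before === undefined && after === undefined) return 1;
  if (before === undefined) return after! - 1;
  if (after === undefined) return before + 1;
  return (before + after) / 2; // naive: floats eventually run out of precision
}

const a: Todo = { id: crypto.randomUUID(), title: "first", order: orderBetween(), deletedAt: null };
const b: Todo = { id: crypto.randomUUID(), title: "last", order: orderBetween(a.order), deletedAt: null };
// Insert between a and b without renumbering anything else:
const c: Todo = { id: crypto.randomUUID(), title: "middle", order: orderBetween(a.order, b.order), deletedAt: null };

// Render order: filter out soft-deleted records and sort by the fractional index.
console.log(
  [b, c, a].filter(t => t.deletedAt === null).sort((x, y) => x.order - y.order).map(t => t.title),
); // -> [ 'first', 'middle', 'last' ]
```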
Sync is done with pull and push.
Update:
- Sending atomic changes from a client is the most convenient way: we can send only the model’s ID and its updated fields.
- Send operations instead of changed data.
Conflict resolution:
- In some cases, last-write-wins at the record-field level is enough (see the sketch after this list);
- in others, a full-fledged CRDT is really needed.
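A minimal sketch of field-level last-write-wins, in the spirit of the first bullet above. The record shape and the per-field timestamp map are my own assumptions, not taken from any particular sync library.

```ts
type FieldTimestamps = Record<string, number>; // per-field "last modified" clock

interface VersionedRecord {
  id: string;
  fields: Record<string, unknown>;
  updatedAt: FieldTimestamps;
}

// Merge an incoming change into the local copy, field by field:
// for each field, the newest write wins, regardless of who sent it.
function mergeLww(local: VersionedRecord, incoming: VersionedRecord): VersionedRecord {
  const merged: VersionedRecord = {
    id: local.id,
    fields: { ...local.fields },
    updatedAt: { ...local.updatedAt },
  };
  for (const [field, ts] of Object.entries(incoming.updatedAt)) {
    if (ts > (merged.updatedAt[field] ?? 0)) {
      merged.fields[field] = incoming.fields[field];
      merged.updatedAt[field] = ts;
    }
  }
  return merged;
}
```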
A simple-to-use local JSON database. Query it with the native JavaScript API. Written in TypeScript.
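The note does not name the library, so here is only a generic sketch of the idea: a local JSON file queried with native JavaScript array methods (file name and record shape invented).

```ts
import { readFileSync, writeFileSync } from "node:fs";
import { randomUUID } from "node:crypto";

interface Post { id: string; title: string; views: number; }

const file = "db.json";
const posts: Post[] = JSON.parse(readFileSync(file, "utf8")) as Post[];

// "Queries" are just native array operations:
const popular = posts.filter(p => p.views > 100).sort((a, b) => b.views - a.views);
console.log(popular.map(p => p.title));

// Writes rewrite the whole file, which is fine for small local datasets.
posts.push({ id: randomUUID(), title: "hello", views: 0 });
writeFileSync(file, JSON.stringify(posts, null, 2));
```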
Good points! I have use cases where a frontend database approach will work perfectly!
SQLite used not relationally but as a key/value store (see the sketch after the list below).
- Key-value database: It is a non-relational database that stores data as a collection of key-value pairs, where the key is used as a unique identifier. KVDB has simple operations and robust scalability.
- Time Series Database: a time series database (TSDB) stores data ordered by time. Each data point has a timestamp; TSDBs are built for large data volumes and high insertion and query performance.
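A generic sketch of the SQLite-as-key/value idea from the note above; it assumes the better-sqlite3 Node driver and is not the code of any specific project.

```ts
import Database from "better-sqlite3";

const db = new Database("kv.db");
db.exec("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT NOT NULL)");

// Upsert and lookup are the only two operations the store needs.
const put = db.prepare(
  "INSERT INTO kv (key, value) VALUES (?, ?) ON CONFLICT(key) DO UPDATE SET value = excluded.value",
);
const get = db.prepare("SELECT value FROM kv WHERE key = ?");

put.run("user:42", JSON.stringify({ name: "Ada" }));
const row = get.get("user:42") as { value: string } | undefined;
console.log(row && JSON.parse(row.value)); // -> { name: 'Ada' }
```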
Maybe useful someday
Much like QEMU, new industry trends are taking SQLite in a completely new direction: the rise of edge compute, with its limited resources and constrained environments, means that SQLite fits the bill perfectly.
It lists some distributed data projects based on SQLite. The most promising seems to be LiteFS: https://github.com/superfly/litefs. However, SQLite is not open to contribution...
That's why the author started a new project open to contribution, named LibSQL (https://libsql.org/). It could be merged back into SQLite if they change their code of conduct. They want to add async interfaces, for example, or WASM support.
About schemas, tables, collections, keys and columns; unique, primary and foreign keys (illustrated below).
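A hypothetical schema showing a primary key, a unique key and a foreign key together (assuming the better-sqlite3 Node driver; table and column names are invented).

```ts
import Database from "better-sqlite3";

const db = new Database("shop.db");
db.pragma("foreign_keys = ON"); // SQLite does not enforce foreign keys by default

db.exec(`
  CREATE TABLE IF NOT EXISTS customers (
    id    INTEGER PRIMARY KEY,   -- primary key: unique identifier of the row
    email TEXT NOT NULL UNIQUE   -- unique key: no two customers share an email
  );
  CREATE TABLE IF NOT EXISTS orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id)  -- foreign key to customers
  );
`);
```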
However, SQLite continued to improve and eventually introduced the write-ahead log (WAL) journaling mode and even the wal2 journaling mode. These provide significantly better support for concurrent readers.
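For reference, switching a database to WAL mode is a one-liner; a small sketch assuming the better-sqlite3 Node driver.

```ts
import Database from "better-sqlite3";

const db = new Database("app.db");
db.pragma("journal_mode = WAL"); // persistent setting, stored in the database file
console.log(db.pragma("journal_mode", { simple: true })); // -> "wal"
```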
SQL table expressions are somewhat similar to functions in a regular programming language — they reduce the overall complexity.
You can write an unreadable sheet of code, or you can break the code into understandable individual functions and compose a program out of them.
You can build a tower of nested subqueries, or you can extract them into CTEs and reference them from the main query.

There is a myth that “CTEs are slow”. It came from old versions of PostgreSQL (11 and earlier), which always materialized CTEs: they calculated the full result of a table expression and stored it until the end of the query.
OK. There are some rules of thumb:
- a CTE runs on every request,
- a CTE splits the query code into multiple chunks,
- instead of a subquery, always use a CTE for clarity (example below).
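As an illustration of the last rule, the same (invented) query written as a nested subquery and then with CTEs; the SQL is shown as plain string constants, and table and column names are hypothetical.

```ts
// A tower of nested subqueries:
const nested = `
  SELECT users.name, totals.total
  FROM (
    SELECT user_id, SUM(amount) AS total
    FROM orders
    GROUP BY user_id
  ) AS totals
  JOIN users ON users.id = totals.user_id
  WHERE totals.total > 100;
`;

// The same query with each step extracted into a named CTE, like a small function:
const withCtes = `
  WITH totals AS (
    SELECT user_id, SUM(amount) AS total
    FROM orders
    GROUP BY user_id
  ),
  big_spenders AS (
    SELECT user_id, total FROM totals WHERE total > 100
  )
  SELECT users.name, big_spenders.total
  FROM big_spenders
  JOIN users ON users.id = big_spenders.user_id;
`;

export { nested, withCtes };
```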
15x write speed :D
How?
With Queued Writes rqlite itself can now queue a set of received write-requests, internally batch them, and then write that batch to the Raft log as a single log entry. This is key — by putting more data in a single Raft log, all of those previously distinct requests now result in a single Raft transaction, reducing the number of network round trips to a minimum.
Implementation of the queue: https://github.com/rqlite/rqlite/tree/v7.5.0/queue
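A generic TypeScript sketch of the batching pattern described above. This is not rqlite's actual implementation (that is the Go code linked above); it only shows the idea of buffering incoming write requests and flushing them as one batch.

```ts
type Flush = (batch: string[]) => Promise<void>;

class WriteQueue {
  private pending: string[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flush: Flush,   // sends the whole batch as a single "log entry" / request
    private maxBatch = 128,
    private maxWaitMs = 50,
  ) {}

  enqueue(stmt: string): void {
    this.pending.push(stmt);
    if (this.pending.length >= this.maxBatch) {
      void this.drain(); // flush immediately when the batch is full
    } else if (!this.timer) {
      this.timer = setTimeout(() => void this.drain(), this.maxWaitMs); // or after a short wait
    }
  }

  private async drain(): Promise<void> {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    const batch = this.pending;
    this.pending = [];
    if (batch.length > 0) await this.flush(batch);
  }
}

// Usage: many enqueue() calls end up as a single flushed batch.
const queue = new WriteQueue(async batch => console.log(`flushing ${batch.length} statements`));
queue.enqueue("INSERT INTO events (msg) VALUES ('hello')");
queue.enqueue("INSERT INTO events (msg) VALUES ('world')");
```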
Creating virtual columns in SQLite can greatly improve SQL queries (see the sketch below).
However, I don't think that storing JSON as-is is a good idea...
But yes, it is almost a NoSQL database ツ
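A sketch of the idea, assuming the better-sqlite3 Node driver; the events table and its JSON shape are invented for illustration. The virtual column is extracted from the JSON blob and indexed, so queries no longer call json_extract() on every row.

```ts
import Database from "better-sqlite3";

const db = new Database("docs.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS events (
    id   INTEGER PRIMARY KEY,
    body TEXT NOT NULL,  -- raw JSON document
    kind TEXT GENERATED ALWAYS AS (json_extract(body, '$.kind')) VIRTUAL
  );
  CREATE INDEX IF NOT EXISTS idx_events_kind ON events(kind);
`);

db.prepare("INSERT INTO events (body) VALUES (?)")
  .run(JSON.stringify({ kind: "click", x: 10, y: 20 }));

// The virtual column reads like a normal column and the query can use the index:
const clicks = db.prepare("SELECT id, body FROM events WHERE kind = ?").all("click");
console.log(clicks);
```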
A visual database with spreadsheet-like representations.
And that’s the characteristic problem with the normalized approach: In exchange for the simplicity of working exclusively with normalized data, you have to write queries that don’t scale.
With denormalization, there is so much to think about, so many edge cases that need to be handled!
How to trace the origin of a query in a database.
NocoDB turns a MySQL, PostgreSQL, SQL Server, SQLite or MariaDB server into an interactive spreadsheet.
The git repository: https://github.com/nocodb/nocodb-seed
Store a timestamp instead of a boolean: it carries extra information that can be useful later. When the value would be false, store NULL instead (see the sketch below).
From HollandaisVolant (https://lehollandaisvolant.net/?mode=links&id=20210424171603).
On the other hand, maybe it's just me, but I have a hard time with timestamps. I prefer the YMDHIS format (YYYYMMDDHHIISS).
That format takes 14 characters (i.e. 14*8 = 112 bits), whereas a timestamp surely takes fewer.
For display, it can then be formatted however you want.
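A sketch of the timestamp-instead-of-boolean tip above, assuming the better-sqlite3 Node driver; the users table is invented for illustration.

```ts
import Database from "better-sqlite3";

const db = new Database("users.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS users (
    id           INTEGER PRIMARY KEY,
    email        TEXT NOT NULL UNIQUE,
    confirmed_at TEXT  -- NULL = "false"; otherwise an ISO-8601 timestamp = "true, and when"
  );
`);

db.prepare("INSERT INTO users (email) VALUES (?)").run("ada@example.org");

// Flipping the "boolean" to true records the moment it happened:
db.prepare("UPDATE users SET confirmed_at = ? WHERE email = ?")
  .run(new Date().toISOString(), "ada@example.org");

// The boolean-style query still works:
const confirmed = db.prepare("SELECT email FROM users WHERE confirmed_at IS NOT NULL").all();
console.log(confirmed);
```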
A database from the developer's point of view:
As a developer you can think of your database as a single giant, structured global variable with a weird access method, and to make things worse, concurrent access.
(via sebsauvage)