366 private links
An example of an entry point into documentation
- A simple queue.json
- Batching with group commit
- Use a brokered group commit to eliminate contention over the queue object
- HA brokered group commit to handle unfinished jobs or a dying broker machine
Similarly to the job system we built at work, it guarantees at-least-once delivery.
I don't know if the pattern becomes too complex to be viable.
1. Event-Driven Architecture (EDA)
Problems solved:
- timeouts when a service is slow
- one service down blocks the whole chain
- unpredictable response times
Pitfalls to avoid:
- Event explosion
- Debugging hell
- Poorly handled eventual consistency
- Transactional consistency
2. API-First & API Gateway pattern
API-First: design the API before implementing the service
API Gateway: a single entry point that orchestrates, secures, and monitors the APIs (and Backend for Frontend)
Pitfalls to avoid:
- fewer than 5 APIs and a single frontend
- internal-only communication
- latency-critical paths
3. CQRS + Event Sourcing
Command Query Responsibility Segregation: separate the read and write models; two different databases, each optimized for its usage.
Event Sourcing: instead of storing the current state, store every event. The current state is rebuilt by replaying the events.
Use cases: performance, audit and compliance, real-time analytics
Pitfalls to avoid: underestimated complexity, eventual consistency, schema migration management
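An event store can be sketched as a minimal append-only table in plain SQL; the table and column names here are illustrative, not from the source:

```sql
-- Hypothetical event store: writes only ever append, never update
CREATE TABLE events (
    id          BIGSERIAL PRIMARY KEY,
    stream_id   UUID NOT NULL,           -- e.g. one stream per aggregate
    event_type  TEXT NOT NULL,
    payload     JSONB NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

INSERT INTO events (stream_id, event_type, payload)
VALUES ('6b1f1f9e-0c1a-4d8a-9b2e-3f4a5d6c7e8f'::uuid,
        'ItemAddedToCart', '{"sku": "A42", "qty": 2}');

-- Current state is rebuilt by replaying the stream in order
SELECT event_type, payload
FROM events
WHERE stream_id = '6b1f1f9e-0c1a-4d8a-9b2e-3f4a5d6c7e8f'::uuid
ORDER BY id;
```

The read side would typically project these events into a separate, query-optimized store.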
4. Saga Pattern
Here's the thing: 99% of companies don't need them. The top 1% have tens of millions of users and a large engineering team to match.
The fun thing about Postgres is that there is often already an extension or built-in feature for that: PostGIS, full-text search, JSONB, TimescaleDB, pgvector, and many more for AI.
Each additional database adds hidden costs: backup strategy, monitoring dashboards, security patches, on-call runbooks, failover testing.
SLA math: three systems at 99.9% uptime each ≈ 99.7% combined (0.999³ ≈ 0.997).
Why? Costs, operational complexity, data consistency
- Caching with UNLOGGED table (that I've also found in other posts)
- Pub/Sub with LISTEN/NOTIFY
- Job Queues with SKIP LOCKED
- Rate Limiting is also possible
- Sessions with JSONB
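As a sketch of the job-queue item above (schema is mine, not from the source), SKIP LOCKED lets several workers dequeue concurrently without blocking each other:

```sql
CREATE TABLE jobs (
    id       BIGSERIAL PRIMARY KEY,
    payload  JSONB NOT NULL,
    done     BOOLEAN NOT NULL DEFAULT false
);

-- Each worker atomically claims one pending job;
-- rows locked by other workers are skipped, not waited on.
UPDATE jobs SET done = true
WHERE id = (
    SELECT id FROM jobs
    WHERE NOT done
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
```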
PostgreSQL is nearly 2x slower than Redis here, but is that speed difference worth running a separate system?
When to keep Redis?
- extreme performance needed
- using redis-specific data structures
Migration strategy:
- Side by side
- Read from Postgres
- Write to Postgres only
- Remove Redis
Prisma, for example, provides TypedSQL for smooth integration with TypeScript.
When messages carry their routing across nodes, the pattern can be useful.
The permission system should handle folders and files.
Strategies:
- (naive) read-time permission queries
- A simple table (RBAC, role-based access control).
-- RBAC: Pre-computed permissions
-- access_type: 'owner' (full control), 'shared' (read only), 'path_only' (visible but no access)
CREATE TABLE permissions (
    user_id     INTEGER NOT NULL,
    resource_id INTEGER NOT NULL,
    access_type TEXT NOT NULL,
    PRIMARY KEY (user_id, resource_id),
    FOREIGN KEY (user_id) REFERENCES users(id),
    FOREIGN KEY (resource_id) REFERENCES resources(id)
);
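With the pre-computed table above, both single-resource checks and listings stay plain filters (the user and resource ids are illustrative):

```sql
-- "Can user 42 access resource 7?"
SELECT access_type
FROM permissions
WHERE user_id = 42 AND resource_id = 7;

-- "List everything user 42 can see": a simple join,
-- no per-row policy evaluation needed
SELECT r.*, p.access_type
FROM resources r
JOIN permissions p ON p.resource_id = r.id
WHERE p.user_id = 42;
```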
- Attribute-Based Access Control
This approach is very clear and composable. It works great for single-resource access checks: "can user X access resource Y?" It struggles when listing resources, as we would need to execute the policies for each resource and can't directly query the resources table with simple filters.
- Zanzibar and ReBAC
Generic software design: It’s “designing to the problem”: the kind of advice you give when you have a reasonable understanding of the domain, but very little knowledge of the existing codebase. [...] When you’re doing real work, concrete factors dominate generic factors.
In large codebases:
- consistency is more important than "good design". Read Mistakes engineers make in large established codebases.
- Real codebases are typically full of complex, hard-to-predict consequences.
- Large shared codebases never reflect a single design, but are always in some intermediate state between different software designs. How the codebase will hang together after an individual change is thus way more important than the "north star"
The majority of software engineering work is done on systems that cannot be safely rewritten.
- Generic software design advice is useful for building brand-new projects.
- Generic software design advice is useful for tie-breaking concrete design decisions.
- Generic software design principles can also guide company-wide architectural decisions.
3 rules to it:
- the fastest code is the code you don't run.
- the smaller the code, the less there is to go wrong.
- the less you run, the smaller the attack surface.
Rewriting to native is also an option: Zed, from Atom's creators, is about 10x faster.
The COSMIC desktop looks like GNOME, works like GNOME Shell, but it's smaller and faster and more customisable because it's native Rust code. GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.
Just like the dotcoms wanted to disintermediate business, removing middlemen and distributors for faster sales, we could use disintermediation in our software: fewer runtimes, better and smarter compiled languages, so we can trap more errors and get faster, safer native code.
Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem to remove whole types of API. Everything's in a container all the time, the filesystem abstracts the network and the GUI and more. Under 10% of the syscalls of Linux, the kernel is 5MB of source, and yet it has much of Kubernetes in there. This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.
Your deployment strategy
What you need:
• A single server
• Postgres
• Maybe Redis for caching
• That's it
Building up means starting with a tiny core and gradually adding functionality. Sanding down means starting with a very rough idea and refining it over time.
There are a few rules I try to follow when building up:
- Focus on atomic building blocks that are easily composable and testable.
- Build up powerful guarantees from simple, verifiable properties.
- Focus on correctness, not performance.
- Write the documentation along with the code to test your reasoning.
- Nail the abstractions before moving on to the next layer.
The alternative approach, which I found to work equally well, is “sanding down.” In this approach, you start with a rough prototype (or vertical slice) and refine it over time. You “sand down” the rough edges over and over again, until you are happy with the result. [...] I find that this approach works well when working on creative projects which require experimentation and quick iteration.
When using this approach, I try to follow these rules:
- Switch off your inner perfectionist.
- Don’t edit while writing the first draft.
- Code duplication is strictly allowed.
- Refactor, refactor, refactor.
- Defer testing until after the first draft is done.
- Focus on the outermost API first; nail that, then polish the internals.
Both variants can lead to correct, maintainable, and efficient systems. There is no better or worse approach. However, it helps to be familiar with both approaches and know when to apply which mode. Choose wisely, because switching between the two approaches is quite tricky as you start from different ends of the problem
The term was coined to describe a deliberate process where you write software quickly to gain knowledge.
"If you discover a better way to do things, the old way of doing it that is embedded in your code base is now “debt”:
- you can either live with the debt, “paying interest” in the form of all the ways that it makes your code harder to work with;
- or you can “pay down” the debt by fixing all the code in light of your new knowledge, which takes up front resources which could have been spent on something else, but hopefully will make sense in the long term.
Technical debt isn’t just bad code—it represents the lessons you’ve learned about how to build software better. Refactoring isn’t “paying off a debt,” but investing in applying that knowledge. Ignoring it wastes what you’ve learned and, over time, leads to lost value and competitive disadvantage compared to those who actively improve their code.
Can I really say “we now know” that the existing code is inferior? Is it true that fixing the code is “investing my knowledge”?
A collection of antipatterns and how to avoid them
The more you want to calculate at query time, the more you want views, calculated columns and stored or user routines. The more you want to calculate at normalized base update time, the more you want cascades and triggers. The more you want to calculate at some other (scheduled or ad hoc) time, the more you use snapshots aka materialized views and updated denormalized bases. You can combine these. Any time the database is accessed it can be enabled by and restricted by stored routines or other api.
Until you can show that they are inadequate, views and calculated columns are the simplest.
The whole idea of a DBMS is to store a representation of your application state as the database (which normalization reduces the redundancy of) and then you query and let the DBMS implement and optimize calculation of the answer. You haven't presented a reason for not doing that in the most straightforward way possible.
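The trade-off above can be sketched with a hypothetical orders table (names are mine): a view recomputes at query time, a materialized view at refresh time:

```sql
-- Calculated at query time: always fresh, costs on every read
CREATE VIEW order_totals AS
SELECT customer_id, sum(amount) AS total
FROM orders
GROUP BY customer_id;

-- Calculated at a scheduled or ad hoc time: cheap reads, stale until refreshed
CREATE MATERIALIZED VIEW order_totals_snapshot AS
SELECT customer_id, sum(amount) AS total
FROM orders
GROUP BY customer_id;

REFRESH MATERIALIZED VIEW order_totals_snapshot;
```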
Building software feature by feature through iteration is a long-term mistake.
Carrying over this approach past the learning phase was a mistake.
It is possible to dramatically cut the amount of bugs you introduce in the first place, if you focus on optimizing that (and not just the iteration time)
One superpower: bugs can be found just by reading the code.
The key is careful, slow reading. What you actually are doing is building the mental model of a program inside your head.
If you are reviewing a PR, don’t review just the diff, review the entire subsystem.
Follow the control flow or stare at the state
Native library plugins are hard in Rust: the compiler offers no guarantees about the memory representation of structs, so these structs need to be made FFI-friendly behind unsafe extern "C". There is no sandboxing, so malicious code could compromise the machine or silently corrupt memory.
Finally, dynamic library plugins are distributed as compiled code. It's easier to hide backdoors in compiled code, and it's also harder to share than simple scripts.
There are two main ways to embed JavaScript in a Rust program. The first one is with bindings for the lightweight QuickJS engine, such as rquickjs. Take a look at AWS' LLRT (Low Latency Runtime) for an advanced integration of QuickJS in Rust.
Oh, and did I mention that QuickJS is lightweight? Around 210 KiB of code versus around 40 MiB for V8, all while being fast enough for most situations.
My vision of programming is to limit ourselves to two programming languages. A very powerful, secure and fast compiled language for the lower levels of the computing stack, Rust, and a less powerful and slower scripting language for high-level scripting and user interfaces, JavaScript.
Another alternative is WASM: the provided code is already compiled, but the author judges the ecosystem too immature.
The last method is the least powerful: an expression engine. It allows specifying a small language with some rules; it is not Turing-complete, and the result always evaluates to an expression.
So to rank them:
- scripting language
- expression engine
- WASM
- native libraries, as a last resort
I agree: if we want to build complex products, we have to hide implementation details. That means, for example, that at some level the type or structure should abstract away the empty value for me: 0, null, undefined, false, NULL, etc.
I have the same idea for a Node.js backend serving a fancy UI :)
It would be better to split the UI and the server while developing to benefit from hot reloading.
Example: https://git.sr.ht/~pyrossh/rust-embed/tree/master/item/examples/axum-spa/main.rs
How We Structure It – No Guesswork Needed:
- One frontend framework (Astro) for performance and design flexibility
- One headless CMS (DatoCMS) structured for multi-brand, multi-region content
- One hosting platform (Vercel) that scales automatically
- One monorepo for all brands and sites
- One dev team managing the whole thing