There are three rules to it:
- the fastest code is the code you don't run.
- the smaller the code, the less there is to go wrong.
- the less you run, the smaller the attack surface.
Rewriting in native code is also an option: Zed, the native successor to Atom, is about 10x faster.
The COSMIC desktop looks like GNOME and works like GNOME Shell, but it's smaller, faster, and more customisable because it's native Rust code. GNOME Shell is JavaScript running on an embedded copy of Mozilla's JavaScript runtime.
Just like the dotcoms wanted to disintermediate business, removing middlemen and distributors for faster sales, we could use disintermediation in our software: fewer runtimes, and better, smarter compiled languages, so we can trap more errors and get faster, safer native code.
Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem to remove whole types of API. Everything's in a container all the time, the filesystem abstracts the network and the GUI and more. Under 10% of the syscalls of Linux, the kernel is 5MB of source, and yet it has much of Kubernetes in there. This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.
From one of the people behind 37signals. They are currently doing great stuff with CSS.
We performed an empirical study to investigate whether the context of interruptions makes a difference. We found that context does not make a difference but surprisingly, people completed interrupted tasks in less time with no difference in quality. Our data suggests that people compensate for interruptions by working faster, but this comes at a price: experiencing more stress, higher frustration, time pressure and effort. Individual differences exist in the management of interruptions: personality measures of openness to experience and need for personal structure predict disruption costs of interruptions. We discuss implications for how system design can support interrupted work.
How to avoid "software rot"? The author describes three pillars:
- Foundations: the first decisions we make, that all other decisions are laid upon. These are the ground upon which all other battles are fought - and here, hours of research and thinking can save months of future engineering work. Mistakes here compound like little else in the game.
- Workflow: The environment in which the codebase has space to grow and be changed, and the digital and human I/O that enables that.
- Maintenance: The energy, time and will that must be reserved for the ongoing care, support and security of the platform.
- choose boring technology
- automate the tedious or repetitive
- good leadership is crucial
- make it easy to do the right thing
- reward clarity in code and communication
- make it easy to recover from disaster
- take external dependencies seriously, thoughtfully and defensively
- build a team that feels co-operation is a superpower
- technical debt is a strategy
Building up means starting with a tiny core and gradually adding functionality. Sanding down means starting with a very rough idea and refining it over time.
There are a few rules I try to follow when building up:
- Focus on atomic building blocks that are easily composable and testable.
- Build up powerful guarantees from simple, verifiable properties.
- Focus on correctness, not performance.
- Write the documentation along with the code to test your reasoning.
- Nail the abstractions before moving on to the next layer.
The alternative approach, which I found to work equally well, is “sanding down.” In this approach, you start with a rough prototype (or vertical slice) and refine it over time. You “sand down” the rough edges over and over again, until you are happy with the result. [...] I find that this approach works well when working on creative projects which require experimentation and quick iteration.
When using this approach, I try to follow these rules:
- Switch off your inner perfectionist.
- Don’t edit while writing the first draft.
- Code duplication is strictly allowed.
- Refactor, refactor, refactor.
- Defer testing until after the first draft is done.
- Focus on the outermost API first; nail that, then polish the internals.
Both variants can lead to correct, maintainable, and efficient systems. There is no better or worse approach. However, it helps to be familiar with both and to know when to apply which mode. Choose wisely, because switching between the two approaches is quite tricky, since you start from different ends of the problem.
In short, the preference for the Mac among many software engineers rests on:
- The power of a native Unix environment, which simplifies development and tool management.
- A stable, out-of-the-box user experience, which minimizes the hassle of updates and maintenance.
- Hardware quality and ecosystem integration that boost day-to-day productivity.
Reinventing the wheel, but differently each time. The author shares their experience.
Consider standards for this, too: they are powerful.
Some wheels I see that I think could use some new takes but which I don’t have the time/energy to do myself:
- Web browsers - probably the most significant. The browser market is essentially a monopoly right now. And Firefox is pretty much the only alternative option, somewhat of a monopoly in itself. We need to have many independent browser projects going on, not just an alternative.
- Higher education - this is probably too big a project for any one person, but I think there’s a lot of ground that needs new work and reevaluating in the world’s current higher education system.
- Task management - there are a lot of task management systems out there, but I think there’s still definitely room for more. I’m personally beginning to settle on a hybrid analog/digital task management system I’m designing myself.
The problem with that is, if everything is highlighted, nothing stands out.
Broken syntax should not be highlighted either. So Tonsky recommends:
Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.
and because developers are no longer paid by the line of code, "Comments should be highlighted, not hidden away."
Many developers prefer dark mode because it can use less vibrant colors.
I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it.
- Feature complete. Features do not need to be added.
- It is secure.
- No maintenance needed (though it can break when its environment changes).
Implications: if build tools change, we may no longer be able to modify our software; if hardware, platforms, interpreters or external APIs change, the software may stop working.
Examples of finished software:
- The Nintendo Gameboy
- Job Sheet Manager
- A multitude of embedded systems (DVD player, ...)
- Some small JS apps and libraries
How to make them?
- Understand the requirements. There must be a definition of done.
- Keep scope small and fixed
- Reduce dependencies
- Produce static output
- Increase quality assurance: don't fix bugs - avoid them
A collection of antipatterns and how to avoid them
Time management can be useful
- Organizations don't use that much data.
Of queries that scan at least 1 MB, the median query scans about 100 MB. The 99.9th percentile query scans about 300 GB.
But 99.9% of real-world queries could run on a single large node.
I did the analysis for this post using DuckDB, and it can scan the entire 11 GB Snowflake query sample on my Mac Studio in a few seconds.
When we think about new database architectures, we’re hypnotized by scaling limits. If it can’t handle petabytes, or at least terabytes, it’s not in the conversation. But most applications will never see a terabyte of data, even if they’re successful. We’re using jackhammers to drive finish nails.
As an industry, we’ve become absolutely obsessed with “scale”. Seemingly at the expense of all else, like simplicity, ease of maintenance, and reducing developer cognitive load
Years it takes to get to 10x, at a given yearly growth rate:
- 10% -> ~24y
- 50% -> ~5.7y
- 200% -> ~2.1y
Scaling is also a luxury problem in many cases: it means the business is doing well.
- Hardware is getting really, really good
In the last decade:
SSDs got ~5.6x cheaper, hold 30x more on a single drive, and got 11x faster in sequential reads and 18x in random reads.
CPU core counts went up 2.6x, the price per core went down at least 5x, and each Turin core is also probably 2x-2.5x faster.
Distributed systems are increasingly overkill as hardware keeps improving this fast.
#define MAKE_U32_FROM_TWO_U16(high, low) ( ((uint32_t)(high) << 16) | ((uint32_t)(low) & 0xFFFF) )
Piccalilli shares links!
When we look around in our field, everyone in Tech seems to focus on one thing: "How can we adopt AI in our tooling and in our processes?"
That looks like evidence of a bubble: everyone is enthusiastic, but it doesn't solve real use cases.
A more legitimate question would be: "How can we set up our engineers for long-term career success?"
Jens Meiert asks pertinent questions to break down this bigger one:
What can be done reasonably well with AI today? (And tomorrow? And the day after tomorrow?)
How are our engineers affected by AI?
- Are our engineers using AI?
- How are our engineers using AI?
- What are realistic expectations for our engineers in terms of AI use and proficiency?
- Are we setting clear expectations for use of and proficiency with AI in our job descriptions as well?
- Do we document and anchor these expectations in our competency and skill matrices?
- Are we watching the AI market, and are we evaluating tooling?
- While the AI market is in flux—which it may be for some time—, do we have enough flexibility (budget, processes, room for errors) to test AI tooling?
- If our engineers leave the company, would they find a new job—or would their profile make them less interesting?
- If they would not necessarily find a new job, what extra skills and experience do they need?
- How can we make our engineers ready for the AI age?
As you can tell, we cannot have all those answers yet—this is precisely why this is so important to get on top of, and it’s also the reason why I say “start answering.”
Now, everyone’s a prize exhibit in the FAANG zoo, because mastering this tangled mess is what opens their gates. Being just a CRUD monkey doesn’t feel fun anymore. If this is what “progress” looks like, I don’t want any part of it.
The technologies for building what we were building 10 years ago have dramatically improved!
As mentioned by LeHollandaisVolant, what the article doesn't mention is that:
- the pages are more interactive
- the data changes in real time
Hi, I’m Rob Weychert. I make art and design, obsess over film and music, hoard trivial archival data, and share it all on this here website. Enjoy your stay.
This manifesto criticizes modern software: bloated, unreliable, and worsening over time.
It favors self-reliant programming: few features and simplicity, a minimum of dependencies, writing your own tools. The benefits include learning, improved skills, simpler code, simpler tools, and easy modification and deployment.