Daily Shaarli
August 17, 2025
How to do X in the browser dev tools.
I used to be on a team that was responsible for the care and feeding of a great many Linux boxes which together constituted the "web tier" for a giant social network.
At some point, I realized that if I wrote a wiki page and documented the things that we were willing to support, I could wait about six months and then it would be like it had always been there. Enough people went through the revolving doors of that place such that six months' worth of employee turnover was sufficient to make it look like a whole other company. All I had to do was write it, wait a bit, then start citing it when needed.
One near-quote from that page did escape into the outside world. It has to do with the "non-compliant host" actions: "Note: any of these may happen without prior notification to experiment owners in the interest of keeping the site healthy. Drain first, investigate second."
So the author created a list of actions: methods to apply for any given event.
A directory of the web
A friend who plays better chess than me — and knows more math & CS than me — said that he played some moves against a newly released LLM, and it must be at least as good as him. I said, no way, I’m going to cRRRush it, in my best Russian accent. I make a few moves — but unlike him, I don't make good moves, which would be opening book moves it has seen a million times; I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that — until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I’ve just tried this again, and it lost track of the board state by move 9.
We could say that the whole argument for LLMs learning about the world is that they have to understand the world as a side effect of modeling the distribution of text.
LLMs are limited by text inputs: colors are numbers, etc.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Take:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases “autonomously”, because they would need to have a model of the program, and they don’t even learn to track chess pieces having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
Key arguments:
- blogs are healthier than social networks
- today's technology makes it very easy to create a blog, with something for every taste
- writing can be therapy; the blog is then its tool
- everyone controls their own writing time
- following other blogs is easy with RSS feeds
TL;DR: JS solutions are often better for accessibility. At least the information is conveyed.
Popover will be more useful than ever.
The tradeoff is currently the <details> tag, which has two limitations: the element does not announce that a navigation menu is exposed, and clicking outside or pressing Esc does nothing.
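As a rough sketch of working around the second limitation, a <details>-based menu can be patched with a few lines of JavaScript so it closes on Esc or an outside click (the id and the menu markup here are illustrative, not from the linked article):

```html
<!-- A <details>-based navigation menu. On its own it stays open
     when Esc is pressed or when the user clicks elsewhere;
     the script below patches that gap. -->
<details id="site-nav">
  <summary>Menu</summary>
  <nav>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/posts">Posts</a></li>
    </ul>
  </nav>
</details>

<script>
  const nav = document.getElementById("site-nav");

  // Close the menu when Esc is pressed anywhere on the page.
  document.addEventListener("keydown", (e) => {
    if (e.key === "Escape") nav.removeAttribute("open");
  });

  // Close the menu when the click lands outside the <details>.
  document.addEventListener("click", (e) => {
    if (!nav.contains(e.target)) nav.removeAttribute("open");
  });
</script>
```

This does not fix the first limitation — assistive technology still hears a generic disclosure widget, not a navigation menu — which is part of why a scripted solution, or the Popover API's built-in light dismiss, can be the better fit.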
Rust currently shines in foundational software: software used to build other software. Other areas are great places to experiment with Rust.