They wouldn't make Google the default search engine if not for the $20 billion per year.
Their marketing values privacy, but this partnership is a stain on their commitment to privacy.
Take: If Apple really cared about privacy, not only should they choose a different search engine, they should also block ads and trackers in Safari by default.
But they don't, even though they could do it tomorrow.
As a nuclear engineer, I have never been asked to show a portfolio of reactor designs I maintain in my free time. I have never been asked to derive the six-factor formula, the quantization of angular momentum, or Bremsstrahlung, nor to whiteboard gas centrifuge isotopic separation, water hammer, hydrogen detonation, or cross-section resonance integrals.
There's something deeply wrong with an industry that presumes you're a fraud unless you repeatedly and performatively demonstrate otherwise, and that treats the hiring process as a demented form of 80s-era fraternity hazing.
Thoughts on https://blog.koalie.net/2025/08/30/tech-mistrust-or-fatigue/
Yes.
I am ready for the revival of directories of websites curated by people for people, and found through serendipity. How much worse will it get? I am both curious and very afraid. But also angry. And powerless.
So I’m frustrated.
Also on company sizes being incompatible with ethics.
Of the three actors involved (investors, AI companies, and users), the only ones making a profit are the financial actors, whose valuations keep rising. It's a typical bubble situation: high capital gains and low profits.
By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can't trust the opinions of the others, because of the Blub paradox: they're satisfied with whatever language they happen to use, because it dictates the way they think about programs.
The source code of the Viaweb editor was probably about 20-25% macros.
Computer hardware changes so much faster than personal habits that programming practice is usually ten to twenty years behind the processor. At places like MIT they were writing programs in high-level languages in the early 1960s, but many companies continued to write code in machine language well into the 1980s.
A friend who plays better chess than me, and knows more math and CS than me, said that he played some moves against a newly released LLM, and that it must be at least as good as him. I said, no way, I'm going to cRRRush it, in my best Russian accent. I make a few moves, but unlike him, I don't make good moves, which would be opening-book moves it has seen a million times; I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that, until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I've just tried this again, and it lost track of the board state by move 9.
We could say that the whole argument that LLMs learn about the world is that they must understand the world as a side effect of modeling the distribution of text.
LLMs are limited by text inputs: colors are numbers, etc.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Take:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases "autonomously", because they would need to have a model of the program, and they don't even learn to track chess pieces despite having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
TL;DR: JS solutions are often better for accessibility. At least the information is conveyed.
The Popover API will be more useful than ever.
The current tradeoff is the `<details>` element, with two limitations: it does not announce that a navigation menu is exposed, and clicking outside or pressing Esc does nothing.
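A minimal sketch of patching those two gaps with a few lines of script (the `nav-menu` id and the link targets are placeholders, not from any particular site):

```html
<details id="nav-menu">
  <summary>Menu</summary>
  <nav>
    <a href="/">Home</a>
    <a href="/archive">Archive</a>
  </nav>
</details>

<script>
  const menu = document.getElementById('nav-menu');

  // Close the menu when Esc is pressed.
  document.addEventListener('keydown', (e) => {
    if (e.key === 'Escape') menu.removeAttribute('open');
  });

  // Close the menu when clicking anywhere outside of it.
  document.addEventListener('click', (e) => {
    if (!menu.contains(e.target)) menu.removeAttribute('open');
  });
</script>
```

This fixes the Esc and click-outside behavior; the announcement gap still needs ARIA work on top.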
The links and conversations circulating in these chats amount to an ongoing, personalized curation — a feed shaped not by tech companies but by people I trust.
"Trust is peer-to-peer, not platform-based"
"Contrary to the popular narrative, media literacy isn’t dead — it just looks different. Concepts like “source layering” or “context collapse” aren’t theoretical to us, they play out in real time. "
The author alternates between the Mastodon instance Fosstodon and micro.blog.
A modal is a small view layered over the window; while it is open, the rest of the content is inert.
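The native `<dialog>` element gives this behavior for free: `showModal()` renders everything outside the dialog inert and handles Esc. A minimal sketch (element ids are invented for the example):

```html
<dialog id="confirm">
  <p>Delete this item?</p>
  <button id="cancel">Cancel</button>
</dialog>
<button id="open">Delete…</button>

<script>
  const dialog = document.getElementById('confirm');

  // showModal() (unlike show()) makes the rest of the page inert,
  // traps focus inside the dialog, and lets Esc close it.
  document.getElementById('open')
    .addEventListener('click', () => dialog.showModal());
  document.getElementById('cancel')
    .addEventListener('click', () => dialog.close());
</script>
```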
There is no substitution
Every reduction is worth taking. Every increase is worth hunting down. One does not replace the other.
But also, if we focus on too many useless or insignificant things, we will end up with nothing.
We can compare resource-consumption reduction measures by their orders of magnitude.
The main levers remain: food; reducing the footprint of transport (motorized and aviation); disposable consumption and rapid replacement of goods; heating and air conditioning.
"presence" is optional online, unless we actively act for it.
The digital world is the opposite. Space and time are optional. There, we cannot be perceived unless we send information. Only when we send data, like messages, photos, or our webcam video feed, to the internet can others perceive traces of us.
One of the best applications of modern LLM-based AI is surfacing answers from the chaos of the internet. Its success can be partly attributed to our failure to build systems that organize information well in the first place.
Remember the Semantic Web? The web was supposed to evolve into semantically structured, linked, machine-readable data that would enable amazing opportunities. That never happened.
Had the knowledge of the Internet been structured with rich semantic linking, even very primitive algorithms could have parsed it.
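A small surviving fragment of that vision is structured data such as JSON-LD: a machine-readable description embedded in a page that even a trivial parser can consume (the values below are invented placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Remember the Semantic Web?",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-01-15"
}
</script>
```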
It is each person's share of the wealth that must be preserved, not the work.
The problem is not that automation takes away work, nor that we lack wealth. The problem is that the automation of work changes the distribution of wealth (toward greater concentration).
Today's society tends to stay as it is instead of adapting to this automation of tasks, which frees up labor.
As you copy the code, the right abstraction reveals itself.
If you start too early, you might end up with a bad abstraction that doesn’t fit the problem. You know it’s wrong because it feels clunky. Some typical symptoms include:
Removing a wrong abstraction is hard work.
Clean up afterwards :)
I totally agree.
- When there is more than one text directionality
- When the respective expression would be shorter than the non-logical equivalent.
The second point is healthy for every project.
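Both points can be illustrated with CSS logical properties (the `.card` selector is hypothetical): the logical shorthand follows the text direction automatically and is shorter than the physical pair.

```css
/* Physical: breaks under right-to-left layouts, and needs two lines. */
.card {
  margin-left: auto;
  margin-right: 1rem;
}

/* Logical: one declaration (inline-start, then inline-end),
   correct in both ltr and rtl writing modes. */
.card {
  margin-inline: auto 1rem;
}
```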