When we look around in our field, everyone in Tech seems to focus on one thing: "How can we adopt AI in our tooling and in our processes?"
That reads like proof of a bubble: everyone is enthusiastic, but it doesn't solve real use cases.
A more legitimate question would be: "How can we set up our engineers for long-term career success?"
Jens Meiert asks pertinent questions to help tackle this bigger one:
- What can be done reasonably well with AI today? (And tomorrow? And the day after tomorrow?)
- How are our engineers affected by AI?
- Are our engineers using AI?
- How are our engineers using AI?
- What are realistic expectations for our engineers in terms of AI use and proficiency?
- Are we setting clear expectations for use of and proficiency with AI in our job descriptions as well?
- Do we document and anchor these expectations in our competency and skill matrixes?
- Are we watching the AI market, and are we evaluating tooling?
- While the AI market is in flux—which it may be for some time—, do we have enough flexibility (budget, processes, room for errors) to test AI tooling?
- If our engineers leave the company, would they find a new job—or would their profile make them less interesting?
- If they would not necessarily find a new job, what extra skills and experience do they need?
- How can we make our engineers ready for the AI age?
As you can tell, we cannot have all those answers yet—this is precisely why this is so important to get on top of, and it’s also the reason why I say “start answering.”
Now, everyone’s a prize exhibit in the FAANG zoo, because mastering this tangled mess is what opens their gates. Being just a CRUD monkey doesn’t feel fun anymore. If this is what “progress” looks like, I don’t want any part of it.
The technologies for building what we built 10 years ago have dramatically improved!
As mentioned by LeHollandaisVolant, two things the article doesn't mention:
1. the pages are more interactive
2. the data changes in real time
If, given the prompt, the AI does the job well on the first or second iteration, fine. Otherwise, stop refining the prompt. Go write some code, then come back to the AI. You'll get much better results.
Ohhh modern tech-stack, ohh shiny object :D
Make websites because you like to.
I totally agree: use HTML as much as possible, then CSS, then JS to enhance it, in that order.
The API can respond with HTML fragments anyway, for example a ready-made HTML table; a sketch of the idea follows.
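A minimal sketch of that pattern, assuming a hypothetical `/api/orders/table` endpoint that returns a server-rendered `<table>` fragment and a `#orders` container in the page (both names are made up for illustration):

```typescript
// Minimal sketch of the "API returns HTML fragments" idea.
// The endpoint (/api/orders/table) and the #orders container are
// hypothetical; the point is that the server renders the <table>
// and the client only swaps it into the page.
async function refreshOrdersTable(): Promise<void> {
  const container = document.querySelector<HTMLElement>("#orders");
  if (!container) return; // no hook in the page, nothing to enhance

  const response = await fetch("/api/orders/table", {
    headers: { Accept: "text/html" },
  });
  if (!response.ok) return; // keep the server-rendered table on failure

  // The response body is already an HTML <table> fragment.
  container.innerHTML = await response.text();
}

// Enhance only when JS is available; the page still works without it.
document.addEventListener("DOMContentLoaded", () => {
  void refreshOrdersTable();
});
```

The server keeps owning the markup, and if the script never runs, the statically rendered table simply stays in place.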
They wouldn't make Google Search the default if not for the 20 billion dollars per year.
Their marketing values privacy, but this partnership undermines their commitment to privacy.
Take: If Apple really cared about privacy, not only should they choose a different search engine, they should also block ads and trackers in Safari by default.
But they don't, even though they could do it tomorrow.
As a nuclear engineer, I have never been asked to show my portfolio of reactor designs I maintain in my free time, I have never been asked to derive the six-factor formula, the quantization of angular momentum, Bremsstrahlung, or to whiteboard gas centrifuge isotopic separation, water hammer, hydrogen detonation, or cross-section resonance integrals.
There's something deeply wrong with an industry that presumes you're a fraud unless repeatedly and performatively demonstrated otherwise and treats the hiring process as a demented form of 80s-era fraternity hazing.
Thoughts on https://blog.koalie.net/2025/08/30/tech-mistrust-or-fatigue/
Yes.
I am ready for the revival of directories of websites curated by people for people, and found through serendipity. How much worse will it get? I am both curious and very afraid. But also angry. And powerless.
So I’m frustrated.
Also on how the size of these companies is incompatible with ethics.
Of the three actors involved (investors, AI companies, and users), the only ones making profits are the financial players, whose valuations keep rising. It's a typical bubble situation: high capital gains and low profits.
By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can't trust the opinions of the others, because of the Blub paradox: they're satisfied with whatever language they happen to use, because it dictates the way they think about programs.
The source code of the Viaweb editor was probably about 20-25% macros.
Computer hardware changes so much faster than personal habits that programming practice is usually ten to twenty years behind the processor. At places like MIT they were writing programs in high-level languages in the early 1960s, but many companies continued to write code in machine language well into the 1980s.
A friend who plays better chess than me — and knows more math & CS than me - said that he played some moves against a newly released LLM, and it must be at least as good as him. I said, no way, I'm going to cRRRush it, in my best Russian accent. I make a few moves – but unlike him, I don't make good moves, which would be opening book moves it has seen a million times; I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that — until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I've just tried this again, and it lost track of the board state by move 9.
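A tiny sketch of the bookkeeping the model fails at: track the position yourself and flag any claimed move whose origin square doesn't hold the expected piece. The from/to coordinate notation and piece codes below are my own simplification for illustration, not a real chess engine.

```typescript
// Naive board-state tracker: enough to catch "moving a knight that isn't there".
// Moves are plain from/to coordinates (e.g. "g1" -> "f3"), not SAN, and the
// piece codes ("wN" = white knight) are an ad-hoc convention for this sketch.
type Square = string;
type Piece = string;

const files = ["a", "b", "c", "d", "e", "f", "g", "h"];
const backRank = ["R", "N", "B", "Q", "K", "B", "N", "R"];
const board = new Map<Square, Piece>();

// Set up the standard starting position.
files.forEach((file, i) => {
  board.set(`${file}1`, `w${backRank[i]}`);
  board.set(`${file}2`, "wP");
  board.set(`${file}7`, "bP");
  board.set(`${file}8`, `b${backRank[i]}`);
});

// Apply a claimed move, rejecting it if the expected piece is not on `from`.
function applyMove(from: Square, to: Square, expected: Piece): boolean {
  const piece = board.get(from);
  if (piece !== expected) {
    console.log(`Illegal: no ${expected} on ${from} (found ${piece ?? "nothing"})`);
    return false;
  }
  board.delete(from);
  board.set(to, piece);
  return true;
}

applyMove("g1", "f3", "wN"); // fine: the knight really is on g1
applyMove("g1", "e2", "wN"); // caught: that knight has already left g1
```

A real harness would use a proper chess library for full legality checks, but even this naive tracker catches the "knight which isn't there" class of error.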
We could say that the whole argument that LLMs learn about the world is that they have to understand the world as a side effect of modeling the distribution of text.
LLMs are limited by text inputs: colors become numbers, and so on.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Take:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases “autonomously”, because they would need to have a model of the program, and they don’t even learn to track chess pieces having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
TL;DR: JS solutions are often better for accessibility; at least the information is conveyed.
Popover will be more useful than ever.
The current tradeoff is the <details> element, which has two limitations: it does not announce that a navigation menu is exposed, and clicking outside or pressing Esc does nothing.
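The second limitation can be patched with a few lines of script. A small sketch, assuming a menu marked up as `<details id="site-nav"><summary>Menu</summary>...</details>` (the id and markup are assumptions for this example); the missing "this is a navigation menu" announcement is a markup/ARIA question that a script like this does not solve.

```typescript
// Progressive enhancement for a <details>-based navigation menu:
// close it when the user clicks outside or presses Esc.
// The "site-nav" id is an assumption made for this sketch.
const nav = document.querySelector<HTMLDetailsElement>("details#site-nav");

if (nav) {
  // Close when a click lands outside the menu.
  document.addEventListener("click", (event) => {
    if (nav.open && !nav.contains(event.target as Node)) {
      nav.open = false;
    }
  });

  // Close on Escape and hand focus back to the <summary> toggle.
  document.addEventListener("keydown", (event) => {
    if (event.key === "Escape" && nav.open) {
      nav.open = false;
      nav.querySelector<HTMLElement>("summary")?.focus();
    }
  });
}
```

The Popover API gives you that light-dismiss behavior (outside click and Esc) for free, which is part of why it will be more useful than ever.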