One of the best applications of modern LLM-based AI is surfacing answers from the chaos of the internet. Its success can be partly attributed to our failure to build systems that organize information well in the first place.
Remember the Semantic Web? The web was supposed to evolve into semantically structured, linked, machine-readable data that would enable amazing opportunities. That never happened.
If the knowledge of the Internet had been structured with rich semantic linking, even very primitive algorithms could have parsed it.
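To make the point concrete, here is a minimal sketch of what "primitive algorithms parsing structured data" could look like. The JSON-LD fragment and field values are invented for illustration; only the schema.org vocabulary itself is real:

```python
import json

# A hypothetical JSON-LD fragment of the kind the Semantic Web promised:
# machine-readable facts embedded directly in a page.
# (The article, author, and date below are invented for the example.)
page_data = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Deciphering Roman Inscriptions",
  "author": {"@type": "Person", "name": "A. Historian"},
  "datePublished": "2025-07-01"
}
""")

def who_wrote(doc):
    # A "very primitive algorithm": plain key lookups, no NLP, no LLM.
    return doc.get("author", {}).get("name")

print(who_wrote(page_data))  # A. Historian
```

With data like this everywhere, answering "who wrote this?" would be a dictionary lookup rather than a language-model problem.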
Take, for example, https://www.smithsonianmag.com/smart-news/google-just-released-an-ai-tool-that-helps-historians-fill-in-missing-words-in-ancient-roman-inscriptions-180987046/
The problem with presenting a new tool without any context is that it gives the impression of a giant leap in the field, driven by a company that has nothing to do with that field (Google) and that suddenly shows up with a magic solution.
We can have some fun reverse-engineering how this Google tool works. In essence, it is mostly a big database, which AI makes capable of producing hypotheses faster.
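That "big database emitting hypotheses" idea can be sketched very crudely. Everything below, including the toy corpus and the matching strategy, is a guess for illustration, not a claim about how Google's tool actually works:

```python
import re

# Toy corpus standing in for a database of known inscriptions
# (these example strings are invented).
corpus = [
    "imp caesar divi f augustus pontifex maximus",
    "senatus populusque romanus imp caesari",
    "dis manibus sacrum",
]

def hypotheses(fragment):
    """Turn a damaged fragment ('?' marks a lost character) into a
    regex and return every corpus line it could plausibly match."""
    pattern = "".join("." if c == "?" else re.escape(c) for c in fragment)
    return [line for line in corpus if re.search(pattern, line)]

print(hypotheses("di?i f aug"))
# ['imp caesar divi f augustus pontifex maximus']
```

The value of the real tool is presumably in ranking such candidates well and at scale, but the underlying shape, a lookup against a large corpus, is old technology.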
Based on my interviews, it became clear that the students’ goal was less about reducing overall effort than it was about reducing the maximum cognitive strain required to produce prose.
[...] the Brain-only group suggests that writing without assistance most likely induced greater internally driven processing…their brains likely engaged in more internal brainstorming and semantic retrieval.
The MIT paper does cite several concerns: reduced ability to retain and recall information, bypassing the process of synthesizing information from memory, and promoting a form of metacognitive laziness that avoids intellectual effort.
employees seeking a promotion must now describe how they have used generative AI or other AI technologies to improve customer experience or increase operational efficiency
I wrote the book to teach how to use AI to collaborate, not automate--that’s a race to the bottom.
Johannes Link explains that the question is not so much whether general AI will arrive, but rather how altruistic the Tech billionaires are and how much trust we can place in them. For my part, I don't think we will get a full, uncensored answer, since it is almost certain it would not be in the billionaires' favor.
We ended up deciding: what the heck, we might as well meet the market demand. So we put together a bespoke ASCII tab importer (which was near the bottom of my “Software I expected to write in 2025” list). And we changed the UI copy in our scanning system to tell people about that feature.
I am not sure it is real market demand; it may only be a ChatGPT hallucination.
the many ways AI is making humans less productive #1:
- I have to prove I am human when visiting websites
- websites are generally slower because they need to check whether my browser is an AI bot
- I now spend time every week restarting services affected by AI scraping bots
And I am not even using AI.
An interesting project: let the AI squash and reword the commits
These fake videos can still be spotted by a few details, but they are starting to feel real.
This does not bode well for future unreasonable investments.
As with any investment bubble, only proven applications will keep getting funded.
Books were once criticized much as AI is now.
Saying no to AI is a luxury too. You have to be able to afford it.
AI is not good right now: https://github.com/dotnet/runtime/pull/115733
Thus, the general expectation is that AI implies, at the very least, software that consistently and reliably outperforms a human expert at any task in any given field it claims to be proficient in.