288 private links
Critical resources:
https://raindrop.io/guillaume11/ia-tl-dr-55999307
As well as harmful examples:
https://raindrop.io/guillaume11/ia-56132031
As well as guides, popularizations, or reflections on the subject:
https://raindrop.io/guillaume11/comprendre-l-ia-56019219
Does this make it possible to write content faster? Most likely.
Of the three parties involved (investors, AI companies, and users), the only ones making profits are the financial actors, whose valuations keep rising. It's a textbook bubble situation: high capital gains and low profits.
Well, AXE is a crappy brand, then.
Klarna, among others.
Simply having a website or a digital project was enough to attract tens of millions of dollars in funding. The stock market, the NASDAQ in particular, then reached record highs. [...] But in March 2000, the tide turned. Investors realized that many of these companies were not profitable; some didn't even have a finished product.
This is a pattern that seems to be repeating with AI.
Torsten Sløk, the eminent chief economist at Apollo Global Management:
the leading companies in the S&P 500 are "more overvalued" than the big companies were at the height of the dot-com bubble in the late 1990s and early 2000s
Google is upscaling videos on delivery, so creators get their videos modified. Users are not informed and cannot consent to it.
That's why PeerTube and other alternatives matter: whenever YouTube changes some behavior, we need at least the right to leave and use something else that better aligns with our needs.
A friend who plays better chess than me — and knows more math & CS than me — said that he played some moves against a newly released LLM, and it must be at least as good as him. I said, no way, I'm going to cRRRush it, in my best Russian accent. I make a few moves — but unlike him, I don't make good moves, which would be opening book moves it has seen a million times; I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that — until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I've just tried this again, and it lost track of the board state by move 9.
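The failure in that anecdote is the absence of an explicit world model. As a minimal sketch (hypothetical code, not from the article), this is what "tracking the board state" actually requires: a stateful structure that simply cannot move a knight from a square it does not occupy.

```python
# A dict mapping occupied squares to pieces is the simplest possible
# "world model" of a chess position (white pieces only, for brevity).

def initial_board():
    """Return the starting position for White as {square: piece}."""
    board = {}
    for file in "abcdefgh":
        board[file + "2"] = "P"          # the pawn rank
    for file, piece in zip("abcdefgh", "RNBQKBNR"):
        board[file + "1"] = piece        # the back rank
    return board

def move(board, src, dst):
    """Apply a move, refusing to move a piece that is not there."""
    if src not in board:
        raise ValueError(f"no piece on {src}")
    board[dst] = board.pop(src)

board = initial_board()
move(board, "g1", "f3")      # the knight really is on g1, so this works
try:
    move(board, "b1", "c3")  # fine too
    move(board, "g1", "e2")  # g1 is now empty: a stateful model catches this
except ValueError as e:
    print(e)                 # prints "no piece on g1"
```

A next-token predictor has no such check anywhere: nothing forces its output to stay consistent with a position, which is exactly the "knight which isn't there" failure above.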
You could say the whole argument that LLMs learn about the world is that they have to understand it as a side effect of modeling the distribution of text.
LLMs are limited by text inputs: colors are numbers, etc.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Take:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases "autonomously", because they would need to have a model of the program, and they don't even learn to track chess pieces having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
Anubis seems not to be enough to protect websites against wild AI crawlers.
Chat interfaces are common. The more output the AI produces, the more there is to review and refine.
Chat can be complemented with task-oriented UIs.
The UI itself can express intent, so what the AI writes feeds back into it.
The hardest part of the UX is often the refinement; good old-fashioned UI controls can help in this case.
Presets, bookmarks, and letting users select the specific parts of the output they want to change or keep for later.
That experience reinforced what we all know deep down: your best work rarely happens in isolation.
One of the best applications of modern LLM-based AI is surfacing answers from the chaos of the internet. Its success can be partly attributed to our failure to build systems that organize information well in the first place.
Remember the Semantic Web? The web was supposed to evolve into semantically structured, linked, machine-readable data that would enable amazing opportunities. That never happened.
If the knowledge of the Internet were structured with rich semantic linking, even very primitive algorithms could parse it.
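As a toy illustration of that point (with entirely made-up data): when knowledge is published as structured, machine-readable markup such as JSON-LD, one of the Semantic Web formats that did survive, the "algorithm" needed to extract a fact is a couple of dictionary lookups, with no language model involved.

```python
import json

# A hypothetical JSON-LD snippet, the kind a page could embed for machines.
jsonld = """
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Why the Semantic Web never happened",
  "author": {"@type": "Person", "name": "Jane Doe"},
  "datePublished": "2024-05-01"
}
"""

doc = json.loads(jsonld)
# "Parsing" here is trivial: the structure carries the semantics.
print(doc["author"]["name"])    # Jane Doe
print(doc["datePublished"])     # 2024-05-01
```

Compare that with scraping the same facts out of free-form HTML prose, which is roughly the problem LLMs are now being paid to solve.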
Taking as an example: https://www.smithsonianmag.com/smart-news/google-just-released-an-ai-tool-that-helps-historians-fill-in-missing-words-in-ancient-roman-inscriptions-180987046/
The problem with presenting a new tool without any context is that it gives the impression of a giant leap in the field, driven by a company that has nothing to do with that field (Google) and that suddenly shows up with a magic solution.
We can have some fun reverse-engineering how this Google tool works. In essence, it is a big database that the AI makes capable of generating hypotheses faster.
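That "big database plus faster hypotheses" reading can be sketched in a few lines. This is pure speculation about the tool, with a made-up toy corpus: restore a damaged inscription by retrieving the closest known parallel texts with simple string similarity.

```python
import difflib

# Hypothetical corpus of known Latin inscriptions (illustrative only).
corpus = [
    "imp caesar divi f augustus pontifex maximus",
    "senatus populusque romanus imp caesari",
    "dis manibus sacrum",
]

# An inscription with a lacuna, marked "[---]" as epigraphers do.
damaged = "imp caesar divi f [---] pontifex maximus"

# Rank known inscriptions by similarity to the damaged text; the best
# parallels are the hypotheses for what the missing word could be.
hypotheses = difflib.get_close_matches(damaged, corpus, n=2, cutoff=0.3)
print(hypotheses[0])
```

A real system would retrieve over hundreds of thousands of inscriptions with a learned embedding rather than `difflib`, but the shape of the idea is the same: the database does the remembering, and the model only speeds up hypothesis generation.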
Based on my interviews, it became clear that the students’ goal was less about reducing overall effort than it was about reducing the maximum cognitive strain required to produce prose.
[...] the Brain-only group suggests that writing without assistance most likely induced greater internally driven processing…their brains likely engaged in more internal brainstorming and semantic retrieval.
There are indeed some concerns cited in the MIT paper: reduced ability to retain and recall information; bypassing the process of synthesizing information from memory; promoting a form of metacognitive laziness; and avoiding intellectual effort.
employees seeking a promotion must now describe how they have used generative AI or other AI technologies to improve customer experience or increase operational efficiency
I wrote the book to teach how to use AI to collaborate, not automate: that's a race to the bottom.
Finally, Johannes Link explains that the question is not so much whether general AI will arrive or not, but rather how altruistic the tech billionaires are and how much we can trust them. For my part, I don't think we will get a complete, uncensored answer, since it is almost certain it would not be in the billionaires' favor.