This is huge:
"Les chercheurs ont découvert que la confiance dans l'IA diminue à mesure que les gens acquièrent des connaissances dans ce domaine."
"L'euphorie de l'IA se heurte à la réalité économique : personne ne paie la facture. Des rapports récents font état de flux de trésorerie négatifs, procès à répétition et absence de modèle économique viable. La promesse d'un retour sur investissement est encore théorique."
Source: https://journals.sagepub.com/doi/10.1177/00222429251314491
Or another article from Marianne: https://www.marianne.net/politique/julie-martinez-porte-parole-du-ps-qui-travaille-chez-la-concurrence-au-detriment-de-la-france
About Copilot being pushed on GitHub with no way to opt out.
Some critical resources:
https://raindrop.io/guillaume11/ia-tl-dr-55999307
As well as some harmful examples:
https://raindrop.io/guillaume11/ia-56132031
As well as guides, explainers, and reflections on the subject:
https://raindrop.io/guillaume11/comprendre-l-ia-56019219
Does it let you write content faster? Most likely.
Of the three actors involved, the investors, the AI companies, and the users, the only ones making a profit are the financial players, whose valuations keep rising. It's a typical bubble situation: high capital gains and weak profits.
Well, AXE is a shitty brand, then.
Klarna, among others.
Simply having a website or a digital project was enough to attract tens of millions of dollars in funding. The stock market, especially the NASDAQ, hit record highs. [...] But in March 2000, the tide turned. Investors realized that many of these companies were not profitable, and some didn't even have a finished product.
It's a pattern that seems to be repeating itself with AI.
Torsten Sløk, the eminent chief economist at Apollo Global Management:
the leading companies in the S&P 500 are "more overvalued" than the big companies were at the height of the dot-com bubble of the late 1990s and early 2000s
Google is upscaling videos on delivery, so creators get their videos modified. Viewers are not informed and cannot consent to it.
That's why PeerTube and other alternatives are important: whenever YouTube changes some behavior, we need the right to at least leave and use something else that better aligns with our needs.
A friend who plays better chess than me, and knows more math and CS than me, said that he played some moves against a newly released LLM, and it must be at least as good as him. I said, no way, I'm going to cRRRush it, in my best Russian accent. I make a few moves, but unlike him, I don't make good moves (opening book moves it has seen a million times); I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that, until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I've just tried this again, and it lost track of the board state by move 9.
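A minimal sketch of reproducing that check at home, assuming the python-chess library; the game below is made up, and in practice the moves list would be copied out of the LLM's chat transcript:

```python
# Replay an LLM's moves on a real board and report the first illegal one.
# Requires: pip install chess (the python-chess library).
import chess

def first_illegal_move(moves: list[str]) -> int | None:
    """Return the 1-based half-move index of the first illegal move, or None."""
    board = chess.Board()
    for i, san in enumerate(moves, start=1):
        try:
            board.push_san(san)  # raises a ValueError subclass on illegal SAN
        except ValueError:
            return i
    return None

# Made-up example: a normal Ruy Lopez until White's fifth move, "Nc4",
# a square no white knight can actually reach -- the board state was lost.
game = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4", "Nf6", "Nc4"]
print(first_illegal_move(game))  # -> 9
```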
We could say that the whole argument for LLMs learning about the world is that they have to understand it as a side effect of modeling the distribution of text.
LLMs are limited by text inputs: colors are just numbers to them, and so on.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Takes:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases “autonomously”, because they would need to have a model of the program, and they don’t even learn to track chess pieces having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
Anubis seems not to be enough to protect websites against wild AI crawlers.
Chats are common. The more output the AI produces, the more
The chat can be complemented with task-oriented UIs.
The UI itself can express intent, so the prompt to the AI writes itself.
The hardest part of the UX is often the refinement; good old-fashioned UI controls can help in this case.
Presets, bookmarks, and letting users select the specific parts of the output they want to change or keep for later (see the sketch below).
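A rough sketch of that idea, in Python; all the names (RefineRequest, build_prompt) and the prompt format are my own assumptions, not anything from the article:

```python
# "The UI itself expresses intent": structured controls are compiled into
# the prompt, so the refinement loop needs no free-form typing.
from dataclasses import dataclass

@dataclass
class RefineRequest:
    preset: str       # e.g. a bookmarked style the user saved earlier
    selection: str    # the part of the previous output the user picked
    instruction: str  # what the chosen control means, e.g. "shorten"

def build_prompt(req: RefineRequest) -> str:
    """Compile the UI state into a prompt string."""
    return (
        f"Style preset: {req.preset}\n"
        f"Rewrite only this selected passage: {req.selection!r}\n"
        f"Requested change: {req.instruction}"
    )

print(build_prompt(RefineRequest("casual blog", "In conclusion, ...", "shorten")))
```

The design choice is that presets and selections are first-class UI state, and the prompt is derived from them rather than typed by the user.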
That experience reinforced what we all know deep down: your best work rarely happens in isolation.