After the AI bubble bursts, RAM prices should therefore drop drastically.
But in the meantime, "all manufacturers of electronic hardware embedding DRAM modules, such as graphics cards, phones, televisions, cars…" are affected and have no idea what RAM will cost in three months.
There is no prospect of a price drop in the short term, "at least not before 2028."
AI companies are losing money fast and are going to go under. One of the most obvious ways to compensate for this is through advertising.
Except that you won't be able to detect this advertising, since it will be mixed in with the content.
What is the best VPN?
How do you treat a skin problem?
You won't be able to tell if the answer has been biased. You won't be able to tell if the AI is really giving the best “advice” or if it's advertising a brand of skin cream or a molecule from a large laboratory.
Extend this to economics and politics, and, as with online ad auctions, it's the highest bidder who will be able to influence you. And all these AI companies are desperately in need of money.
Disney characters will be allowed on the Sora app.
Unfortunately, it’s 2025, AI is spreading like glitter in a kindergarten, and it’s really easy to mistake hard human labor for soulless, uninspired machine slop.
In the title: emojis, Unicode formatting, how-tos for boring stuff that is already covered elsewhere, clickbait-y titles.
In the preview: an AI-generated header image.
The article is oddly specific but unspecific:
- no personal tone
- ASCII-art diagrams when Excalidraw could do the job
- deep-dive content that's only a few paragraphs long
- "We rewrote it in X lines" posts
- bullet-point paragraphs, em-dashes, emojis, short section headings
The author profile has too many publications (in one week). Do their articles match their position on LinkedIn, or is the profile private, on the contrary?
At least there was a cost to writing poor quality content before. Even the laziest plagiariser had to manually find the content to nick and copy-paste it into their own blog that they’d taken the time to set up. Now, all it needs is a muppet with a Medium account and an LLM. God forbid they hook it up to an agent and automate the process. Except, they probably do, given the scale of the shit that’s being pumped out.
The trend is going to be to provide
Back in 2017, I already had the idea of offering machine-learning models on a marketplace, à la HuggingFace.
This will surely happen for TRMs (cf. https://shaarli.lyokolux.space/shaare/to_oAQ).
TRM stands for Tiny Recursive Models.
By updating its answer and reasoning recursively about that answer, the model can remove its own errors [on a single token, and avoid their propagation].
By comparison, "a TRM has 10,000 times fewer parameters than a classic LLM and is 1,000 times faster."
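The recursive-refinement idea can be sketched as a loop that keeps a draft answer and keeps revising it. This toy Python example is my own illustration (the `refine`/`better` names and the Newton update rule are not part of the TRM architecture); each pass removes part of the remaining error instead of letting it propagate:

```python
def refine(answer, update, steps=16):
    """Recursively refine a draft answer by re-applying a small update rule."""
    for _ in range(steps):
        answer = update(answer)  # each pass shrinks the remaining error
    return answer

# Toy update rule: one Newton step toward sqrt(2)
better = lambda x: 0.5 * (x + 2.0 / x)
print(refine(1.0, better))  # converges to 1.41421356...
```

The point is only the shape of the computation: a small update applied many times, rather than one giant forward pass.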
The extension flags AI-generated sites, as well as "domain names that are visually close to another well-known domain name (the visual closeness comes from the fact that many writing systems use similar-looking characters)."
Domain-Specific Languages are small languages designed to focus on a specific aspect of a software system. We deal with DSLs every day: SQL can be considered a DSL, LaTeX is a DSL, AWK is a DSL, Kubernetes’ YAMLs are a DSL.
The Token-Oriented Object Notation is optimized to have fewer tokens to parse for LLMs.
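To make the token-saving idea concrete, here is a simplified sketch in Python (my own example, not the full TOON specification): instead of repeating JSON keys for every record, the field names are declared once and each record becomes a bare CSV-style row.

```python
def toonish(key, rows):
    """Encode a list of uniform dicts in a TOON-like tabular form
    (simplified sketch, not the full TOON spec)."""
    fields = list(rows[0])
    # Header declares the count and field names once: users[2]{id,name}:
    lines = [f"{key}[{len(rows)}]{{{','.join(fields)}}}:"]
    # Each record is then just the values, no repeated keys or braces
    lines += ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join(lines)

users = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
print(toonish("users", users))
# users[2]{id,name}:
#   1,Alice
#   2,Bob
```

The equivalent JSON repeats `"id"` and `"name"` for every record, which is exactly the redundancy a tokenizer ends up paying for.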
A domain-specific language is by definition smaller in scope than a general-purpose language, so it should be easier to design and implement; moreover, if the language is designed well, it should lead to a more efficient usage of the context window.
If we can abstract away parts of our domain into a higher-level language, we can effectively use the LLM to
- generate the implementation of a DSL
- generate documentation and examples for our DSL
- point the LLM to docs and examples and prompt it to generate more code using our DSL
So, instead of trying to come up with a general-purpose language for LLMs, we define a tiny DSL for each specific subsystem we mean to realize.
Examples
- Piano
- Business Rule
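A business-rule DSL of the kind listed above can be tiny. This is a hypothetical sketch of my own (the `IF … THEN …` syntax and the `evaluate` interpreter are not from the article): a few lines of external DSL, plus a minimal Python interpreter.

```python
import operator

# Hypothetical external DSL: one rule per line,
# "IF <field> <op> <value> THEN <label>"
RULES = """
IF total > 100 THEN free_shipping
IF country == FR THEN vat_20
"""

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def evaluate(rules, record):
    """Return the labels of all rules that match the record."""
    labels = []
    for line in rules.strip().splitlines():
        _, field, op, value, _, label = line.split()
        left, right = record[field], value
        if isinstance(left, (int, float)):
            right = float(right)  # compare numbers numerically
        if OPS[op](left, right):
            labels.append(label)
    return labels

print(evaluate(RULES, {"total": 150, "country": "FR"}))
# ['free_shipping', 'vat_20']
```

The rule text is small enough to fit in a prompt, which is the context-window argument above: an LLM can generate or extend `RULES` far more cheaply than equivalent general-purpose code.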
About maintenance: the author claims it can be automated with LLMs.
The cost of defining an external DSL (its own language, with its own syntax and parser) has traditionally been higher than that of an internal DSL (embedded in a general-purpose programming language), but with LLMs this too is no longer a problem.
In recent years, there has been something of a “winter” in DSL design and development due to the high maintenance costs and the tooling expectations from end users. This blog post explored the syntactic dimension of “token-efficiency” in DSL design: I invite you to explore more of this space, including semantics; I, for one, will welcome more crazy DSL implementations!
On the use of AI in the codebase, the KeePassXC rules are clear:
- As an additional pair of “eyes” in code reviews.
- For creating pull requests that solve simple and focused issues, add boilerplate code and test cases.
AI produces mixed or disappointing results, and it does away with solitude, which is nevertheless necessary for intellectual development.