Idea -> Programming -> Feedback. Repeat.
Half of those [students] who started from scratch had working designs.
The moral of both stories is that there is worth in what seem to be boring or painful experiences. Effort has intrinsic value.
That value isn't replaceable. AI is a great tool only if it's used correctly:
- treat generated code as ephemera rather than the finished product
- review every line, increasing our skillset at the same time
- solve new problems and build new instincts
As written in another post:
- Never let an LLM speak for you
- Never let an LLM think for you
I also discovered the blog post via ThePrimeagen.
A follow-up post was published on Turso's blog: https://turso.tech/blog/working-on-databases-from-prison
Because it needed 7 weeks of "work"
Mastodon, Bluesky, Twitter, Instagram, Facebook
Cooking up a search engine in one weekend (by an experienced developer). It's basic but does the job for 1,000 documents.
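Not code from the article, just a rough sketch of what a weekend-sized engine can look like: an in-memory inverted index with TF-IDF ranking, assuming documents are plain strings that fit in memory.

```python
import math
import re
from collections import Counter, defaultdict

class TinySearch:
    """Minimal in-memory inverted index with TF-IDF ranking."""

    def __init__(self):
        self.index = defaultdict(dict)  # term -> {doc_id: term frequency}
        self.doc_count = 0

    def _tokenize(self, text):
        return re.findall(r"[a-z0-9]+", text.lower())

    def add(self, doc_id, text):
        self.doc_count += 1
        for term, freq in Counter(self._tokenize(text)).items():
            self.index[term][doc_id] = freq

    def search(self, query, k=10):
        scores = Counter()
        for term in self._tokenize(query):
            postings = self.index.get(term, {})
            if not postings:
                continue
            idf = math.log(self.doc_count / len(postings))
            for doc_id, freq in postings.items():
                scores[doc_id] += freq * idf  # TF * IDF, summed over query terms
        return scores.most_common(k)

# Usage
engine = TinySearch()
engine.add("a", "the quick brown fox")
engine.add("b", "the lazy dog sleeps")
print(engine.search("quick fox"))  # doc "a" ranks first
```

At 1,000 documents nothing fancier is needed; a Counter per query term is plenty fast.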
Instead of trying to ascertain the truth, editors assessed the credibility of sources, looking to signals like whether a publication had a fact-checking department, got cited by other reputable sources, and issued corrections when it got things wrong.
Wikipedia’s dispute resolution system does not actually resolve disputes. In fact, it seems to facilitate them continuing forever.
Wikipedia is a mirror of the world’s biases, not the source of them. We can’t write articles about what you don’t cover.
As volunteers, editors work on topics they think are important, and the encyclopedia’s emphases and omissions reflect their demographics.
Crucially, if you think something is wrong on Wikipedia, you can fix it yourself, though it will require making a case based on verifiability rather than ideological “balance.”
That is, Wikipedia’s first and best line of defense is to explain how Wikipedia works.
A friend who plays better chess than me — and knows more math & CS than me — said that he played some moves against a newly released LLM, and it must be at least as good as him. I said, no way, I’m going to cRRRush it, in my best Russian accent. I make a few moves – but unlike him, I don't make good moves, which would be opening book moves it has seen a million times; I make weak moves, which it hasn't. The thing makes decent moves in response, with cheerful commentary about how we're attacking this and developing that — until about move 10, when it tries to move a knight which isn't there, and loses in a few more moves. This was a year or two ago; I’ve just tried this again, and it lost track of the board state by move 9.
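One way to make this failure objective rather than anecdotal: keep the real board state in a chess library and flag the first move the model proposes that isn't legal. A minimal sketch, assuming python-chess is installed; `ask_llm_for_move` is a hypothetical placeholder for whatever model API you use.

```python
import chess  # pip install python-chess

def ask_llm_for_move(move_history):
    """Hypothetical placeholder: send the move list so far to an LLM and
    return its reply in SAN, e.g. 'Nf3'. Swap in your actual API call."""
    raise NotImplementedError

def play_until_illegal(max_moves=40):
    board = chess.Board()
    history = []
    while not board.is_game_over() and len(history) < max_moves:
        san = ask_llm_for_move(history)
        try:
            board.push_san(san)  # raises ValueError if the move is illegal on the real board
        except ValueError:
            print(f"Lost track of the board at move {len(history) // 2 + 1}: {san}")
            return board
        history.append(san)
    return board
```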
We could say that the whole argument for LLMs learning about the world is that they have to understand it as a side effect of modeling the distribution of text.
LLMs are limited by their text inputs: colors are just numbers to them, etc.
Ideally, you would want to quantify "how much of the world LLMs model."
“a fundamentally incorrect approach to a problem can be taken very far in practice with sufficient engineering effort.”
Take:
LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.
LLMs will never manage to deal with large code bases “autonomously”, because they would need to have a model of the program, and they don’t even learn to track chess pieces having read everything there is to read about chess.
LLMs will never reliably know what they don’t know, or stop making things up.
LLMs will always be able to teach a student a complex (standard) curriculum, answer an expert’s question with a useful (known) insight, and yet fail at basic (novel) questions on the same subject, all at the same time.
LLM-style language processing is definitely a part of how human intelligence works — and how human stupidity works.
The story of Yoshi Kiloyuki is a shōnen.
On EVE Online.
Deleting lines of code for optimisation and better maintainability.