387 private links
The places are based on OpenStreetMap objects.
Some good news
And to think that 18 years ago already: https://sebsauvage.net/rhaa/index.php?2007/05/25/10/08/26-le-debut-de-la-fin-de-la-taxe-microsoft-
We can now do without Windows for most uses.
Time management can be useful
Compare a slot machine to vibe coding: the analogy fits.
They are going to use LibreOffice, with plugins or extensions dedicated to defense.
we are doing this so that the federal army, as an organization that is there to keep functioning when everything else is down, continues to have products that work within our sphere of influence
The major difficulty in moving away from Microsoft lies in the associated dependencies: most of our business applications talk to Microsoft tools
Yes, unfortunately. On the other hand, with free-software tools it is possible to develop in-house so that the business applications talk to that free software instead.
All the forms are sent as PDF.
When the experience of clicking a link, waiting for a Javascript-heavy page to load and dismissing a thousand pop-ups has become the norm, it’s hardly surprising that a good many users would rather bypass that experience altogether and are turning to AI and chatbots to do the browsing for them.
The experience of browsing the web could be so much better than it is right now, without the huge social and environmental cost of AI. Perhaps there would be less demand for chatbots if the web itself was less hostile.
550,000 personal storage spaces on monlycée.net, provided by Leviia.
The JSON parsers behave differently in JS, Python, Go and Java.
- Numbers are not precise:
    - `MAX_SAFE_INTEGER` caps exact integers in JS; Twitter had to add an `id_str` field.
    - decimal precision is unreliable (in JS) -> always use dedicated decimal types (Python's `Decimal`, Java's `BigDecimal`, a JavaScript decimal library)
- UTF-8 in JSON allows the same character as a single Unicode code point or as a composed sequence. Use `.normalize("NFC")` on JS strings.
- Object key order is not guaranteed in JSON; for a canonical form, sort keys alphabetically.
- Different languages handle the absence of values (`undefined`, `null` or a missing property) differently.
- No time format is official, so it's always custom: `{"iso_string": "2023-01-15T10:30:00.000Z", "unix_timestamp": 1673780200, "unix_milliseconds": 1673780200000, "date_only": "2023-01-15", "custom_format": "15/01/2023 10:30:00"}`
- Different parsers fail differently on malformed JSON.
The Twitter example is only one of many. Postgres, for instance, stores the format either as JSON (raw text) or as JSONB (normalized).
MongoDB uses an extended JSON format.
Workarounds:
- Use Schema Validation!
- a custom normalisation function (see the sketch after this list)
- Tests! Numeric Precision Tests, Unicode and String Handling, Date and Time Consistency, Error Handling Uniformity, Cryptographic Consistency, Performance and Memory Behavior
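As a sketch of what such a custom normalisation function could look like (my own illustration, not the linked article's code), assuming we normalize strings to NFC, drop nulls and sort keys before comparing, hashing or re-serializing:

```python
# Hypothetical normalisation helper; the exact rules are illustrative choices.
import json
import unicodedata
from decimal import Decimal
from typing import Any

def normalize(value: Any) -> Any:
    """Recursively NFC-normalize strings, drop null fields and sort object keys."""
    if isinstance(value, str):
        return unicodedata.normalize("NFC", value)
    if isinstance(value, dict):
        # Sorted keys give an order-independent, canonical representation.
        return {k: normalize(v) for k, v in sorted(value.items()) if v is not None}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    return value

def canonical_dump(document: Any) -> str:
    """Serialize with sorted keys and compact separators for stable output."""
    return json.dumps(normalize(document), sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False,
                      default=str)  # default=str renders Decimal as a string

print(canonical_dump({"b": None, "a": "Zoe\u0308", "n": Decimal("0.1")}))
# {"a":"Zoë","n":"0.1"}
```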
- Organizations don't use that much data.
Of queries that scan at least 1 MB, the median query scans about 100 MB. The 99.9th percentile query scans about 300 GB.
but 99.9% of real world queries could run on a single large node.
I did the analysis for this post using DuckDB, and it can scan the entire 11 GB Snowflake query sample on my Mac Studio in a few seconds.
When we think about new database architectures, we’re hypnotized by scaling limits. If it can’t handle petabytes, or at least terabytes, it’s not in the conversation. But most applications will never see a terabyte of data, even if they’re successful. We’re using jackhammers to drive finish nails.
As an industry, we’ve become absolutely obsessed with “scale”. Seemingly at the expense of all else, like simplicity, ease of maintenance, and reducing developer cognitive load
Years it takes to reach 10x at a constant annual growth rate:
10% -> ~24y
50% -> ~5.7y
200% -> ~2.1y
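These figures match compounding at a constant annual rate, i.e. years = log(10) / log(1 + rate); a quick check (my own sketch):

```python
import math

def years_to_10x(annual_growth_rate: float) -> float:
    """Years needed to grow 10x at a constant annual growth rate."""
    return math.log(10) / math.log(1 + annual_growth_rate)

for rate in (0.10, 0.50, 2.00):
    print(f"{rate:.0%} per year -> {years_to_10x(rate):.1f} years to 10x")
# 10% per year -> 24.2 years to 10x
# 50% per year -> 5.7 years to 10x
# 200% per year -> 2.1 years to 10x
```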
Scaling is also a luxury problem in many cases: having to deal with it means the business is doing well.
- Hardware is getting really, really good
In the last decade:
SSDs got ~5.6x cheaper, hold 30x more on a single drive, and became 11x faster in sequential reads and 18x faster in random reads.
CPU core counts went up 2.6x, the price per core went down at least 5x, and each Turin core is probably also 2x-2.5x faster.
Distributed systems are also increasingly overkill as hardware keeps improving this fast.