386 private links
const productIds = [123, 456, undefined, 789]
const products = productIds
  .map(getProduct)
  .filter((item): item is Product => item !== undefined)

A one-liner in JavaScript? Something with regex :)
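A self-contained version of the snippet, as a sketch — `Product`, `getProduct`, and the catalog data are stand-ins invented here, not from the original:

```typescript
// Stand-in types and data so the snippet runs on its own.
interface Product { id: number; name: string }

const catalog: Record<number, Product> = {
  123: { id: 123, name: "Keyboard" },
  456: { id: 456, name: "Mouse" },
  789: { id: 789, name: "Monitor" },
};

const getProduct = (id: number | undefined): Product | undefined =>
  id === undefined ? undefined : catalog[id];

const productIds = [123, 456, undefined, 789];

// The type predicate narrows (Product | undefined)[] down to Product[].
const products = productIds
  .map(getProduct)
  .filter((item): item is Product => item !== undefined);

// And a genuine one-liner, no regex needed: flatMap folds the filter into the map.
const products2 = productIds.flatMap((id) => (id === undefined ? [] : [catalog[id]]));

console.log(products.length); // 3
```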
I'm Twitter and I demand my card for every shareable link.
Might as well make it a standard!
It’s crazy to think how much bandwidth is being used by metadata tags.
and how many of them are actually used by the client...
This only holds on the best runs, though. Interesting to keep in mind.
One project to unify them all
Often forgotten, but it might be useful
A cool project for migrating a lot of mixins
When using auto-fill, the items will grow once there is no space to place empty tracks.
When using auto-fit, the items will grow to fill the remaining space because all the empty tracks will be collapsed to 0px.
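The difference is easiest to see side by side — a minimal sketch, with made-up class names and an arbitrary 150px minimum:

```css
/* auto-fill: empty tracks are kept, so leftover space is shared
   with tracks that may hold no items at all. */
.grid-fill {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
}

/* auto-fit: empty tracks collapse to 0px, so the filled tracks
   grow to absorb the remaining space. */
.grid-fit {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
}
```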
More recently, the idea to treat attribute selectors on par with classes as first-class citizens has been proposed more widely. We’re no longer talking about edge cases, but challenging the very defaultness of classes, all while not giving up that sense of structure that many of us look for in CSS naming conventions.
👍
And think of ARIA selectors too! This promotes an a11y-first mindset — if there is no attribute or pseudo selector available to represent the state we wish to style, should we add one?
This is the principle that class selectors violate: an element's classes are never guaranteed to reflect its state.
Using data attributes instead seems a good idea to avoid impossible states!
And there's a reason why it looks attractive — it's mirroring the APIs we're used to seeing in design systems and component libraries, but bringing it to vanilla HTML and CSS.
Indeed, it's a small step from data attribute selectors to custom pseudo selectors or prop-based selectors when using Web Components. Styling based on ARIA attributes encourages more accessible markup, and styling based on custom data attributes makes it more robust and readable — a better experience for users and developers alike.
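A quick sketch of what this looks like in practice — the selectors and attribute names here are illustrative, not from the article:

```css
/* State styling driven by ARIA attributes instead of classes. */
button[aria-pressed="true"] {
  background: #333;
  color: #fff;
}

[aria-expanded="false"] + .panel {
  display: none;
}

/* Custom data attributes for states ARIA doesn't cover. */
.card[data-loading="true"] {
  opacity: 0.5;
  pointer-events: none;
}
```

If no such attribute exists for a state you style, that is the nudge to add one — the markup gets more accessible for free.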
Try the easier solutions first:
- a documented API
- an API used by the web service itself
- RSS feeds, websockets
  - RSS feeds tend to be very useful for anything that looks even remotely like a blog.
- Parsing the HTML
I had always used option 1 or 4 :u
You've just found an API on a website and want to use it in your code; everything looks right, but when you run your request you get an error.
90% of the time, it's a User-Agent problem.
To avoid overloading a site:
Google Cache is a Google tool that keeps a cached version of a website. To use it, simply append the link you want to scrape to the URL below; you don't interact with the site directly, but with Google Cache.
https://webcache.googleusercontent.com/search?ie=UTF-8&q=cache:
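A sketch of both tips together — the target URL and the User-Agent string are just examples, and the actual network call is left commented out:

```typescript
// Build a Google Cache URL by appending the target after "cache:".
const CACHE_PREFIX =
  "https://webcache.googleusercontent.com/search?ie=UTF-8&q=cache:";

const buildCacheUrl = (target: string): string => CACHE_PREFIX + target;

// Many sites reject a library's default User-Agent; a browser-like
// value (example below) usually gets through.
const headers = {
  "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
};

const url = buildCacheUrl("https://example.com/page");
console.log(url);

// To actually fetch it (Node 18+ or browsers):
// const res = await fetch(url, { headers });
// const html = await res.text();
```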
How they solved the challenges that come with this feature.
Good practices !
✅ on my personal projects
- Documentation in the same repo as the code ✅
- Mechanisms for creating test data
- Rock solid database migrations
- Templates for new projects and components
- Automated code formatting ✅
- Tested, automated process for new development environments ✅
- Automated preview environments ✅
As a result, to avoid downtime you need to design every schema change with this in mind. The process needs to be:
- Design a new schema change that can be applied without changing the application code that uses it.
- Ship that change to production, upgrading your database while keeping the old code working.
- Now ship new application code that uses the new schema.
- Ship a new schema change that cleans up any remaining work—dropping columns that are no longer used, for example.
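For instance, replacing a `name` column with `full_name` might play out like this — a sketch, with invented table and column names:

```sql
-- Step 1: additive change only; old application code keeps working.
ALTER TABLE users ADD COLUMN full_name TEXT;

-- Step 2: ship to production and backfill while both columns coexist.
UPDATE users SET full_name = name WHERE full_name IS NULL;

-- Step 3: deploy application code that reads and writes full_name.

-- Step 4: once nothing references the old column, clean up.
ALTER TABLE users DROP COLUMN name;
```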
Integrate a code sandbox with live editing directly in pages. I'm totally for it! :D
Here is a guide
The computedEager utility has optimizations over computed in some cases:
- Use computedEager when you have a simple operation with a rarely changing return value – often a boolean.
- Stick to computed when you have a complex calculation going on, which can actually profit from caching and lazy evaluation and should only be (re-)calculated if really necessary.
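The trade-off can be sketched standalone — this is a hypothetical illustration of the two strategies, not VueUse's actual implementation (which hooks into Vue's reactivity system):

```typescript
type Getter<T> = () => T;

// Lazy + cached: recomputes only when read after an invalidation.
// Worth it when the getter is expensive.
function lazyComputed<T>(getter: Getter<T>) {
  let dirty = true;
  let value!: T;
  return {
    invalidate() { dirty = true; },                    // a dependency changed
    get value() {
      if (dirty) { value = getter(); dirty = false; }  // recompute on demand
      return value;
    },
  };
}

// Eager: recomputes immediately on change; reads are a plain field access.
// Good for cheap operations with a rarely changing result, e.g. a boolean.
function eagerComputed<T>(getter: Getter<T>) {
  let value = getter();
  return {
    invalidate() { value = getter(); },
    get value() { return value; },
  };
}

let items: string[] = [];
const isEmpty = eagerComputed(() => items.length === 0);
items = ["a"];
isEmpty.invalidate();
console.log(isEmpty.value); // false
```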
A CSS-only tree view
Nice!
Clever and efficient
But you have all this internal context that makes it self-documenting. Other people don’t have that context.
I agree
Operational information is more global, how the code fits in with the larger program. The code’s behavior can’t tell us that because it isn’t supposed to know anything about the larger program.
How should we document them?
A list of the wrong reasons why self-documenting code is so appealing to people.