I created a Cargo subcommand called cargo-wizard that simplifies the configuration of Cargo projects for maximum runtime performance, fastest compilation time or minimal binary size.
Avoid a round trip for TCP's slow-start algorithm. Depending on the internet connection, this can save hundreds of milliseconds.
Some tricks the author uses and explains:
- using `<link rel="preload" as="ROLE" href="URL">` to download resources early
- lazy-loading the search index (and this feature), since it is not widely used and costs bandwidth (1.8 MB); see the sketch below
- reducing the cost of webfonts: shrinking the fonts' character sets (especially for titles) and combining CSS files

The author reduced the page load time from 11 seconds to 4 seconds with these tricks.
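As a sketch of the lazy-loading trick (the module name and export are assumptions, not from the article), the search index can be pulled in with a dynamic import() the first time the user focuses the search box:

```ts
// Hypothetical setup: the heavy search index lives in search-index.ts and
// is only fetched the first time the user focuses the search box.
const searchBox = document.querySelector<HTMLInputElement>("#search");

searchBox?.addEventListener(
  "focus",
  async () => {
    // import() makes the bundler emit a separate chunk, so the index
    // costs no bandwidth on the initial page load.
    const { searchIndex } = await import("./search-index");
    console.log(`search index loaded: ${searchIndex.length} entries`);
  },
  { once: true }, // download it only once
);
```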
There’s a standard way to make part of a page not visible until the user requests it: the `<details>` tag. You may have seen this on big code examples in some of my other posts.
All we did to get this speedup was implement the Serialize trait with a one-line body for the serialize method!
But implementing the trait directly means you can no longer derive it with the #[derive(Serialize)] macro.
Instead, you should implement it on wrapper types that act like formatters.
Also for efficiency: format_args! doesn't allocate or even apply the formatting! It only returns Arguments, a formatter that borrows its arguments.
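The note above is about Rust's serde, but the wrapper-type idea carries over to other serializers. As a loose TypeScript analogue (Rect and RectAsString are invented here), JSON.stringify's toJSON hook plays the role of the one-line serialize body:

```ts
// Not serde: a loose TypeScript analogue of the wrapper-type idea.
// Rect and RectAsString are invented for illustration.
interface Rect {
  width: number;
  height: number;
}

// A wrapper that "acts like a formatter": it holds no data of its own
// and only decides how the borrowed value is rendered when serialized.
class RectAsString {
  constructor(private readonly inner: Rect) {}
  // JSON.stringify calls toJSON, playing the role of serde's serialize.
  toJSON(): string {
    return `${this.inner.width}x${this.inner.height}`;
  }
}

console.log(JSON.stringify({ size: new RectAsString({ width: 16, height: 9 }) }));
// => {"size":"16x9"}
```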
Even after applying various default filters and providing a GUI to search and filter the remarks, there is still a lot of data to go through.
Understanding the remarks is quite challenging. What even is a FastISelFailure or SpillReloadCopies? How can I change my Rust code to resolve these remarks? Hard to say if you’re not an LLVM expert.
How class encapsulation or closures can reduce the bundle size
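A minimal sketch of the idea (Counter and makeCounter are invented examples): a minifier cannot safely rename a property access like this.count, but it can freely mangle a closure's local variables:

```ts
// With a class, `this.count` is a property access; a minifier cannot
// safely rename the property, so the name survives in the bundle.
class Counter {
  count = 0;
  increment(): number {
    return ++this.count;
  }
}

// With a closure, `count` is a plain local variable, so the minifier is
// free to mangle it down to a single letter.
function makeCounter(): () => number {
  let count = 0;
  return () => ++count;
}

const tick = makeCounter();
console.log(new Counter().increment(), tick()); // 1 1
```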
The plan is to import the dependencies from package.json and modify the Rollup output chunks so that each dependency is split out of the vendor bundle, keeping a vendor array for the dependencies needed at boot.
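A sketch of that plan with Rollup's manualChunks output option (the boot-dependency names and paths are assumptions):

```ts
// rollup.config.ts: a sketch, not the article's exact config.
import pkg from "./package.json"; // requires "resolveJsonModule" in tsconfig

const bootDeps = ["react", "react-dom"]; // hypothetical boot dependencies
const deps = Object.keys(pkg.dependencies ?? {});

export default {
  input: "src/main.ts",
  output: {
    dir: "dist",
    manualChunks(id: string) {
      if (!id.includes("node_modules")) return;
      const dep = deps.find((d) => id.includes(`/node_modules/${d}/`));
      if (!dep) return;
      if (bootDeps.includes(dep)) return "vendor"; // boot deps stay together
      return dep; // every other dependency gets its own chunk
    },
  },
};
```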
The 512KB Club is a collection of performance-focused web pages from across the Internet. To qualify, your website must satisfy both of the following requirements:
- It must be an actual site that contains a reasonable amount of information, not just a couple of links on a page (more info here).
- Your total UNCOMPRESSED web resources must not exceed 512KB.
Kafka achieves high throughput with sequential writes and reads.
Kafka can move a lot of data thanks to the zero-copy read principle.
Before (the traditional read path):
- Read from disk into the OS buffer (page cache)
- Copy the data from the OS buffer to the application buffer
- Copy the data back to the socket buffer
- Copy the data from the socket buffer to the Network Interface Card (NIC) buffer and send it
With the zero-copy read principle:
- Read from disk into the OS buffer
- Copy directly to the NIC buffer (the CPU is not involved)
An example of optimisation:
- SSR and Jamstack
- Active Memory Caching: in summary, if you want to increase the performance of your application, you can use server caches to speed up your APIs, but if you want to persist your app state, you should use the local storage cache.
- Data Event Sourcing: useful for real-time applications; connections are made with WebSockets (see the sketch below).
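A minimal sketch of the client side, assuming a made-up endpoint and event shape:

```ts
// Sketch: applying data events pushed over a WebSocket (endpoint and
// event shape are assumptions, not from the article).
type DataEvent = { type: string; payload: unknown; at: number };

const socket = new WebSocket("wss://example.com/events");

socket.addEventListener("message", (msg: MessageEvent<string>) => {
  const event: DataEvent = JSON.parse(msg.data);
  // Re-apply each event to the client state instead of refetching the
  // whole resource; that is the essence of event sourcing.
  console.log(`apply ${event.type} at ${event.at}`, event.payload);
});
```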
4.a Prefetching
Pros: Prefetching waits until the browser’s network is idle, and stops as soon as you trigger usage, e.g. by clicking a link or firing a lazy-loading function.
Pros: Prefetching caches data within the browser, making page transitions faster when navigating to a prefetched link.
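For illustration, a prefetch hint can also be injected from script (the URL below is made up); the browser fetches it at idle priority and keeps it in cache:

```ts
// Sketch: inject a <link rel="prefetch"> hint for a likely next page.
function prefetch(url: string): void {
  const link = document.createElement("link");
  link.rel = "prefetch";
  link.href = url;
  document.head.appendChild(link);
}

prefetch("/next-page.html"); // warm the cache for a probable navigation
```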
4.b Lazy Loading
Lazy loading only helps you delay downloading resources; it doesn’t make the resources themselves smaller or cheaper to serve.
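A common lazy-loading sketch, assuming images marked with a data-src attribute (the attribute name is a convention, not from the article); the download starts only when the image scrolls into view:

```ts
// Sketch: lazy-load images with IntersectionObserver.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!; // start the download only when visible
    obs.unobserve(img); // each image only needs to load once
  }
});

document
  .querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => observer.observe(img));
```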
- Resumability
Essentially, Resumability lets the server do the heavy lifting, serialize the resulting state, and hand the client only a minimal amount of JavaScript to execute.