A project meant to curate web content online. By now, it is only an archive.
The Open Directory Project's goal is to produce the most comprehensive directory of the web, by relying on a vast army of volunteer editors.
olduse.net was an interactive art installation conceived and implemented by Joey Hess that ran from 2011 to 2021.
olduse.net posted the first 10 years of archived Usenet articles to a news server, replaying Usenet as it had happened 30 years earlier. It also had a web interface with an interactive news reader, letting you access the news server via the web instead of using NNTP.
The project is available at https://article.olduse.net/
The current version publishes news from 30 years ago, and some instances run with a delay of 40 to 45 years.
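The replay idea is simple to sketch: an archived article becomes eligible for reposting once its original date, shifted by the replay offset, has passed. A minimal Python sketch of that idea (not Joey Hess's actual implementation, which fed a real news server):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

REPLAY_OFFSET_YEARS = 30  # olduse.net replayed Usenet 30 years after the fact


def repost_time(original_date_header: str) -> datetime:
    """When an archived article should reappear on the replay server.

    `original_date_header` is the article's original Date: header,
    e.g. "Mon, 1 Jun 1981 12:00:00 GMT".
    """
    original = parsedate_to_datetime(original_date_header)
    # Naive year arithmetic for the sketch; a real replayer would also have
    # to deal with Feb 29 and articles whose headers lack a time zone.
    return original.replace(year=original.year + REPLAY_OFFSET_YEARS)


def due_for_reposting(original_date_header: str) -> bool:
    """True once the shifted date has passed and the article can be posted."""
    return datetime.now(timezone.utc) >= repost_time(original_date_header)
```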
Starts with raw HTML, then provides more features with JS.
Visualizing all ISBNs in one 1000x800 px picture. Each pixel represents 2,500 ISBNs.
A pixel gets greener the more of its ISBNs have a book file available, and redder the more of its ISBNs have been issued without a file being available.
An image per data source is also available (Google Books, ISBNdb, Russian State Library, ...).
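A rough sketch of how such a picture can be rendered (hypothetical code, not the project's; `available_counts` and `issued_counts` are assumed per-pixel tallies precomputed from the data sources above):

```python
from PIL import Image

WIDTH, HEIGHT = 1000, 800
ISBNS_PER_PIXEL = 2_500  # 1000 * 800 * 2500 = ~2 billion, the 978/979 ISBN-13 space


def isbn13_to_pixel(isbn_index: int) -> tuple[int, int]:
    """Map an ISBN's position in the 978/979 space to an (x, y) pixel."""
    bucket = isbn_index // ISBNS_PER_PIXEL
    return bucket % WIDTH, bucket // WIDTH


def render(available_counts: list[int], issued_counts: list[int]) -> Image.Image:
    """Colour each pixel by the ratio of files available to ISBNs issued."""
    img = Image.new("RGB", (WIDTH, HEIGHT), "black")
    px = img.load()
    for i, (avail, issued) in enumerate(zip(available_counts, issued_counts)):
        if issued == 0:
            continue  # no ISBN issued in this 2,500-ISBN bucket: stays black
        green = int(255 * avail / issued)            # greener: more files available
        red = int(255 * (issued - avail) / issued)   # redder: issued, but no file
        px[i % WIDTH, i // WIDTH] = (red, green, 0)
    return img
```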
- CLI and UI
- two-factor authentication, authenticated encryption and ... file encryption
- multiple-file management
- open source
- cross-platform
- export of tasks from the GUI as CLI scripts
Adactio and Alex Chan, both using static websites for tiny archives, are deliberately going low-scale and low-tech.
Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.
Paperwork, documents created, screenshots taken, web pages bookmarked, video and audio files.
Each gets a website.
These websites aren’t complicated – they’re just meant to be a slightly nicer way of browsing files than I get in the macOS Finder.
Each collection is a folder on my local disk, and the website is one or more HTML files in the root of that folder. To use the website, I open the HTML files in my web browser.
I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks. I’m writing everything by hand, which is very manageable for small projects. Each website is a few hundred lines of code at most.
These are created and curated by hand.
I think this could be a powerful idea for digital preservation, as a way to describe born-digital archives.
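To make the setup concrete (illustrative only, with hypothetical paths; the actual HTML files are written by hand): each collection folder carries its own HTML file at the root, which the browser reads straight from disk.

```python
import webbrowser
from pathlib import Path

# Hypothetical collection folders; in practice each one is just a directory
# on the local disk with a hand-written HTML file at its root.
collections = [
    Path("~/archives/screenshots").expanduser(),
    Path("~/archives/bookmarks").expanduser(),
]

for collection in collections:
    index = collection / "index.html"  # the whole "website" for that collection
    if index.exists():
        # No web server involved: the browser opens the file directly.
        webbrowser.open(index.as_uri())
```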
In addition to the content of web pages, it's important to record how this digitized content is constructed and served. The HTTP Archive provides this record. It is a permanent repository of web performance information such as size of pages, failed requests, and technologies utilized. This performance information allows us to see trends in how the Web is built and provides a common data set from which to conduct web performance research.
They look relevant. I don't know how complex they are, though.
The files can be put in a directory (automatically created) with the -e option. So we no longer need dedicated archive commands (tar, zip, 7z, ...)?
Perma.cc helps scholars, journals, courts, and others create permanent records of the web sources they cite.
An internet archive project dedicated to self-hosting 👍