Grazing the Web
Caveat lector: unsubstantiated claims and missing citations lurk ahead. I encourage you to write me an email, and tell me what you think.
Grazing is similar to browsing, but the trees are shorter. Here, I'll try to explain. Let's talk about the Web that is, the Web that was, and the Web that can be. And let me be clear: I emphatically do not mean crypto, blockchain, or any other manifestation of the Web3 graft.
The Web that we have right now is a curious evolution of the Web of the noughties. Back then, phones had buttons, chromium was a transition metal, and the Web was rounding down to 1.0. Then a revolution happened and the WWW became a more participatory experience. People started to post, comment, even upload a jagged 240p clip from their Nokia. In short, creativity blossomed.
In the years that followed, the Web grew immensely. Browsers traded speed for complexity, Web standards ballooned, and ECMAScript became the most popular language on the planet. Feature-rich and JIT-reliant single-page applications filled the vacuum left by Flash. The Web became an advertisement-clad sumptuous festival of colour and sound, in particular for those who could afford the bandwidth to download the assets having uploaded the cookies.
Two Paths Forward
Past performance isn't always an indicator of future returns, but the trend is worrying. I don't think the Web will get smaller before it gets larger, and I don't feel I'm an isolated case. Pundits have been raising their eyebrows, asking questions, and proposing solutions for years now. Maciej Cegłowski, Dave Copeland, this guy, the small web and smolweb movements are only a few of those who have proposed solutions. Others, such as Gopher or Gemini, went a step further and declared a schism, abandoning everything they saw as superfluous. Let's start with the latter.
Gemini is a good example of a hard cut. It is clear what its goals are, and the resulting protocol design implements them with no remorse. The designers were not looking for a middle ground. Interoperability with the World Wide Web is not part of the feature set. The TLS overhead is much higher than in HTTP/2. The protocol is not designed to be versioned or extended, thus locking its users into the current and final version. Those decisions might be jarring from the HTTP standpoint, but make perfect sense if you consider them in the light of the project's goals.
A fresh TLS handshake on every request is great for privacy, but at the cost of accessibility. If you happen to be in a place with poor cell coverage, e.g. a coffee shop in downtown Berlin, you can wait more than ein Augenblick for the next Gemini document to load.
I value the thoughtfulness the designers of Gemini put into their protocol. The proof is in the pudding; Gemini clients have little to do, and consequently are small and stunningly fast. They do one thing and do it well. Try out, say, Lagrange, and see for yourself how fast the Web could have been.
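To make concrete just how little a Gemini client has to do, here is a sketch of a parser for text/gemini ("gemtext"), the markup Gemini serves. The grammar is line-oriented: one pass over the document, no nesting, and no state beyond a single "inside a preformatted block" flag. (This is an illustrative sketch of the line types defined by the Gemini specification, not a complete client.)

```python
# Minimal gemtext parser: each line is classified by its prefix.
# The only state is a flag toggled by ``` lines (preformatted blocks).

def parse_gemtext(source: str):
    """Yield (line_type, content) pairs for each line of a gemtext document."""
    preformatted = False
    for line in source.splitlines():
        if line.startswith("```"):
            preformatted = not preformatted  # toggle verbatim mode
            continue
        if preformatted:
            yield ("pre", line)
        elif line.startswith("=>"):
            # Link line: "=> URL [optional label]"
            parts = line[2:].strip().split(maxsplit=1)
            url = parts[0] if parts else ""
            label = parts[1] if len(parts) > 1 else url
            yield ("link", (url, label))
        elif line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            yield ("heading", (min(level, 3), line.lstrip("#").strip()))
        elif line.startswith("* "):
            yield ("item", line[2:])
        elif line.startswith(">"):
            yield ("quote", line[1:].strip())
        else:
            yield ("text", line)
```

That's essentially the whole rendering problem: a handful of prefixes and a flag. Compare that with parsing HTML, CSS, and ECMAScript, and the speed of clients like Lagrange stops being a mystery.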
Despite all my praise, decisions such as the hard separation from the WWW are a hard pill for me to swallow. I would like a less restrictive solution, one that is backwards compatible with the mainstream Web.
Having rejected the clean cut solution, let's take a look at the other path forward. It is less draconian, but likely as controversial: keep using the Web, but use less of it. Pundits have encouraged web developers to restrict themselves to a subset of what modern browsers enable. Less scripting, better compression, focus on textual content, testing on screen readers. Those are all great ideas, but what if, instead of applying the filter on the producer side, we limited the consuming endpoint?
We've been doing it for a while now. We've been extending our web browsers with ad blocklists, third-party cookie blockers, noscript plugins, &c. We've actively reduced the feature set of our user agents to make the modern web palatable. And that's fantastic; that's user agency.
You may decry what I'm going for here as backwards, but consider one core feature of every modern web browser: the reader mode. Its purpose is to extricate the content the reader is interested in from all the irrelevant nonsense, and serve it in a clean, minimalist form. If a web page conveying textual content does not display properly in the reader mode, the fault might not lie with the reader mode. The reader mode is firmly in the future, and it is only getting started.
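The core idea behind reader modes can be sketched in a few dozen lines. The sketch below is deliberately naive: it keeps text found inside content-bearing elements and drops everything nested in tags that rarely carry the main content. Real readability algorithms score nodes by text and link density; the tag sets here are illustrative assumptions, not anyone's actual heuristics.

```python
# Naive reader-mode extractor: keep text inside paragraph, heading, and
# list-item elements, and ignore anything nested in "noise" containers.
from html.parser import HTMLParser

NOISE = {"script", "style", "nav", "aside", "footer", "header", "form"}
CONTENT = {"p", "h1", "h2", "h3", "h4", "h5", "h6", "li"}

class ReaderMode(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noise_depth = 0    # how many noise elements we are nested in
        self.capturing = False  # currently inside a content element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in NOISE:
            self.noise_depth += 1
        elif tag in CONTENT and self.noise_depth == 0:
            self.capturing = True

    def handle_endtag(self, tag):
        if tag in NOISE and self.noise_depth > 0:
            self.noise_depth -= 1
        elif tag in CONTENT:
            self.capturing = False

    def handle_data(self, data):
        if self.capturing and data.strip():
            self.chunks.append(data.strip())

def extract(html: str) -> str:
    parser = ReaderMode()
    parser.feed(html)
    return "\n".join(parser.chunks)
```

Squint, and this is a grazer in embryo: the page's substance survives, the chrome does not.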
What's the right point to apply force, then? The browser extension marketplaces are awash with plugins that extend the user agent by reducing its functionality. There's no unfilled niche there. Should we instead start at a lower level? Instead of extending a browser, build one?
Alas, building a web browser has been getting progressively more difficult and expensive. Web standards have continued to grow. The adherence to backwards compatibility—by all means a praiseworthy objective—has prevented new web technologies from obsoleting old ones. As the web stack continues to accrue complexity, building a complete browser anew becomes a hard problem.
The browser market has been dominated by one player from the advertisement industry. There are a couple of other companies pitching their own alternatives, which under the hood are based on the same engine. Tellingly, large corporations that used to build their own engines have given up and switched to their competitor's stack. Indeed, building a browser is prohibitively expensive.
I have to point out that there are new small entrants. Ladybird is a praiseworthy effort by the author of SerenityOS to create a new browser from scratch. I applaud the author and wish them success. Nevertheless, I think that's not an undertaking for which I am ready.
I don't think it's a controversial statement to admit that a modern, comprehensive web browser is as complex as an operating system. Management of windows, tab groups, individual tabs, and resource-hungry single-page applications running in them; keeping track of a multitude of network connections; gracefully handling a plethora of possible timeouts and failure scenarios that the system permits; and, while all those concurrent, networked cogwheels are spinning, keeping users' private data private.
I've arrived at the conclusion that the only way forward is to trim. Take the curatorial scissors, and cut out all the components of the modern web that are not immediately conducive to having—in essence—a nice reader mode. Consider my earlier experiment, put together in three days of one hot summer week. If you squint, it does 80% of the job, as long as you permit cutting 90% of the features.
Cut, Cut, Cut
How much of a modern web browser will we need to trim? An awful lot. CSS is dauntingly complex, and we'll need a restricted subset to achieve our goals. ECMAScript, with its reliance on browser APIs, will likely have to be left out wholly. As long as your website uses a moderate amount of scripting and adheres to the principles of progressive enhancement, I'd want the grazer to render it without major issues.
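One way to carve out a restricted CSS subset on the consumer side is to parse declarations and keep only an allowlist of properties the grazer is prepared to honour. The allowlist below is an illustrative guess, not a proposal for the final set, and the parsing is simplistic on purpose.

```python
# Allowlist-based CSS declaration filter: everything the grazer does not
# explicitly understand is dropped, rather than everything dangerous being
# enumerated and blocked.

ALLOWED = {
    "color", "background-color", "font-style", "font-weight",
    "text-align", "margin", "padding", "max-width", "line-height",
}

def filter_declarations(block: str) -> str:
    """Drop every 'property: value' pair whose property is not allowlisted."""
    kept = []
    for declaration in block.split(";"):
        if ":" not in declaration:
            continue
        prop, value = declaration.split(":", 1)
        if prop.strip().lower() in ALLOWED:
            kept.append(f"{prop.strip()}: {value.strip()}")
    return "; ".join(kept)
```

A real implementation would need a proper tokenizer (this sketch mishandles semicolons inside strings and url() values), but the design choice is the point: an allowlist fails closed, so new CSS features are inert in the grazer until deliberately vetted in.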
All this trimming will make the resulting application less usable for the public, but lower the barrier for prototyping. Once the skeleton is walking, we can vet further features, and add them as we see fit. It's always easier to add than to take away.
One aspect of the modern web I do want to preserve is its contributory character. I'm not a passive consumer. I can write journal entries, fill forms, and upload videos. This is what enables creativity and participation. This decision brings with it a lot of privacy and security challenges that I'm confident are outside my expertise. Luckily, this won't be a solo project.
Will this be for everyone? No, not all will be happy with our choices and we cannot please everybody. I reckon that 80% of the WWW traffic is three or four companies in the advertisement industry vying for your attention. We will likely miss out on those and that's alright. There is a lot of room in the remaining sliver of the Web.
Note that eschewing complex features of the modern web will help us achieve other qualities. The less styling and scripting freedom we permit, the more reliable our screen reading, control over colours and contrast, and other accessibility features will be. Downloading fewer assets and running less code will enable the application to run on older devices, use fewer resources, and save your battery life.
Before you declare the resulting application unfit for purpose because of its lack of scripting, consider how much of the modern web actually necessitates ECMAScript. How much do we, web developers, offload to a Turing-complete client side that could have been achieved perfectly well with pure HTML and a touch of styling on our back end, serverless or otherwise? I do not believe the lack of scripting support in a web grazer is a disservice to its users. All progress requires sacrifice, and I am gladly sacrificing advertisers.
The European Union has recently been a harbour for all projects private and digital. That applies to products, their adoption, and the relevant legislation. Consider the Matrix protocol, implemented by Germany and France for their defence and health care services. Another example would be Mastodon, hailing from Berlin and readily picked up by various European institutions as their publishing platform. The legislation provides guardrails: the GDPR and the more recent DMA protect online citizens.
Moreover, individual members of the supranational organisation put their money where their mouth is. The Prototype Fund, a German non-profit, and NLnet, its Dutch counterpart, sponsor projects that benefit the modern information society.
The web grazer feels to me like a natural extension of European institutions' long-term goals. It enables citizens to engage with the online world in a safe manner that protects their rights and doesn't siphon their data off to overseas GPU clusters in order to build advertisement profiles or large language models.
Broadband internet access has been a basic right in Finland for over a decade now. It is not an overstep to declare access to a browser, or a grazer, such a right as well.
Honestly, if Brussels is willing to fund me I’ll license the project under EUPL, and call it Sprout.