Sarah Jamie Lewis's Activity Feed

I've been using Linux as my primary OS for about 20 years. Over that time I've used dozens of different distros, desktop environments, and display servers, compiled countless custom kernels, and even built a few toy kernels of my own.

And over that time I've developed plenty of strong opinions regarding how I want my operating system to be.

(And I think it important to draw attention to the "my" in my operating system, because I seriously expect only a small number of people to really share these views.)

Yesterday, I published a poll on Mastodon in which I outlined two possible futures.

In the first future, I ignore these opinions and simply accept the new Linux status quo. I pick a mainline distro that takes full advantage of tight systemd integration, adopt Wayland as the display protocol, probably run one of those nice and shiny Rust-based compositors, and move on to more important things.

In the second future, I fully embrace the instinct deep inside of me that is deeply dissatisfied with modern operating systems, and direct some of my energy towards that dissatisfaction.

At the time of writing the votes are still coming in, but the distribution is undeniable. The result is, perhaps, obvious in hindsight.

Down the rabbit hole we go...


What is an OS anyway?

The OS as a Package Manager

There is a long-standing philosophy in a few circles that the core of any operating system (or rather the most important aspect of the operating system itself) is the package manager. This philosophy can be seen in Linux distros like Gentoo and NixOS, in addition to being a fairly prominent part of the BSDs.

In these operating systems the primary mode of modification is through the package manager, which takes responsibility for building, optimizing, installing, and uninstalling capabilities - as well as avoiding dependency conflicts.

This is a view of operating systems I quite like: it's a practical view, one rooted in the necessity of managing software distributions. There is a deep academic well that has greatly improved the state of the art over recent decades - and I think there are still exciting avenues to explore.

The OS as a Programming Language

Another long-standing philosophy positions the operating system as a programming language, with the operator taking on the role of programmer, combining the various atoms to produce larger and more complex behaviour. This is the heart of the UNIX philosophy and the core driver behind standards like POSIX.

I love the idealism of this view, the elegance of combining small parts to form a whole, the rejection of monoliths. There is something in this philosophy that appeals strongly to my sensibilities, something deeply tied to when and how I was first introduced to this world.

Programming Languages as Operating Systems

A quick aside to remark that most programming languages are themselves operating systems - with their own concepts of managing dependencies, their own higher-level interfaces and security models - abstracting away the details of the underlying system calls and other OS internals.

It is very easy to spend a lot of time here attempting to define an exact line where a programming language ends and the operating system begins, but for now let us remark that the choice of programming language is not a trivial one and is tightly coupled to how the rest of the operating system is structured (something that we will definitely revisit in later notes).

The OS as a Manager

Another aspect we must consider is the OS as defined by its ability to manage access to resources. Diverse devices require a growing set of drivers to manage, in addition to the infinite configurations that allow them to be useful. On top of that, a growing userspace requires managing which programs get access to which devices, and to help manage this complexity many mainstream services have been developed - all of which need to be configured and managed by the operating system.

Under this model, there is little benefit to atomic programs as many must have a shared understanding of the services on the system in order to get anything done, resulting in a large repository of shared code and build-time inter-dependencies.

Systemd has a lot of supporters and is now a core dependency in most of the more mainstream Linux distros, desktop environments, and various core software. This centralization and standardization of various interfaces and expectations is popular and benefits from a large chunk of FOSS dev-hours.

The OS as Security Enforcer

One last aspect I want to visit is the idea that the OS is defined by security boundaries - userspace and kernel space, users and groups, file permissions, etc. How far an OS should go to enforce boundaries is still a fairly open topic. Android went to great lengths to isolate processes (both by uid and through IPC). Linux's ability to set up sandboxes has greatly improved in recent years thanks to Landlock (shoutout to OpenBSD's pledge, which got there much earlier).
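
As a concrete illustration of that direction, here is a minimal sketch of a Landlock sandbox from userspace, assuming a 5.13+ kernel with Landlock enabled and reasonably recent headers; the path and access flags are purely illustrative:

    /* Minimal illustrative sketch: confine this process so the only handled
     * filesystem access it can perform is reading files beneath /usr. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/landlock.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE |
                                 LANDLOCK_ACCESS_FS_EXECUTE,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);
        if (ruleset_fd < 0) { perror("landlock_create_ruleset"); return 1; }

        /* One rule: read-only access beneath /usr (an arbitrary example path). */
        int parent_fd = open("/usr", O_PATH | O_CLOEXEC);
        struct landlock_path_beneath_attr path_beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE,
            .parent_fd = parent_fd,
        };
        if (parent_fd < 0 ||
            syscall(SYS_landlock_add_rule, ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
                    &path_beneath, 0)) {
            perror("open/landlock_add_rule");
            return 1;
        }

        /* no_new_privs is required before an unprivileged process restricts itself. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { perror("prctl"); return 1; }
        if (syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
            perror("landlock_restrict_self");
            return 1;
        }

        /* From here on, every other handled filesystem access is denied. */
        return 0;
    }

Part of the appeal is that this is a plain, unprivileged syscall interface - no policy files or external daemons involved.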

Security extends further, into how executables are run, how they are linked, how they are built, and into how dependencies are managed. Operating systems that focus on security (like Qubes or Whonix) have leaned heavily on isolation, containerization, and sandboxing - a non-trivial feat, especially in a world where most applications are web browsers in disguise.

The OS as OS

Of course, an OS is all of these things; the mixture and prominence of any particular aspect is what defines the OS.

So what kind of mix do I want?

I am a big fan of tools that do one thing well, and of libraries with few (if any) dependencies - both from a practical perspective and from a security perspective. The sheer complexity of many modern Linux distributions doesn't sit well with me.

I like systems I can grasp and turn around in my head. This pulls me towards smaller systems, because these days my head can only hold so much. I would much rather have many small programs than a few larger ones.

I like the idea of one well-defined IPC mechanism, e.g. a message bus like bus1. As useful as I find directly connecting programs via pipes, it is clear that a more robust mechanism is needed for certain use cases.
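
For contrast, the pipe end of the spectrum really is just a handful of syscalls. A minimal sketch, assuming nothing beyond a POSIX userland with ls and wc on the PATH, of wiring one program's stdout into another's stdin:

    /* Illustrative only: the shell pipeline `ls | wc -l` done by hand. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                 /* writer: ls */
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]); close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);
        }
        if (fork() == 0) {                 /* reader: wc -l */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]); close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }

        close(fds[0]); close(fds[1]);
        while (wait(NULL) > 0) {}          /* reap both children */
        return 0;
    }

It's hard to beat that simplicity, but it only moves an untyped byte stream in one direction between two cooperating processes - which is exactly where a bus starts to earn its keep.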

I'm OK with having to rewrite or port significant software into the system - especially if it avoids greater intrinsic complexity in the system itself. In practice this means that I am considering writing my own desktop environment and accepting that anything with a user interface will need to be ported. This is a direction I've experimented with in the past, and one I am keen to explore further.

I am free to ignore compatibility, and in doing so really consider new security and interaction models. I expect to make heavy use of modern sandboxing and likely some virtualisation.

I want to bring in some of the lessons from modern package and build managers while maintaining strong isolation and independence across packages.

Starting

I have been hacking away on a few pre-prototypes for a little while, and it is clearly time to start bringing them together. I have a barebones kernel and initramfs that gets me a shell, some base utils (for now, sbase and ubase), a compiler (tcc) and a libc (musl). In practice this is enough to get a lot of software up and running, and from this base I've explored in a few different directions (including using Nix packages and an initial custom desktop prototype).
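
For flavour, the bottom of that stack is tiny. A hypothetical sketch (not my actual setup - the mounts and paths are assumptions) of the kind of static /init that gets you from an initramfs to a shell:

    /* Hypothetical /init: mount pseudo-filesystems, then hand PID 1 to a shell.
     * Something like this builds statically against musl, e.g. with tcc. */
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void) {
        /* Assumed layout; adjust to taste. */
        mount("proc", "/proc", "proc", 0, NULL);
        mount("sysfs", "/sys", "sysfs", 0, NULL);
        mount("devtmpfs", "/dev", "devtmpfs", 0, NULL);

        /* Hand over to an interactive shell from the sbase/ubase userland. */
        char *argv[] = { "/bin/sh", NULL };
        execv(argv[0], argv);
        perror("execv");                   /* only reached if exec fails */
        return 1;
    }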

I think the most natural place to head next is introducing a build/dependency manager. I have experimented with quite a few over recent months, and nothing really felt right. So my plan for this coming weekend, and perhaps many weekends to come, is to design and play around with an approach that feels right to me - perhaps I will end up reimplementing 20% of Nix and justifying adopting the rest to myself, or maybe some inspiration will pull me in a slightly different direction. Ideally I want something that feels like a natural combination of smaller atoms making full use of the interfaces offered by the kernel - I have some ideas here, but this note has already run on too long, and I think it best to leave it for another time.

The rabbit hole is deep with plenty of side tunnels. Watch this space.


Posted: by Sarah Jamie Lewis. (1374 words)

Today I am making public a draft paper that documents a new structure I have been investigating for use in decentralized search.

Comments, questions, and critiques welcome.


Posted: by Sarah Jamie Lewis. (27 words)

Last night I was fortunate enough to see one of the most extraordinary, and beautiful, events I've ever seen.

We saw the first hints of it, a muted light in the east, towards the end of nautical twilight. As the sky darkened the band of light grew wider and more noticeable, eventually becoming a blur of magenta and whitish-green.

As the peak of astronomical twilight hit we were treated to an astonishingly bright sky at zenith; the shape and awe it inspired conjured up images of angels breaching through the heavens.

I was surprised by both the sheer scale, and how dynamic it was. The colours were obviously more muted than the long exposures below, but nonetheless there - lighting up the sky and drowning out the stars.

A long exposure photograph of the aurora as seen from British Columbia, Canada. A large number of bright pink and green lights are seen emanating from a radiant point high in the sky. Some bright stars can be seen in the background.

We stared into the sky for hours, watching the storm remnants twist and turn, the colours boosting and fading.

One of the last images I took was the one below, and I'm very happy with how it came out. It shows the tip of Ursa Major/the Big Dipper intersecting with the radiant point.

It was just as majestic when seen with the naked eye.

A long exposure of the aurora as seen from British Columbia, Canada. A bright pink light in the sky with streaks of green emanating from a radiant point. The Big Dipper/Ursa Major constellation can be seen on the right-hand side with its tip towards the radiant point.


Posted: by Sarah Jamie Lewis. (284 words)

I'm somewhat perplexed by the new SecureDrop protocol

Specifically: "The server is “untrusted” in the sense [it] learn[s] nothing about users & messages besides what is inherently observable from its pattern of requests, and it should not have access to sensitive metadata, or sender or receiver information"

Seems like a very weak definition of "untrusted", especially when the two techniques it is compared against explicitly attempt to restrict knowledge derived from access patterns.

Further... doesn't the server's ability to produce arbitrary valid ciphertexts (not really forgeries, as it's an explicit requirement) allow a range of active attacks against recipients?

I'm not entirely sure of the consequences there, but it seems incompatible with the optimized decrypt-fetch message id (as it allows the server to test and trigger).

Removing the optimization effectively brings you back to download-all and trial decryption (with server forgeries there becoming, in effect, noise).

The motivation for private server state is "there isn't enough traffic going through the system to provide a reasonable anonymity set to any observer so we want to minimize observers"

Which is reasonable, but then the server is explicitly not "untrusted" - it can perform all the same statistical attacks...you effectively limit the adversary space to the server.

And if so (and you are unwilling to trust the server) then your risk model becomes that addressed by PIR or OMR.

But instead the protocol explicitly allows the server additional capabilities by granting it participation in generating receiver key material (and bloating the ideal communication cost).

Any optimization you make to reduce that cost grants the server additional information. Either making the server trusted in arbitrary ways or compromising one of the desired properties.

The protocol itself is interesting; involving the server in that way has the nice property of hiding valid ciphertexts from all other parties - I feel like I've seen a flow like it before, somewhere, but nothing immediate comes to mind.

I suspect you could probably hack authentication into that flow somehow, which could have useful applications.

But the protocol doesn't feel like it solves the problem? Or rather, the strengths of the protocol don't nicely map to desired properties.


Posted: by Sarah Jamie Lewis. (343 words)

I had a chance to sit down and read Tor: From the Dark Web to the Future of Privacy by Ben Collier.

I highly recommend it. I think it captures the history beautifully, and it's a nice reminder of how these projects play out over decades.

It can be very easy to get caught up in the day-by-day/week-by-week rush/drama/critiques/effort, and having a history like this puts that nicely in perspective.

Go read it.


Posted: by Sarah Jamie Lewis. (71 words)

A list of research/project ideas that I have no time to pursue fully, but which I would be very interested in helping out/mentoring. If any of these sound interesting then please reach out to me. I may also be able to help find funding for some of these.

1. File Metadata Removal - this is an area that I think needs additional research/experimentation. Existing solutions like exiftool and MAT/2 are great, but they don't quite match the modern reality of when and how we would ideally like to integrate such processes into tooling (e.g. in a chat app file-share flow), and I think they suffer from a limited view of file metadata in light of the growing complexity and diversity of file types and web-based workflows - and of how people realistically want to check/modify/remove metadata.

Further, we now have interesting approaches like Magika that can perhaps be utilized to catch file types and metadata not strictly captured by a given tool.

Realistically I think there are like 4-5 distinct research projects here around UI/UX, requirements and expectations, and new technical work.

2. Reproducible Build Tool - I'd love to find someone interested in helping with/expanding repliqate. I have some ideas about speeding it up further, pushing back the trust, and getting it to the point where it functions as a day-to-day build tool.

3. A better (mostly-static) source code repo/forge - One of my biggest sources of frustration right now is spam in source code repos: specifically issue spam and pull request spam - I think I delete at least 5-10 pieces of AI-generated nonsense a week. It's getting worse, and the moderation tools to catch this are terrible. I would really like a mostly static git repo that also allows moderated public-issue creation and other nice features. I have a prototype but it's been gathering dust.

4. Small Groups (multi)-Project Management - I work on a lot of projects where the primary team is small (either just me, or a small group of 2-4). I have used every project management tool under the sun and found none of them really fit the flow I want. A few notes:

- Kanban is great for restricting WIP, but a tool that manages everything in terms of kanban/boards is terrible for managing longer-term research

- I need a place to put papers/documents/artifacts that can be tied to and referenced by active issues

- I'd like to be able to break down problems in addition to work. Capturing the problem structure is, to me, just as important as tracking the work needed to solve a particular instantiation of it

5+. While I am actively working on projects in the following areas, if any of them sound interesting please also get in touch:

- Decentralized Search

- Formal Requirements Specification

- Evolutionary Fuzzing

- Cwtch / Privacy Preserving Communication


Posted: by Sarah Jamie Lewis. (465 words)

I spent some time improving Saffron, a formal language I've been working on aimed at requirements elicitation and analysis.

Mainly I've been thinking about how to express common "patterns" in the underlying formalism, e.g. "A message has a single author" can be expressed as infer WHERE m:Message a:Actor b:Author MATCH Author(m,a) THEN PROHIBIT Author(m,b).

One of my goals for this week is to document some of these patterns, and perhaps allow them to be expressed as syntactic sugar, e.g. the above might be more nicely written as constrain WHERE m:Message a:Actor EXCLUSIVE Author(m,a)

(An even better syntactic sugar might be to allow the construction of meta-types that compile down to such relationships under the hood, giving a more programmer-friendly representation, e.g. struct Message := { Author Content }, though I am less inclined towards that direction right now as it increases the number of formalisms at the highest level.)


Posted: by Sarah Jamie Lewis. (149 words)

Hello World!

I've been thinking about where microblogging/blogging fits in my life.

I used to write a lot of Twitter threads, but since the implosion and my move to Mastodon I write far fewer than I used to. Part of this is the difference in platforms, part is my own changing priorities.

For a while I maintained a personal blog for longer-term thoughts, but I've found it clashes with how I want to organize my thinking. I often update old articles, redraft papers, rewrite systems, etc., and so a few months ago I started writing this website, in a text editor, with no overarching taxonomy or categorization.

However, this obviously comes with some downsides, which I'm now looking to address.

Inspired by Molly White, I've implemented this Activity Feed: a place for me to microblog, collect thoughts, post links, document website updates, new papers, etc., all in one place, and in a format that I have a bit more control over.

I'll probably expand the little script I wrote to compile this, and publish it once it is in a less hacky state. But it already compiles to a feed, has tag categories, and is nicely integrated with the rest of my little static site.

There is no automated cross-posting; some of the stuff posted here will end up on Mastodon etc., but most will not. This is mostly for me, but if you would like to keep track of the things I am working on, then this is the place.


Posted: by Sarah Jamie Lewis. (248 words)

