Recently I started work on redoing all my dotfiles in preparation for setting up my new desktop computer. I wanted everything to be slick, streamlined, automated, and simple. Things are so far looking much better than they were before: I have documentation on how to get things set up, and I have Ansible playbooks for setting things up in a repeatable way. One frustrating thing, though, is that setting up Ansible requires having some kind of Python installation on the host. Why is this a problem, you ask? Well, first, how do I know what version of Python is on the host? How do I even know there's a Python installation on the host at all? Even if a version of Python is available that works with the latest version of Ansible, I have no guarantee that there are no shared dependencies with anything else installed on the system that will break because of a version mismatch. Installing something globally leaves so many open variables that I would rather not deal with.

The solution for simple binaries is to just create a directory in the user's home directory and add it to their PATH. That way they can chuck whatever binaries they want in there and run them without root. Things get more complicated, though, with programs written in languages like Python that want to be installed in a single location alongside everything else. For Python I've adopted pipsi (Pip Script Installer), which creates a virtual environment for each script that is installed. That way if I install Ansible, for example, it lives in its own environment and won't have conflicting dependencies with any other installed package. This works, but we now have two different ways of installing packages without root, and we've only looked at two kinds of programs. At this rate there's every reason to think a normal end-user system will require many more strategies.

This got me wondering whether there's an approach similar to Docker. With Docker, every container runs in isolation with its own file system. This keeps everything safe from contamination by other installed programs, as well as giving you some security benefits. I do actually run some commands in Docker containers rather than installing them, but it often means very long commands to do something simple. Let's say, for example, that I wanted to run some command named foo and it requires access to the current directory. If I wanted to run that, I would need a command something like the following (the image name is just a placeholder):
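```sh
# Sketch only: "foo-image" stands in for whatever image actually provides foo
docker container run --rm -it \
    --volume "$(pwd)":/workdir \
    --workdir /workdir \
    foo-image foo
```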

That's a lot just to replace foo. Now, you could create an alias to hide most of that boilerplate (there's a rough sketch of one at the end of this post), but it would also mean losing some of the flexibility that running plain docker container run gives you (like if you wanted to change the mount point or mess with networking). The other problem with using Docker for this is that you're going to have a world of trouble trying to run a graphical application. Docker is normally only used for command-line applications, but if you wanted to run your browser in Docker you would need to employ something like x11docker, which wires your container up to an X session. This is relatively complicated and gets even more complicated if you have proprietary drivers (like Nvidia…). So clearly using Docker as a solution is out.

Thankfully there are some new contenders in the application distribution space (new being relative). Namely, there's Snap, Flatpak, and AppImage. Each of these provides most of what I want, but in very different ways and with very different focuses. I still need to do a general appraisal of each, but at least this gives me something of a way forward. I plan on giving a general rundown of each, and how I picked what I want to work with going forward, in another blog post.
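For reference, here's a rough sketch of the kind of wrapper mentioned above. The image name is a placeholder, and it's written as a shell function rather than a literal alias so that extra arguments pass straight through to the command:

```sh
# Sketch only: wraps the long docker invocation; "foo-image" is a placeholder
foo() {
    docker container run --rm -it \
        --volume "$(pwd)":/workdir \
        --workdir /workdir \
        foo-image foo "$@"
}
```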