July 21, 2019
By Leo Dorrendorf

Linux Application Packaging

With guest writer Donald Tevault


This is the fifth and final article in the series on Linux isolation techniques. So far we have looked at different ways of protecting, monitoring, and isolating applications and services using the Linux access control systems, kernel features, and various user-level techniques. In this entry, we will look at ways of packaging Linux applications for protection, including Ubuntu Core and "Snappy", Flatpak, and containers - specifically, Docker.

These packaging methods make it easier for software distributors to bundle their software along with any dependencies it might have, which in turn makes it easy for users to install and deploy entire software suites with a single button press. Packaging also provides for more compact installation packages. Our interest in packaging methods, however, comes from the security angle. As we see below, the packaging methods often provide convenient ways to deploy isolation techniques and other security mechanisms, with good defaults coming out of the box. On the other hand, packaging exposes the user installing the software to outdated or malicious software components, if those are included in the package; and if the isolation system is misconfigured, software inside the package (or an attacker exploiting the software) can abuse that to obtain privileged access to the entire device. Let's look into some examples for each of the packaging methods below.

Ubuntu Core and "Snappy"

Ubuntu Core is a stripped-down version of Ubuntu that's specifically targeted at IoT devices. Unlike traditional Ubuntu, Ubuntu Core works exclusively with "Snap" packages instead of the traditional .deb packages (more on that in a moment). Ubuntu Core is versatile and supports many types of embedded machines out of the box - it comes in pre-built versions for many embedded devices and single-board computers, including the Raspberry Pi, Orange Pi, Snapdragon, Artik, Intel NUC, and Intel Joule. You can also install it in a KVM virtual machine or on a Grove IoT Developer Kit.

Snappy is a new package manager and an Ubuntu innovation. It's a universal package management system that allows developers to package all the dependencies an application requires into one handy "snap" package. That way, you can install a snap on any Linux distribution that supports Snappy, without worrying about differences in the names, versions, and locations of library files from one distribution to the next.

NOTE: If you're running some Linux distribution other than Ubuntu Core, be aware that there are other packages with "snappy" in the name that have nothing to do with the Snappy package manager. The right package for Snappy should have a name such as "snapd".

For complex applications that require multiple services that interact with each other, you can just package everything up into one snap package. You can install Snappy on numerous different Linux distributions and use it alongside your normal package manager. On Ubuntu Core, Snappy is all you have, because the apt utilities used on regular Ubuntu aren't included and there's no way to install them.

But Snappy is more than just a package manager. It's set up so that applications you install with Snappy run in a sandbox, using the process isolation technologies that we reviewed earlier. When developers create snaps, they can specify the types of isolation they want for their applications. Normally with Snappy, the snap would run in "confined" mode, and the end user would have to configure the application to access other resources, such as a USB memory stick.

Before you can use Ubuntu Core, you'll need to go to the Ubuntu website, create an "Ubuntu One" account, and upload a public SSH key from the desktop machine that you want to use to access the Ubuntu Core IoT device. If you don't have SSH keys yet, these are simple to generate on any Linux platform. First, check if you already have SSH keys on your host:

$ ls ~/.ssh/

If you find id_rsa.pub or another id_*.pub file, skip the next step. Otherwise, generate the SSH keys for your email address:

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
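The check-then-generate flow above can be sketched as a single script. To keep it safe to run anywhere, the sketch points HOME at a throwaway directory and uses a placeholder email; on a real machine, drop the HOME line and substitute your own address:

```shell
# Sketch of the check-then-generate flow. HOME is pointed at a throwaway
# directory so this can't touch your real ~/.ssh; the email is a placeholder.
HOME=$(mktemp -d)
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/id_rsa"
if ls "$HOME"/.ssh/id_*.pub >/dev/null 2>&1; then
    echo "existing key found, skipping generation"
else
    # -N "" sets an empty passphrase; use a real one in practice
    ssh-keygen -t rsa -b 4096 -C "you@example.com" -N "" -f "$KEY" -q
fi
ls "$HOME/.ssh/"
```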

The first time you boot your IoT device on Ubuntu Core, you'll be prompted to set up your network configuration, and to enter your Ubuntu One credentials. This will cause the IoT device to automatically download and install your public SSH key. When you want to use the IoT device, you will be able to log in remotely from the host from which you uploaded the public SSH key.

To use Snappy, you'll want to log into the Snapcraft store with the same credentials that you used to create the Ubuntu One account. Although you can use Snappy without logging in, more features will be available if you do log in. Log in by entering:

$ sudo snap login

You will then be prompted to enter your credentials.

To search for packages, use the find command and specify a text string on which to search. For example, let's say that we want to find all snaps that have something to do with the Apache webserver. We'll enter:

$ snap find apache

On our Raspberry Pi device, the output looks like this:

$ snap find apache
Name        Version      Developer      Notes  Summary
nextcloud   13.0.2snap1  nextcloud      -      Nextcloud Server - A safe home for all your data
rasterview  1.6          michaelrsweet  -      CUPS/PWG/Apple Raster File Viewer

Let's say that we want to install the Nextcloud snap. Do that with:

$ snap install nextcloud

Once installation has completed, all Nextcloud services will start automatically. To see the services that are associated with Nextcloud, just use the services command:

$ snap services
Service                   Startup  Current
nextcloud.apache          enabled  active
nextcloud.mdns-publisher  enabled  active
nextcloud.mysql           enabled  active
nextcloud.nextcloud-cron  enabled  active
nextcloud.php-fpm         enabled  active
nextcloud.redis-server    enabled  active
nextcloud.renew-certs     enabled  active

So we see that the Nextcloud application uses the Apache webserver, along with a multicast DNS publisher, a MySQL server, a Redis server, and a certificate renewal service. All of this was packaged up in one neat little snap bundle, installed with a single command. The snap application shares the IP address of the host machine, so we can open the web browser on any desktop machine and log into the Nextcloud app by navigating to the IP address of the Raspberry Pi.

But convenient installation is only the first goal achieved by Snappy. It also wraps the software it installs with an isolation layer, to preserve the security of the host device. Snappy also leverages the power of AppArmor. When you install a snap, Snappy will simultaneously install the AppArmor profiles for that snap. You can see these profiles in the /var/lib/snapd/apparmor/profiles directory. In addition to AppArmor isolation, when running a snap application, Snappy will use the kernel and namespace features we reviewed in a previous blog. It will set up a cgroup to limit the application's control to selected device objects, set up a seccomp filter, create a private /tmp directory, and use namespaces to allow applications inside the snap to run as root without getting root privileges on the host machine.
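You can watch the user-namespace part of this trick in isolation with the unshare utility from util-linux. The sketch below assumes unprivileged user namespaces are enabled in the kernel: inside the new namespace we appear to be root, while remaining an ordinary user on the host.

```shell
# Sketch of the user-namespace trick Snappy relies on: --map-root-user
# maps our unprivileged UID to UID 0 inside a new user namespace, without
# granting any root privileges on the host.
echo "outside the namespace, our UID is: $(id -u)"
unshare --user --map-root-user sh -c 'echo "inside the namespace, our UID is: $(id -u)"'
```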

Additional steps applied by Snappy restrict snaps from using crond, changing to another user account, adding new users, using setuid or setgid, and more. Full details are available in the Ubuntu Core Security whitepaper.

Something to note is that the Snappy documentation isn't all that good. Running snap --help shows a lot of options for the snap command, but it's not entirely clear how to use all of them. The documentation can be found here. Rather than try to use this documentation, your best bet would be to use the Ubuntu tutorials, which are much better written. You can find the tutorial for basic Snap usage here. At the end of the basic tutorial, you'll see a link for the advanced Snap usage tutorial.

Also, there's no good way to browse through the official Snapcraft store site to see what's available. Instead, you're expected to use the snap find command to see if what you want is available. However, there is a third-party website that someone set up to browse snap packages, which you can find here.


At the top of the home page, under "Browse", click on the "Snaps" link. You'll then see a listing of available snaps, but without any description of what they are. To see more detail, click on the "List" link on the right-hand side. The next screen will look much better, because you'll now see the listing of snaps with brief descriptions. Also on the right-hand side of the screen is a "Filters" link. Click on this, and you'll see how to define certain search criteria, including CPU architecture. Interestingly, you'll see that snaps are available not only for x86/x86_64 and ARM architectures, but also for PPC64el and IBM's S390. Note, however, that not all architectures are supported equally. If you have an IoT device that has an Intel CPU, you have pretty much the entire Snapcraft store available to you. But if you're running any device that has an ARM CPU, you might not find everything that you might need.

Finally, there is the issue of trust. Snappy does have good security features when it comes to sandboxing applications, so that's a huge plus. But unlike what happens with normal distribution repositories, there are no trusted package maintainers who are assigned the job of uploading new packages and updating existing packages. Instead, anyone who wants to can create a Snapcraft store account and upload snaps. There is a review process for snaps, but it doesn't catch everything. In 2018, a snap developer was found to have included cryptocurrency mining software with his snaps, and the Ubuntu reviewers hadn't caught it. The developer meant no harm, and was just trying to find a way to monetize his efforts. The problem is that mining software places an extra load on a computer, and he didn't inform anyone that he was including the mining software. Ubuntu have promised to create a better vetting process to make sure that this sort of thing doesn't happen again.

All-in-all, Snappy is a pretty cool concept, and it will be even better once Ubuntu get these minor issues ironed out.

The Flatpak System

The Flatpak system, which originated in the Red Hat and Fedora ecosystem, is very similar to the Ubuntu Snappy system. Both systems allow developers to create self-contained software packages that can be installed on any Linux distribution that supports them. Also, both systems allow applications to run within their own secure sandboxes. Flatpak documentation shows that internally, Flatpak uses the same technologies that we covered in the previous entries on this blog:

  • cgroups
  • namespaces
  • seccomp rules

Flatpak also uses bind mounts to create restricted views of the filesystem and D-Bus for high-level inter-process communications and additional packaging features.

One difference between snaps and Flatpaks is that Flatpaks aren't always quite as self-contained as snaps are. Each Snap package contains all executable files, library files, and configuration files that are needed for an application to run. Flatpaks can be like this as well, but usually, each application Flatpak depends upon one or more run-time Flatpaks that contain the needed library files. Whenever you install an application Flatpak, the needed run-time Flatpaks will also get installed automatically.

Creating Flatpaks is fairly straightforward. All you need to do is to create a JSON-type manifest file that will download the source code for a project, specify the necessary runtime packages, and provide directions for the compiler. Modifying the source code shouldn't be necessary. At the bottom of this page, you can see an example of a Flatpak manifest.
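To give a feel for the format, here is a minimal manifest sketch; the app ID, runtime version, source URL, and checksum are hypothetical placeholders, not taken from a real project:

```json
{
    "app-id": "org.example.HelloWorld",
    "runtime": "org.freedesktop.Platform",
    "runtime-version": "18.08",
    "sdk": "org.freedesktop.Sdk",
    "command": "hello",
    "modules": [
        {
            "name": "hello",
            "buildsystem": "autotools",
            "sources": [
                {
                    "type": "archive",
                    "url": "https://example.com/hello-1.0.tar.gz",
                    "sha256": "..."
                }
            ]
        }
    ]
}
```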

The Flatpak system is available in the repositories of various Linux distributions, including Debian, Ubuntu, Raspbian, and of course Fedora. The name of the package in all cases we've seen so far is just "flatpak", without the "c" in "pak". Be careful not to install "flatpack" with a "c", because that's something else altogether. The first thing you'll need to do after installing the flatpak package is to configure a default Flatpak repository. Note that this is the only flatpak command that requires sudo privileges. In the Setup section of the online documentation for Flatpak, you'll see directions on how to do this for various different distributions. In general, the command will look something like this:

$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

This command downloads and installs the configuration file for the flathub repository. Next, we'll want to see what packages are available for installation. Unlike Ubuntu's Snappy, the Flatpak site has a handy way to browse through what's available. The main problem with the site is that there's no easy way to filter things to see what's available for both x86/x86_64 and ARM-based systems. That's okay, though, because there's a good way to cheat. All you have to do is to make a local list by entering:

$ flatpak remote-ls flathub > flathub.txt

If you do this for both your x86/x86_64 system and your ARM-based system, you'll see that each system has different packages that the other doesn't have. The current count for our Raspbian/Raspberry Pi machine is 640 Flatpaks, while the count for our Fedora/x86_64 machine is only 416. Mind you, that's with the same flathub repository for both machines.
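Standard text tools make quick work of comparing two such lists. Here's a sketch with tiny stand-in lists (the real ones run to hundreds of entries); note that comm expects sorted input, which flatpak remote-ls already provides:

```shell
# Sketch: compare two `flatpak remote-ls` dumps with comm(1).
# The two files below are tiny stand-ins for the real lists.
printf 'org.example.AppA\norg.example.AppB\n' > flathub-arm.txt
printf 'org.example.AppB\norg.example.AppC\n' > flathub-x86.txt
# Lines unique to the ARM list:
comm -23 flathub-arm.txt flathub-x86.txt
# prints: org.example.AppA
```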

To install a Flatpak application, use the install option and specify both the repository that you're using and the package that you want to install. For example, to install the "Bookworm" e-book reader, you can do this:

$ flatpak install flathub com.github.babluboy.bookworm

The packages get installed into the /var/lib/flatpak directory. To run the new Bookworm application, enter:

$ flatpak run com.github.babluboy.bookworm

All-in-all, the Flatpak system is pretty cool, and it seems to have been implemented better than Ubuntu's Snappy system. The Flatpak documentation is also pretty good. Although it is a bit disconcerting to see that anyone can install Flatpak apps without administrative privileges, it's also reassuring to know that the apps are all configured to run in a secure sandbox so that they can't do harm to the rest of your system.

Docker Containers

Containers have been around for quite a few years, but they never became all that popular until Docker came onto the scene. Docker is big business, and it has revolutionized how work is done in busy data centers. So what, exactly, is a container?

The traditional way to explain things is to compare containers to virtual machines. A virtual machine is an entire operating system that runs within another operating system that serves as the host. So, you can have a Windows host running Linux virtual machines, or vice versa. You can also have a Linux host running virtual machines of either the same Linux distribution as the host, or of other distributions. Whichever way you do it, each virtual machine will contain an entire operating system, complete with the operating system kernel.

A container is more slimmed down than a virtual machine, which means that you can run more of them on any given physical machine than you can virtual machines. The trick is that even when you download a Docker image for any given Linux distribution, you're not downloading the entire operating system. What's not included in a Docker image is the Linux kernel. When you spin up a Docker container, it will look like a guest operating system, but it will be using the kernel of the host Linux operating system.

A Docker container doesn't necessarily have to be an operating system. It can also be just an application or a set of applications. Docker utilizes the process isolation techniques that we've already looked at - namespaces, SECCOMP, kernel capabilities, and cgroups - to provide isolation for the individual containers. The aim is to prevent containers from accessing the host operating system or each other unless you specifically want them to do so. In addition to allowing more services to run on a particular piece of hardware, Docker containers also allow developers to create an application that can be easily deployed across an entire network.

It all sounds good, right? Well, in reality, Docker containerization is both a blessing and a curse when it comes to system security. When configured correctly, Docker containers do indeed enhance system security. The problem is that it's extremely easy to configure Docker containers incorrectly, which would result in a security disaster. Unless you take steps to lock things down, Docker containers don't really contain anything all that well.

Here are some of the problems with Docker security:

  • There are two ways for a user to run Docker. The user would either have to have proper sudo privileges, or the user could run it without sudo privileges if he or she is a member of the docker user group. Either way, the user is able to do things that only a privileged user should be able to do. This includes being able to mount the host machine's root filesystem as a read/write Docker volume.
  • When creating a container from a Linux distribution image, the user will have root-user privileges within the container. So in the above scenario, where an unprivileged user who is a member of the docker group mounts the host machine's filesystem as a Docker container volume, that unprivileged user will be able to access the host machine's filesystem with root-user privileges. The user would then be able to modify system files or to run programs that normally require root privileges.
  • A lot of the images that you can download from the Docker Hub are not updated and have vulnerabilities, such as the Heartbleed vulnerability in OpenSSL or the ShellShock vulnerability in the bash shell.
  • Although many images are signed with the creators' signing keys in order to show that nobody has tampered with them, a lot of images aren't signed at all.

So, how does this work out in practice? To demonstrate the first two points, we created a Debian virtual machine and installed the latest version of Docker Community Edition on it. (The Docker company doesn't support Docker Enterprise on Debian, and you'll see why in just a moment.) We then added our user donnie to the docker user group by running:

$ sudo usermod -a -G docker donnie

After logging out and logging back in, our docker group membership took effect.

We then spun up a Debian container to run a bash shell. We also mounted the root filesystem of the host virtual machine under the /homeroot mountpoint within the container. Note that we're doing this without sudo, as just an unprivileged user.

$ docker run -v /:/homeroot -it debian bash

After running this command, we can see that we're logged into the Debian container as the root user:

root@a9846ee889e6:/#

By using the /homeroot mountpoint, we can cd into any directory of the Debian host machine.

root@a9846ee889e6:/# cd /homeroot/etc

We can't do much damage just yet, because we don't have a text editor installed in the container. But that's easy enough to take care of.

root@a9846ee889e6:/homeroot/etc# apt update
root@a9846ee889e6:/homeroot/etc# apt install vim

Next, we'll add a user account for user katelyn by hand-editing the /etc/passwd file and the /etc/shadow file. We do it manually because with the current setup, the host machine's user management utilities aren't working from within the container.

root@a9846ee889e6:/homeroot/etc# vim passwd

Note that we gave Katelyn the User ID number of 0. Hmmm. Are you beginning to see a problem here? But we're still not done. We now need to edit the /etc/shadow file to add an entry there for Katelyn. To make things easy, we'll just copy and paste the line for Donnie's user account and edit it for Katelyn. That way, she and Donnie will have the same password.

root@a9846ee889e6:/homeroot/etc# vim shadow

Now, when we log out of our donnie account and log back in as katelyn, we're actually logged in as the root user, thanks to setting Katelyn's User ID number to 0. The shell even greets us with the root user's # prompt.
We now have full administrative privileges on the host machine, thanks to being a member of the docker group. This means that we have full access to the host machine, with the ability to wreak all kinds of havoc, even though our normal user account is a non-privileged account.
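Accounts like the one we just created are easy to spot after the fact: any /etc/passwd entry with a User ID of 0 other than root's is a red flag. Here's a quick audit sketch, run against a pasted sample rather than the real /etc/passwd:

```shell
# Sketch: flag every account whose UID (field 3 of /etc/passwd) is 0.
# We scan a pasted sample here; on a real system, read /etc/passwd instead.
sample='root:x:0:0:root:/root:/bin/bash
donnie:x:1000:1000::/home/donnie:/bin/bash
katelyn:x:0:0::/home/katelyn:/bin/bash'
printf '%s\n' "$sample" | awk -F: '$3 == 0 { print $1 }'
# prints: root and katelyn, one per line
```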

So, what's the take-away? Is running Docker a bad thing? Well, no, it's not. It really is possible to utilize Docker containers in a secure manner. Now, giving a whole presentation about Docker security would require a whole book unto itself, and we don't have space to do that here. But we can give you some quick tips on how to deploy Docker containers securely.

  • Don't just add everybody to the docker user group. Only add users that you can fully trust.
  • Never run Docker without some sort of Mandatory Access Control (MAC). A default installation of Debian comes without any kind of MAC, which permitted us, as an unprivileged user, to perpetrate our evil deeds on the host machine from within the container (now you understand why Docker Enterprise isn't supported on Debian). Either SELinux or AppArmor running on the host machine would have prevented us from doing that. Between the two, we prefer SELinux, because it's possible for an unprivileged Docker user to create a container with an option flag that disables AppArmor protection for that container. With SELinux, only someone with proper root privileges can disable it.
  • Virtual machines are inherently better at isolation than Docker containers. So, let's say that you have a group of containers that contain "Confidential" data, and another group that contains "Top Secret" data. For best security, place the "Confidential" containers within one virtual machine and the "Top Secret" containers within another, rather than running both groups on the same host. Sensitive information on the host itself is also safer when applications are confined to a virtual machine, rather than a container.
  • When you create and deploy containers, do so by creating a Dockerfile. Within that file, use the USER directive to specify that the container will run without root privileges.
  • Instead of deploying operating system containers, deploy containers that only contain the specific applications that you want to run.
  • When you update containers, do so by modifying the Dockerfile as needed and then rebuilding the containers.
  • With the proper runtime and Dockerfile options, you can create Docker containers with a reduced set of kernel capabilities. You can also use SECCOMP to reduce the number of allowed system calls.

As indicated, there's a lot more to Docker security than just this, but those are the basic concepts.

For low-resource IoT devices, you probably won't have multiple containers running on a single device, so you probably won't have to worry so much about keeping containers isolated from each other. But you will still have to ensure that the containers are isolated from the host system. This means that the above quick tips apply to IoT devices as well as to servers in data centers.


In this article series, we've looked at some cool technologies for Linux process isolation. We started out by looking at Discretionary Access Control and chroot. We then looked at Mandatory Access Control, under which we talked about SELinux and AppArmor, the two most popular incarnations of MAC on Linux. As an aside, we examined application-level monitoring as a potential solution to the problem of remote exploitation and privilege escalation. We then looked at the ways that we can control process isolation by leveraging the capabilities that are built into systemd and the newer versions of the Linux kernel. These included using cgroups and namespaces, limiting kernel capabilities, and using SECCOMP to limit system calls. Next up were Linux jail packages, and we examined in depth the deployment of Firejail for isolating applications. Finally, in this article, we talked about the various sandboxing technologies, which include Ubuntu Core and Snappy, Flatpak, and Docker containers.

We've covered a lot of ground, and we hope that you find this information useful.
