I read there is something called firejail that does this, but according to the reviews in the Software Manager, some people have had it destroy their system or mess up their programs, so I don't want to risk that.

There was also something called bubblewrap, but it has no reviews at all.

How big a risk is firejail, and are there any other programs that are as good or better for this? I already managed to mess up my system once (black screen after login; I think installing Portmaster caused it, or installing and uninstalling some software plus its dependencies), but fortunately I had a backup of the system so I could reverse the damage, so I'm a bit more cautious now.

Also, are there any other concerns that one should know about regarding sandboxing?

  • tal@lemmy.today · 7 hours ago

    There are a couple of routes to doing this, and what’s appropriate depends on what one is doing. One tends to do this if one is concerned about software potentially being malicious, or wants to limit the scope of harm if non-malicious software is compromised in some way.

    Virtual Machines

    I guess the most-straightforward is to basically create a virtual machine. You’re creating another “computer” that runs atop your own. You install an operating system on it, then whatever software you want. This “guest” computer runs on your “host” computer, and from its standpoint, the “host” computer doesn’t exist. Software running in the “guest” computer can’t touch the “host” computer.

    Pros:

    • It’s pretty hard to make mistakes and expose the host computer to the guest computer.

    • As long as you know how to install an operating system and software on the thing, you know most of what’s involved to set this up. Mostly just need to learn how to use whatever software interacts with the guest.

    • You can run a different operating system. I sometimes run a Windows VM on my Linux machine to run isolated Windows software.

    • You can (usually at the cost of performance) run software designed for a different architecture.

    • Software running in the guest can’t eat up all the memory on the host.

    • It’s pretty safe; it’s hard to accidentally let malicious software in the guest touch the host.

    Cons:

    • While things have gotten better here, because you’re running another operating system, it tends to be relatively-heavyweight. Running many isolated VMs uses more memory. Disk space adds up, because you’re having to install whole operating systems, and their filesystems typically need to live on a “disk image”, a file on the host computer that stores the entire contents of what looks like a disk drive to the guest.

    • Networking can be more complicated, since one traditionally has what looks like an entire separate computer. For some applications, one can have the host do network address translation for the guest, in the same sort of way that a consumer broadband router makes all the computers on a home network appear to come from one IP address by intercepting their outbound connections to the Internet and opening connections on their behalf. But it can be kind of obnoxious to, say, run a server on the guest (the QEMU sketch below shows one common workaround, forwarding a host port into the guest).

    • Without adding special “paravirtualization” software that “breaks the walls” between the guest and the host — and bugs in that software might create holes where software in the guest might affect the host — things like file interchange between the guest and host or altering the amount of memory allocated to the guest can be relatively inefficient.

    • Traditionally (and while I haven’t looked recently, I believe still in 2025) there isn’t really a great way on Linux to share GPU hardware on the host with the guest, to create a “virtual 3D video card”. This means that this isn’t a great route for running 3D games on the guest. There are some ways to “pass through” hardware directly to a guest, so one could dedicate a whole physical 3D video card to the guest.

    One open-source software package to do this on Linux is QEMU (which you’ll sometimes see referred to as KVM, after a second piece of software used to accelerate its execution on Linux). A graphical program to create virtual machines and interact with them on the desktop is virt-manager. An optional paravirtualization layer is virtio.
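
    If you’re comfortable at a command line, a minimal sketch of creating and booting a guest with QEMU/KVM looks something like this (guest.qcow2 and installer.iso are placeholder names; virt-manager does the same sort of thing from a GUI):

    # create a 40 GB copy-on-write disk image to act as the guest's drive
    qemu-img create -f qcow2 guest.qcow2 40G

    # boot the guest with KVM acceleration, 4 GB RAM, 2 CPUs, a virtio disk
    # and NIC, and host port 2222 forwarded to guest port 22 (the NAT
    # workaround mentioned in the cons above)
    qemu-system-x86_64 \
        -enable-kvm \
        -m 4096 -smp 2 \
        -drive file=guest.qcow2,format=qcow2,if=virtio \
        -cdrom installer.iso \
        -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22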

    I’d typically use this as a reliable way to run a single piece of potentially-sketchy Windows software on Linux without being able to get at the host.

    Containers

    These days, Linux can set up a “container” — a sort of isolated environment where particular pieces of Linux software can run without being able to see software outside the “container”.
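
    Under the hood, containers are built from kernel namespaces. If you want a quick taste of the mechanism without any container tooling, util-linux’s unshare command can drop you into a shell in its own namespaces (an illustration, not a hardened sandbox):

    # a shell in fresh user, PID, and mount namespaces; "root" inside is
    # just your unprivileged user, remapped
    unshare --user --map-root-user --pid --fork --mount-proc bash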

    Pros:

    • Efficient. Unlike virtual machines, this uses no more resources than running software on the host.

    • Not too complicated. Depending upon what one’s doing, this does require spending some time to learn software involved with the containerization.

    • You can typically run other Linux distros in the “guest” (everything except their kernel, since the host’s kernel is shared); there’s software to help with this.

    • Disk space usage can be more-efficient than a virtual machine, since it’s pretty straightforward to share part of a directory hierarchy on the host with the guest. By the same token, file interchange can be efficient.

    • The same is generally true for memory — it’s easy for the kernel to efficiently share a limited amount of (or all) the host memory with software running in the container.

    • Using the network is pretty straightforward, if one wants to run a server and wants it to look like it’s running on the host.

    Cons:

    • You can’t run other operating systems or other kernels, since they’re all sharing the host kernel. This is good for running (most) Linux software, but not useful for running other operating systems.

    • The main “window” between the host and the guest is the Linux kernel. This is a relatively large piece of software, with a larger “edge” than with VMs — different kernel APIs that might all have security holes and let malicious “guest” software break out.

    • I understand that it’s possible to do some level of GPU sharing (this is of interest for people running potentially-malicious generative AI software, where a lot of software is being rapidly written and shared these days). But in general, it’s probably going to be a pain to run a typical game under one.

    This has been increasingly popular as a way to efficiently run server software in isolation.

    While Linux can technically containerize things with low-level tooling like lxc, it’s common to use higher-level software on top to provide some additional functionality.
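
    For reference, the classic low-level LXC tooling looks something like this (a sketch; the distro, release, and architecture arguments are examples):

    # create a Debian container from the "download" template, start it,
    # and attach a shell inside it
    lxc-create -n sandbox -t download -- -d debian -r bookworm -a amd64
    lxc-start -n sandbox
    lxc-attach -n sandbox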

    Docker. This has been popular as a way to distribute servers that come with enough of a Linux distribution that they can run without regard for the distribution that the host is running. It can also store “images” efficiently — one can start with an existing, mini Linux distro, make a few changes, and then just distribute the changes over the network. A newer, mostly-drop-in replacement is podman.
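
    As a sketch of what day-to-day use looks like (the image name and ./shared path are just examples; ./shared must already exist on the host):

    # run a throwaway, network-less shell in a minimal Debian image, with
    # one host directory shared read-only into the container
    docker run --rm -it \
        --network none \
        -v "$PWD/shared:/mnt/shared:ro" \
        debian:stable bash

    podman is close enough to drop-in that the same line generally works with podman substituted for docker.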

    Another system is flatpak. This internally uses bubblewrap, and is aimed at running desktop software in isolation. Notably, one can run Steam (and all games it runs) in a flatpak; I have not done this. Typically one expects the software provider to provide a flatpak.
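
    If you do want to try the Steam route, basic flatpak usage looks like this (using Steam’s Flathub ID as the example):

    # install and run the Steam flatpak from Flathub
    flatpak install flathub com.valvesoftware.Steam
    flatpak run com.valvesoftware.Steam

    # inspect what the app's sandbox is permitted to touch
    flatpak info --show-permissions com.valvesoftware.Steam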

    firejail

    Probably this is best-referred-to as a containerized route, but I’ll split it out. This uses Linux namespaces, seccomp-bpf, and a range of other techniques to set up an isolated environment for software. It’s more oriented towards simply letting you run a piece of software that you would normally run on the host in an isolated environment, sharing a number of resources from the host. I’ve found this useful in the past for running 2D games that would normally run on the host and aren’t packaged by anyone else. It’s a nice way, if you know what you’re doing, to simply remove access to things like the filesystem or the network, or to make parts of the filesystem accessible read-only.
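
    For quick, ad-hoc use, one doesn’t even need a profile; restrictions can be stacked up on the command line (./some-game is a placeholder):

    # no network, and a fresh, throwaway home directory
    firejail --net=none --private ./some-game

    # or keep the real home directory, but mounted read-only
    firejail --net=none --read-only=${HOME} ./some-game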

    Pros:

    • Outside of maybe flatpaked Steam, probably the most-practical route to run arbitrary games that you’d normally run on the host. And I believe that it should be able to run 3D games via Wayland, though I haven’t done this myself.

    • Efficient.

    • One doesn’t need to have an existing package, like a Docker image or flatpak downloaded from the network, or go to the work of generating one oneself — this is oriented towards a minimal-setup way to run software already on the host in isolation.

    Cons:

    • “By default insecure”. That is, normally all host resources are shared with the guest — software can access the filesystem and everything. This is kind of a big deal, since if one makes an error in restricting resources, one might let software run unsandboxed in some aspect.

    • Takes some technical knowledge to set up and diagnose any problems (e.g. a given software package doesn’t like to run with a particular directory read-only).

    • There are “profiles” that ship with firejail for a number of software packages, but in general, it’s aimed at you creating a profile yourself, which takes time and work.

    [continued in child comment]

    • tal@lemmy.today · 7 hours ago

      [continued from parent]

      Here’s an example firejail profile that I use with renpy on Wayland, a software package that runs visual novels. Note that this won’t run everything, especially since one is using a different version of renpy than a game ships with, but generally, with this in place, one can just go to a renpy game’s directory and type firejail renpy . and it’ll run. This doesn’t isolate RenPy games from each other, but it does keep them from mucking with the rest of the system:

      renpy firejail profile
      # whitelist profile for RenPy (game)
      noblacklist ${HOME}/.renpy
      
      # firejail's stock deny-lists for common sensitive paths
      include disable-common.inc
      include disable-programs.inc
      include disable-devel.inc
      
      # drop all capabilities, disable networking, supplementary groups,
      # privilege escalation via setuid binaries, and root inside the jail;
      # enable the default seccomp syscall filter
      caps.drop all
      net none
      nogroups
      nonewprivs
      noroot
      seccomp
      
      # log blacklist violations to syslog
      tracelog
      
      # minimal private /dev and a private /tmp
      private-dev
      private-tmp
      
      # only ~/.renpy is visible from the real home directory
      mkdir     ~/.renpy
      whitelist ~/.renpy
      
      # All RenPy games need to be stored under here.
      whitelist ${HOME}/m/restricted-game/
      read-only ${HOME}/m/restricted-game/
      read-write ${HOME}/m/restricted-game/renpy
      
      # no DVD/TV/U2F devices; block syscalls from non-native architectures
      nodvd
      notv
      nou2f
      seccomp.block-secondary
      

      It’s more of a tool for letting one run non-packaged software in isolation… but one generally needs to set up the profiles oneself. For example, that profile blocks network access for renpy games… but there are games that will fail if they can’t access the network (though you could say that this is desirable, if you don’t want those games phoning home).
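
      In case it saves anyone a search: firejail picks up per-program profiles by name from ~/.config/firejail, so putting the above into use looks something like this (the game path is just an example matching the whitelist in the profile):

      # save the profile where firejail will find it for the "renpy" command
      mkdir -p ~/.config/firejail
      cp renpy.profile ~/.config/firejail/renpy.profile

      # then, from a game directory under the whitelisted path:
      cd ~/m/restricted-game/some-game
      firejail renpy .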

  • just_another_person@lemmy.world · 8 hours ago

    If you’re reading reviews on Software Manager, you don’t want to be messing with jails.

    Just use Flatpaks, and install Flatseal for permissions control over individual packages. They are sandboxed, but with permissive defaults set by the devs, so you can use Flatseal to lock them down to a set of permissions you’re comfortable with. If that breaks something in that one Flatpak, then just reverse your permissions changes. Simple.
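
    If you’d rather do the same thing from a terminal, Flatseal is a front-end over the override mechanism that flatpak itself exposes (the app ID here is a stand-in):

    # deny an app access to your home directory
    flatpak override --user --nofilesystem=home com.example.App

    # see what you've changed, and reset it if something breaks
    flatpak override --user --show com.example.App
    flatpak override --user --reset com.example.App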

  • jdnewmil@lemmy.ca · 7 hours ago

    The best approach is to not run untrusted software. Second best is to be a security expert and run it under the control of a debugger and analyze each instruction before it runs.

    This is probably not what you wanted to hear, but every sandbox has flaws and software that is written by someone aware of those flaws can conceivably exploit them.

    Tools like firejail are often useful early to mid software life cycle… before exploits for them become common. But there eventually comes a point where a zero-day exploit is released, and your peace of mind leads you to think you are safe when you are not. Their utility varies over time, and it is the nature of zero-day exploits that they surprise you.

    I think flatpak is a configuration management tool… not a security sandbox… but really the question comes back to what is your use case… do you want to become a security consultant, or are you just looking for a bit more protection from common exploits? There is no magic bullet… even dealing with the minutiae of locking down specific system calls will not protect you perfectly yet it can significantly increase the hassle of onboarding new software. Simply relying on signed software packages most of the can reduce the chance of encountering malicious software significantly over using unsigned packages if you are an ordinary computer user… and getting wrapped up in security issues when you are not aiming to be an expert can just add overhead to your life without making you significantly safer. Beware of the rabbit hole… it can feed your hypochondria rather than protect you if you let the wolf in through the front door and hope the locks scattered around will stop it from harming you.