Darling Docs

This is the main documentation source for Darling.

It can be edited on GitHub.

Using Darling

Build Instructions

You must be running a 64-bit x86 Linux distribution. Darling cannot be used on a 32-bit x86 system, not even to run 32-bit applications.


It is recommended that you use at least Clang 6. You can force a specific version of Clang (if it is installed on your system) by editing Toolchain.cmake.

Linux 5.0 or higher is required.

Debian 10

$ sudo apt install cmake clang-6.0 bison flex xz-utils libfuse-dev libudev-dev pkg-config \
libc6-dev-i386 linux-headers-amd64 libcap2-bin git python2 libglu1-mesa-dev libcairo2-dev \
libgl1-mesa-dev libtiff5-dev libfreetype6-dev libxml2-dev libegl1-mesa-dev libfontconfig1-dev \
libbsd-dev libxrandr-dev libxcursor-dev libgif-dev libpulse-dev libavformat-dev libavcodec-dev \
libavresample-dev libdbus-1-dev libxkbfile-dev libssl-dev

Debian Testing

$ sudo apt install cmake clang-9 bison flex xz-utils libfuse-dev libudev-dev pkg-config \
libc6-dev-i386 linux-headers-amd64 libcap2-bin git python2 libglu1-mesa-dev libcairo2-dev \
libgl1-mesa-dev libtiff5-dev libfreetype6-dev libxml2-dev libegl1-mesa-dev libfontconfig1-dev \
libbsd-dev libxrandr-dev libxcursor-dev libgif-dev libpulse-dev libavformat-dev libavcodec-dev \
libavresample-dev libdbus-1-dev libxkbfile-dev libssl-dev

Ubuntu 18.04/20.04

$ sudo apt install cmake clang bison flex libfuse-dev libudev-dev pkg-config libc6-dev-i386 \
linux-headers-generic gcc-multilib libcairo2-dev libgl1-mesa-dev libglu1-mesa-dev libtiff5-dev \
libfreetype6-dev git libelf-dev libxml2-dev libegl1-mesa-dev libfontconfig1-dev libbsd-dev \
libxrandr-dev libxcursor-dev libgif-dev libavutil-dev libpulse-dev libavformat-dev libavcodec-dev \
libavresample-dev libdbus-1-dev libxkbfile-dev libssl-dev

For Ubuntu 20.04, also install python2.

Arch Linux & Manjaro

libavresample needs to be downloaded from the AUR.

$ sudo pacman -S --needed make cmake clang flex bison icu fuse linux-headers gcc-multilib \
lib32-gcc-libs pkg-config fontconfig cairo libtiff python2 mesa llvm libbsd libxkbfile \
libxcursor libxext libxkbcommon libxrandr

Make sure you install the headers package that matches your kernel version. The kernel version can be checked with uname -r:

$ uname -r

For a 5.4 kernel, for example, you would need linux54-headers installed. You will typically be prompted to install the matching headers, but you may have to install them manually.
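The mapping from kernel version to headers package can be sketched in shell; the hard-coded version string below is only an illustration (normally you would use uname -r), taken from the example output shown later on this page:

```shell
# Sketch: derive the Manjaro headers package name from a kernel version string.
# In practice, use: kernel=$(uname -r)
kernel="5.4.2-1-MANJARO"
major_minor=$(echo "$kernel" | cut -d. -f1-2 | tr -d .)
echo "linux${major_minor}-headers"
```

For a 5.4 kernel this prints linux54-headers, which you can then install with pacman.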

Fedora and CentOS

RPMFusion is required for FFmpeg.

$ sudo dnf install make cmake clang bison dbus-devel flex python2 glibc-devel.i686 fuse-devel \
systemd-devel kernel-devel elfutils-libelf-devel cairo-devel freetype-devel.{x86_64,i686} \
libjpeg-turbo-devel.{x86_64,i686} libtiff-devel.{x86_64,i686} fontconfig-devel.{x86_64,i686} \
libglvnd-devel.{x86_64,i686} mesa-libGL-devel.{x86_64,i686} mesa-libEGL-devel.{x86_64,i686} \
libxml2-devel libbsd-devel git libXcursor-devel libXrandr-devel giflib-devel ffmpeg-devel \
pulseaudio-libs-devel libxkbfile-devel llvm

Fetch the Sources

Darling makes extensive use of Git submodules, so a plain git clone will not work. Clone the repository like this:

$ git clone --recursive https://github.com/darlinghq/darling.git

Attention: The source tree requires up to 4 GB of disk space!

Updating sources

If you have already cloned Darling and would like to get the latest changes, do this in the source root:

$ git pull
$ git submodule init
$ git submodule update
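If you prefer a single command, the two submodule steps can be combined; --init --recursive also picks up nested submodules (a sketch, not part of the original instructions):

```shell
# Pull the latest changes and update all submodules, including nested ones
git pull && git submodule update --init --recursive
```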


Darling is built with CMake, which generates Makefiles by default.

Attention: The build may require up to 10 GB of disk space! The Darling installation itself then takes up to 1 GB.

Building and Installing

Now let's build Darling:

# Move into the cloned sources
$ cd darling

# Make a build directory
$ mkdir build && cd build

# Configure the build
$ cmake ..

# Build and install Darling
$ make
$ sudo make install

Darling also requires a kernel module named darling-mach:

$ make lkm
$ sudo make lkm_install

If module installation produces warnings such as SSL error:02001002:system library:fopen:No such file or directory: bss_file.c:175, these can usually be ignored, unless you have configured your system to enforce Secure Boot.

The kernel module is an experimental piece of code; it's likely to have many bugs and vulnerabilities. Be prepared for kernel hangups and crashes, and run Darling on a virtual machine if possible.

Build Options

You will notice that it takes a long time to build Darling. Darling contains the software layer equivalent to an entire operating system, which means a large amount of code. You can optionally disable some large and less vital parts of the build in order to get faster builds.

To do this, use the -DFULL_BUILD=OFF option when configuring Darling through CMake.
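Put together, a reduced build looks like this (a sketch of the same commands shown above, with the option added):

```shell
# Configure a smaller build; parts such as JavaScriptCore will be missing
cmake .. -DFULL_BUILD=OFF
make
sudo make install
```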

You may find that some things are missing, such as JavaScriptCore. Before creating an issue about a certain library or framework missing from Darling, verify that you are doing a full build by not using this option or by setting it to "ON".

Another way to speed up the build is to run make with multiple jobs. For this, run make -j8 instead, where 8 is the number of concurrent jobs to run. In general, avoid running more jobs than twice the number of CPU cores on your machine.
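Instead of hard-coding a number, you can derive the job count from the machine, for example with nproc from GNU coreutils (a sketch):

```shell
# One job per CPU core; this stays within the "at most twice the cores" guideline
make -j"$(nproc)"
```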

If you run LLDB and encounter messages indicating a lack of debug symbols, make sure you are doing a debug build. To do this, use the -DCMAKE_BUILD_TYPE=Debug option when configuring the build.
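For example (a sketch; run from the build directory):

```shell
# Reconfigure with debug symbols, then rebuild and reinstall
cmake .. -DCMAKE_BUILD_TYPE=Debug
make
sudo make install
```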

Custom Installation Prefix

To install Darling in a custom directory, use the CMAKE_INSTALL_PREFIX CMake option. However, a Darling installation is NOT portable, because the installation prefix is hardcoded into the darling executable. This is intentional. If you do move your Darling installation, you will get this error message:

Cannot mount overlay: No such file or directory
Cannot open mnt namespace file: No such file or directory

If you wish to properly move your Darling installation, the only supported option is to uninstall your current Darling installation and then rebuild Darling with a different installation prefix.
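For example, rebuilding with a different prefix might look like this (the /opt/darling path is purely illustrative):

```shell
# The prefix is baked in at configure time and cannot be changed afterwards
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/darling
make
sudo make install
```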

Known Issues


If your distribution is BackBox and you run into build issues, try the following commands:

sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-6.0 600
sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-6.0 600


On systems with SELinux enabled, you may see the following error when starting Darling:

Cannot open mnt namespace file: No such file or directory

To work around this, run: setsebool -P mmap_low_allowed 1.

Secure Boot

If Secure Boot is enabled you may see:

modprobe: ERROR: could not insert 'darling_mach': Operation not permitted
Failed to load the kernel module

Use the following commands to generate a key and self-sign the kernel module:

# Generate Key
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=Darling LKM/"
# Enroll Key
sudo mokutil --import MOK.der
# Sign Module
sudo kmodsign sha512 MOK.priv MOK.der /lib/modules/$(uname -r)/extra/darling-mach.ko
# Reboot System and Enroll Key
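After rebooting and enrolling the key in the MOK manager, you can check that the module now carries a signature and try loading it again (a sketch; the exact modinfo field names vary between kernel versions):

```shell
# Look for signer/signature fields on the module, then load it
modinfo /lib/modules/$(uname -r)/extra/darling-mach.ko | grep -i sig
sudo modprobe darling-mach
```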

No rule to make target 'modules'

This error can occur for a number of reasons. The most common is that your currently running kernel is no longer installed, which happens after a kernel upgrade. Reboot your system to rule this out before trying other steps.

Another cause is that the kernel headers may not be installed. Distributions such as Ubuntu will install the correct headers automatically, but Arch/Manjaro may require you to install the appropriate headers manually. See the "Arch Linux & Manjaro" section earlier on this page for instructions on how to install the appropriate Linux headers.

make -C /lib/modules/5.4.2-1-MANJARO/build M=/home/xeab/Downloads/darling/src/lkm modules
make[5]: Entering directory '/usr/lib/modules/5.4.2-1-MANJARO/build'
make[5]: *** No rule to make target 'modules'.  Stop.

File System Support

Darling uses overlayfs for implementing prefixes on top of the macOS-like root filesystem. While overlayfs is not very picky about the lower (read-only) filesystem (where your /usr lives), it has stricter requirements for the upper filesystem (your home directory, unless you override the DPREFIX environment variable).

To quote the kernel documentation:

The lower filesystem can be any filesystem supported by Linux and does not need to be writable. The lower filesystem can even be another overlayfs. The upper filesystem will normally be writable and if it is it must support the creation of trusted.* extended attributes, and must provide valid d_type in readdir responses, so NFS is not suitable.

In addition to NFS not being supported, ZFS is also known not to work.

If you try to use an unsupported file system, this error will be printed:

Cannot mount overlay: Invalid argument


This page applies only if Darling was built and installed manually, as described in Build Instructions. If you installed Darling through a package manager, please remove the related packages using that package manager.

Uninstall commands

The following commands will completely remove Darling. Replace the source root with the path to your local copy of the Darling source code.

$ cd darling
$ tools/uninstall

Darling shell

We plan to implement a nice and user-friendly GUI for Darling, but for now the primary way to use Darling and interact with it is via the Darling shell.

Basic usage

To get a shell inside the container, just run darling shell as a regular user. Behind the scenes, this command will start the container or connect to an already-running one and spawn a shell inside. It will also automatically load the kernel module and initialize the prefix contents if needed.

Inside, you'll find an emulated macOS-like environment. macOS is Unix-like, so most familiar commands will work. For example, it may be interesting to run ls -l /, uname and sw_vers to explore the emulated system. Darling bundles many of the command-line tools macOS ships — of the same ancient versions. The shell itself is Bash version 3.2.

The filesystem layout inside the container is similar to that of macOS, including the top-level /Applications, /Users and /System directories. The original Linux filesystem is visible as a separate partition mounted at /Volumes/SystemRoot. When running macOS programs under Darling, you'll likely want them to access files in your home folder. To make this convenient, there's a LinuxHome symlink in your Darling home folder that points to your Linux home folder as seen from inside the container; additionally, standard directories such as Downloads in your Darling home folder are symlinked to the corresponding folders in your Linux home folder.

Running Linux Binaries

You can run normal Linux binaries inside the container, too. They won't make use of Darling's libraries and system call emulation and may not see the macOS-like environment:

$ darling shell
Darling [~]$ uname
Darling [~]$ /Volumes/SystemRoot/bin/uname

Becoming root

Should you encounter an application that bails out because you are not root (typically because it needs write access outside your home directory), you can use the fake sudo command. It is fake because it only makes the getuid() and geteuid() system calls return 0, but grants you no extra privileges.


  • darling shell: Opens a Bash prompt.
  • darling shell /usr/local/bin/someapp arg: Execute /usr/local/bin/someapp with an argument. Note that the path is evaluated inside the Darling Prefix. The command is started through the shell (uses sh -c).
  • darling ~/.darling/usr/local/bin/someapp arg: Equivalent of the previous example (which doesn't make use of the shell), assuming that the prefix is ~/.darling.

Darling prefix

A Darling prefix is a container overlaid on top of a base macOS-like root file system located in $installation_prefix/libexec/darling. The default prefix location is ~/.darling, and this can be controlled with the DPREFIX environment variable, very much like WINEPREFIX under Wine.
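For example, to keep a disposable prefix for experiments (the path is purely illustrative):

```shell
# Run a session against an alternate prefix; it is initialized on first use
DPREFIX="$HOME/.darling-test" darling shell
```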

The container uses overlayfs along with a user mount namespace to provide a different idea of / for macOS applications.

When you run an executable inside the prefix for the first time (after boot), launchd, the Darwin init process representing the container, is started. This init process keeps the root file system mounted.

Updating the prefix

Unlike Wine, Darling doesn't need to update the prefix whenever the Darling installation is updated. There is one caveat, though: since overlayfs caches the contents of underlying file system(s), you may need to terminate the container to see Darling's updated files:

$ darling shutdown

Note that this will terminate all processes running in the container.

Installing software

There are multiple ways to install software on macOS, and our aim is to make all of them work on Darling as well. However, there are currently a few limitations, mainly the lack of a GUI.

You might not even need to install it

Unlike Wine, Darling can run software that's installed on an existing macOS installation on the same computer. This is possible thanks to the way application bundles (.apps) work on macOS and Darling.

To use an app that's already installed, you just need to locate the existing installation (e.g. /Volumes/SystemRoot/run/media/username/Macintosh HD/Applications/SomeApp.app) and run the app from there.

DMG files

Many apps for macOS are distributed as .dmg (disk image) files that contain the .app bundle inside. Under macOS, you would click the DMG to mount it and then drag the .app to your Applications folder to copy it there.

Under Darling, use hdiutil attach SomeApp.dmg to mount the DMG (the same command works on macOS too), and then copy the .app using cp:

Darling [~]$ hdiutil attach Downloads/SomeApp.dmg
Darling [~]$ cp -r /Volumes/SomeApp/SomeApp.app /Applications/


Some apps are distributed as archives instead of disk images. To install such an app, unpack the archive using the appropriate CLI tools and copy the .app to /Applications.

Mac App Store

Many apps are only available via Apple's Mac App Store. To install such an application in Darling, download the app from a real App Store (possibly running on another computer) and copy the .app over to Darling.

PKG files

Many apps use .pkg, the native package format of macOS, as their distribution format. It's not enough to simply copy the contents of a package somewhere; packages are really meant to be installed and can run arbitrary scripts during installation.

Under macOS, you would use the graphical Installer.app or the command-line installer utility to install this kind of package. You can do the latter under Darling as well:

Darling [~]$ installer -pkg mc-4.8.7-0.pkg -target /

Unlike macOS, Darling also has the uninstaller command, which lets you easily uninstall packages.

Package managers

There are many third-party package managers for macOS, the most popular one being Homebrew. Ultimately, we want to make it possible to use all the existing package managers with Darling, however, some may not work well right now.

Command-line developer tools

To install command-line developer tools such as the C compiler (Clang) and LLDB, you can install Xcode using one of the methods mentioned above and then run

Darling [~]$ xcode-select --switch /Applications/Xcode.app

Alternatively, you can download and install only command-line tools from Apple by running

Darling [~]$ xcode-select --install

Note that both Xcode and command-line tools are subject to Apple's EULA.

What to try

Here are some things you may want to try after installing Darling.

Print "Hello World"

See if Darling can print the famous greeting:

Darling [~]$ echo Hello World
Hello World

It works!

Run uname

uname is a standard Unix command to get the name (and optionally the version) of the core OS. On Linux distributions, it prints "Linux":

$ uname
Linux

But Darling emulates a complete Darwin environment, so running uname results in "Darwin":

Darling [~]$ uname
Darwin

Run sw_vers

sw_vers (for "software version") is a Darwin command that prints the user-facing name, version and code name (such as "El Capitan") of the OS:

Darling [~]$ sw_vers
ProductName:    Mac OS X
ProductVersion: 10.14
BuildVersion:   Darling

Explore the file system

Explore the file system Darling presents to Darwin programs, e.g.:

Darling [~]$ ls -l /
Darling [~]$ ls /System/Library
Caches Frameworks OpenSSL ...
Darling [~]$ ls /usr/lib
Darling [~]$ ls -l /Volumes

Inspect the Mach-O binaries

Darling ships with tools like nm and otool that let you inspect Mach-O binaries, both the ones that make up Darling and any third-party ones:

Darling [~]$ nm /usr/lib/libobjc.A.dylib
Darling [~]$ otool -L /bin/bash
	/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.0.0)

Explore process memory layout

While Darling emulates a complete Darwin system, it's still powered by Linux underneath. Sometimes, this may prove useful. For example, you can use Linux's /proc pseudo-filesystem to explore the running processes. Let's use cat to explore its own process memory layout:

Darling [~]$ cat /proc/self/maps
7ffff7ffb000-7ffff7ffd000 rwxp 00000000 fe:01 20482                      /home/user/.darling/bin/cat
7ffff7ffd000-7ffff7ffe000 rw-p 00002000 fe:01 20482                      /home/user/.darling/bin/cat
7ffff7ffe000-7ffff7fff000 r--p 00003000 fe:01 20482                      /home/user/.darling/bin/cat
7ffff7fff000-7ffff80f3000 rwxp 00182000 fe:01 60690                      /home/user/.darling/usr/lib/dyld
7ffff80f3000-7ffff80fc000 rw-p 00276000 fe:01 60690                      /home/user/.darling/usr/lib/dyld
7ffff80fc000-7ffff8136000 rw-p 00000000 00:00 0
7ffff8136000-7ffff81d6000 r--p 0027f000 fe:01 60690                      /home/user/.darling/usr/lib/dyld
7fffffdde000-7fffffdff000 rw-p 00000000 00:00 0                          [stack]
7fffffe00000-7fffffe01000 r--s 00000000 00:0e 8761                       anon_inode:[commpage]

Check out the mounts

Darling runs in a mount namespace that's separate from the host. You can use the host's native mount tool to inspect it:

Darling [~]$ /Volumes/SystemRoot/usr/bin/mount | column -t
/Volumes/SystemRoot/dev/sda3  on  /Volumes/SystemRoot  type  ext4     (rw,relatime,seclabel)
overlay                       on  /                    type  overlay  (rw,relatime,seclabel,lowerdir=/usr/local/libexec/darling,upperdir=/home/user/.darling,workdir=/home/user/.darling.workdir)
proc                          on  /proc                type  proc     (rw,relatime)

Notice that not only can you simply run a native ELF executable installed on the host, you can also pipe its output directly into a Darwin command (like column in this case).

Alternatively, you can read the same info from the /proc pseudo-filesystem:

Darling [~]$ column -t /proc/self/mounts

List running processes

Darling emulates the BSD sysctls that are needed for ps to work:

Darling [~]$ ps aux
user    32   0.0  0.4  4229972  13016   ??  ?     1Jan70   0:00.05 ps aux
user     5   0.0  0.5  4239500  15536   ??  ?     1Jan70   0:00.22 /bin/launchctl bootstrap -S System
user     6   0.0  0.4  4229916  11504   ??  ?     1Jan70   0:00.09 /usr/libexec/shellspawn
user     7   0.0  0.6  4565228  17308   ??  ?     1Jan70   0:00.14 /usr/sbin/syslogd
user     8   0.0  0.6  4407876  18936   ??  ?     1Jan70   0:00.15 /usr/sbin/notifyd
user    29   0.0  0.2  4229948   7584   ??  ?N    1Jan70   0:00.03 /usr/libexec/shellspawn
user    30   0.0  0.5  4231736  14268   ??  ?     1Jan70   0:00.11 /bin/bash
user     1   0.0  0.5  4256056  15484   ??  ?     1Jan70   0:00.25 launchd

Read the manual

Darling ships with many man pages you can read:

Darling [~]$ man dyld

Run a script

Like Darwin, Darling ships with a build of Python, Ruby and Perl. You can try running a script or exploring them interactively.

Darling [~]$ python
Python 2.7.10 (default, Sep  8 2018, 13:32:07) 
[GCC 4.2.1 Compatible Clang 6.0.1 (tags/RELEASE_601/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.platform
'darwin'

Trace a process

Use our xtrace tool to trace the emulated Darwin syscalls a process makes:

Darling [~]$ arch
Darling [~]$ xtrace arch
[321] mach_timebase_info_trap() -> numer = 1, denom = 1
[321] issetugid() -> 0
[321] host_self_trap() -> port right 2563
[321] mach_msg_trap(0x7fffffdff270, MACH_SEND_MSG|MACH_RCV_MSG, 40, 320, port 1543, 0, port 0)
[321]         {remote = copy send 2563, local = make send-once 1543, id = 200}, 16 bytes of inline data
[321]         mach_host::host_info(copy send 2563, 1, 12)
[321] mach_msg_trap() -> KERN_SUCCESS
[321]         {local = move send-once 1543, id = 300}, 64 bytes of inline data
[321]         mach_host::host_info() -> [8, 8, 0, 0, 3104465855, 4160607681, 4160604077, 0, 4292867120, 4292867064, 4151031935, 3160657432], 12 
[321] _kernelrpc_mach_port_deallocate_trap(task 259, port name 2563) -> KERN_SUCCESS
[321] ioctl(0, 1074030202, 0x7fffffdff3d4) -> 0
[321] fstat64(1, 0x7fffffdfef80) -> 0
[321] ioctl(1, 1074030202, 0x7fffffdfefd4) -> 0
[321] write_nocancel(1, 0x7f823680a400, 5)i386
 -> 5
[321] exit(0)

Control running services

Use the launchctl tool to control launchd:

Darling [~]$ launchctl list
PID	Status	Label
323	-	0x7ffea3407da0.anonymous.launchctl
49	-	0x7ffea3406d50.anonymous.shellspawn
50	-	0x7ffea3406a20.anonymous.bash
39	-	0x7ffea3406350.anonymous.shellspawn
40	-	0x7ffea3405fd0.anonymous.bash
-	0	com.apple.periodic-monthly
-	0	com.apple.var-db-dslocal-backup
31	-	com.apple.aslmanager
-	0	com.apple.newsyslog
-	0	com.apple.periodic-daily
-	0	com.apple.securityd
19	-	com.apple.memberd
23	-	com.apple.notifyd
20	-	org.darlinghq.iokitd
-	0	com.apple.periodic-weekly
21	-	org.darlinghq.shellspawn
22	-	com.apple.syslogd
-	0	com.apple.launchctl.System

Darling [~]$ sudo launchctl bstree
    A  org.darlinghq.iokitd
    A  com.apple.system.logger
    D  com.apple.activity_tracing.cache-delete
    D  com.apple.SecurityServer
    A  com.apple.aslmanager
    A  com.apple.system.notification_center
    A  com.apple.PowerManagement.control
    com.apple.xpc.system (XPC Singleton Domain)/

Read man launchctl for more information about the other commands launchctl supports.

Fetch a webpage

See if networking works as it should:

Darling [~]$ curl http://example.org
<!doctype html>
    <title>Example Domain</title>

Try using sudo

Just like the real Mac OS X might, Darling allows you to get root privileges without having to enter any password, except in our case it's a feature:

Darling [~]$ whoami
Darling [~]$ sudo whoami

Of course, our sudo command only gives the program the impression that it's running as root; in reality, it still runs with the privileges of your user. Some programs explicitly check that they're running as root, so you can use our sudo to convince them to run.

Use a package manager

Download and install the Rudix Package Manager:

Note: Not currently working due to lack of TLS support.

Darling [~]$ curl -s https://raw.githubusercontent.com/rudix-mac/rpm/2015.10.20/rudix.py | sudo python - install rudix

Now you can install arbitrary packages using the rudix command:

Darling [~]$ sudo rudix install wget mc

Try running Midnight Commander

If you've installed Midnight Commander (mc package in Rudix), launch it to see if it runs smoothly:

Darling [~]$ mc

Manually install a package

You can also try installing a .pkg file manually using the installer command:

Darling [~]$ installer -pkg mc-4.8.7-0.pkg -target /

Unlike macOS, Darling also ships with an uninstaller command which you can use to easily uninstall packages.

Attach disk images

Darling ships with an implementation of hdiutil, a tool that allows you to attach and detach DMG disk images:

Darling [~]$ hdiutil attach Downloads/SomeApp.dmg
Darling [~]$ ls /Volumes/SomeApp
SomeApp.app <...>
Darling [~]$ cp -r /Volumes/SomeApp/SomeApp.app /Applications/
Darling [~]$ hdiutil detach /Volumes/SomeApp

Check out user defaults

macOS software uses the so-called "user defaults" system to store preferences. You can access it using the defaults tool:

Darling [~]$ defaults read

For more information about using defaults, read the manual:

Darling [~]$ man defaults

Run neofetch

Get the neofetch.sh script from its homepage and run it:

Darling [~]$ bash neofetch.sh
                    'c.          user@hostname
                 ,xNMM.          ------------------------
               .OMMMMo           OS: macOS Mojave 10.14 Darling x86_64
               OMMM0,            Kernel: 16.0.0
     .;loddo:' loolloddol;.      Uptime: 1 day, 23 hours, 25 mins
   cKMMMMMMMMMMNWMMMMMMMMMM0:    Shell: bash 3.2.57
;MMMMMMMMMMMMMMMMMMMMMMMM:       WM Theme: Blue (Print: Entry, AppleInterfaceStyle, Does Not Exist)
:MMMMMMMMMMMMMMMMMMMMMMMM:       Terminal: /dev/pts/1
 kMMMMMMMMMMMMMMMMMMMMMMMMWd.    Memory: 12622017MiB / 2004MiB
       .cooc,.    .,coo:.

neofetch exercises a lot of the Darwin internals, including BSD sysctl calls, Mach IPC and host info API, and the "user defaults" subsystem.

Compile and run a program

If you have the Xcode SDK installed, you can compile and run programs.

Darling [~]$ xcode-select --switch /Applications/Xcode.app

Now, build a "Hello World" C program using the Clang compiler:

Darling [~]$ cat > hello-world.c
#include <stdio.h>

int main() {
    puts("Hello World!");
    return 0;
}
Darling [~]$ clang hello-world.c -o hello-world

And run it:

Darling [~]$ ./hello-world
Hello World!

The whole compiler stack works; how cool is that? Now, let's try Swift:

Darling [~]$ cat > hi.swift
print("Hi!")
Darling [~]$ swiftc hi.swift
Darling [~]$ ./hi
Hi!

Try running apps

Darling has experimental support for graphical applications written using Cocoa. If you have a simple app installed, you can try running it:

Darling [~]$ /Applications/HelloWorld.app/Contents/MacOS/HelloWorld

Darling internals

Documentation of Darling to help new developers understand how Darling (and macOS) work.



Whereas Linux uses ELF as the format for applications, dynamic libraries and so on, macOS uses Mach-O.

Darling's Linux kernel module contains a static loader of Mach-O binaries, whose primary task is to load the application's binary and then load dyld and hand over control to it.

An additional trick is then needed to load ELF libraries into such processes.


Dyld is Apple's dynamic linker. It examines which libraries are needed by the macOS application, loads them and performs other necessary tasks. After it is done, it jumps to the application's entry point.

Dyld is special in the sense that as the only "executable" on macOS, it does not (cannot) link to any libraries. As a consequence of this, it has to be statically linked to a subset of libSystem (counterpart of glibc on Linux).

System call emulation

System calls are the most critical interface of all user space software. They are the means of invoking the kernel's functionality. Without them, software would not be able to communicate with the user, access the file system or establish network connections.

Every kernel provides a very specific set of system calls, even if there is a similar set of libc APIs available on different systems. For example, the system call set greatly differs between Linux and FreeBSD.

macOS uses a kernel named XNU. XNU's system calls differ greatly from those of Linux, which is why XNU system call emulation is at the heart of Darling.

XNU system calls

Unlike other kernels, XNU has three distinct system call tables:

  1. BSD system calls, invoked via sysenter/syscall. These calls frequently have a direct counterpart on Linux, which makes them easiest to implement.
  2. Mach system calls, which use negative system call numbers. These are non-existent on Linux and are mostly implemented in the darling-mach Linux kernel module. These calls are built around Mach ports.
  3. Machine-dependent system calls, invoked via int 0x82. There are only a few of them, and they are mainly related to thread-local storage.


Darling emulates system calls by providing a modified /usr/lib/system/libsystem_kernel.dylib library, whose source code is located in src/kernel. Even though some parts of the emulation live in Darling's kernel module (in src/external/lkm), Darling's emulation is based in user space.

This is why libsystem_kernel.dylib (and also dyld, which contains a static build of this library) can never be copied from macOS to Darling.

Emulating XNU system calls directly in Linux would have a few benefits, but isn't really workable. Unlike BSD kernels, Linux has no support for foreign system call emulation, and having such an extensive and intrusive patchset merged into Linux would be too difficult. Requiring Darling's users to patch their kernels is out of the question as well.

Disadvantages of this approach

  • Inability to run a full copy of macOS under Darling (notwithstanding the legality of such an endeavor): at least the files mentioned above need to be different.

  • Inability to run software making direct system calls. This includes some old UPX-packed executables. In the past, software written in Go also belonged in this group, but this is no longer the case. Note that Apple provides no support for making direct system calls (which is effectively very similar to distributing statically linked executables described in the article) and frequently changes the system call table, hence such software is bound to break over time.

Advantages of this approach

  • Significantly easier development.
  • No need to worry about having patches merged into Linux.
  • It is easier for users to have the newest code available: there is no need to run the latest kernel to have the most up-to-date system call emulation.




Darling supports the use of multiple prefixes (virtual root directories), very much like Wine. Unlike Wine, Darling makes use of Linux's support for various user-controlled namespaces. This makes Darling's prefixes behave a lot more like Docker/LXC containers.


The implementation fully resides in the darling binary, which performs several tasks:

  • Create a new mount namespace. Mounts created inside the namespace are automatically destroyed when the container is stopped.
  • Set up an overlayfs mount, which overlays Darling's read-only root tree (installed e.g. in /usr/local/libexec/darling) with the prefix's path. This means the prefix gets updated contents for free (unlike in Wine), but the user can still manipulate the prefix's contents.
  • Activate "vchroot". That is what we call our virtual chroot implementation, which still allows applications to escape into the outside system via a special directory (/Volumes/SystemRoot).
  • Set up a new PID namespace. launchd is then started as the init process for the container.
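The overlay step can be approximated with a plain mount invocation using the same lowerdir/upperdir/workdir layout that appears in the container's mount table (a root-only sketch; all paths are illustrative):

```shell
# Roughly the overlay Darling sets up inside its mount namespace
mkdir -p "$HOME/.darling" "$HOME/.darling.workdir" /tmp/darling-root
sudo mount -t overlay overlay \
    -o "lowerdir=/usr/local/libexec/darling,upperdir=$HOME/.darling,workdir=$HOME/.darling.workdir" \
    /tmp/darling-root
```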

More namespaces (e.g. UID or network) may be added in the future.


  • When you make changes to Darling's installation directory (e.g. /usr/local/libexec/darling), you must stop running containers (via darling shutdown) so that the changes take effect. This is a limitation of overlayfs.

Calling host system APIs

This article describes how Darling enables code compiled into Mach-O files to interact with API exported from ELF files on the host system. This is needed, for example, to access ALSA audio APIs.

Apple's dynamic loader (dyld) cannot load ELF files and extending it in this direction would be a rather extensive endeavor, also because Apple's linker (ld64) would need to be extended at the same time. This means some kind of bridge between the host platform's ELF loader and the "Mach-O world" has to be set up.

The ELF Bridge

The set of information provided to dyld by mldr (Darling's Mach-O loader) whenever a Mach-O executable is launched is precisely defined. You probably know the typical C main function signature:

int main(int argc, const char** argv)

If you have fiddled with this stuff before, you probably know there is also an extra const char** envp parameter containing all of the environment variables. Well, on Apple's systems, envp is followed by const char** apple, containing miscellaneous extra pieces of information, e.g. the full path of the executable being run or possibly a block of random bytes as a seed for libc.

So this gives us the following signature:

int main(int argc, const char** argv, const char** envp, const char** apple)

These arguments are passed to main by dyld, which receives them on the stack from mldr (or from the kernel on a real macOS). By the way, these arguments are also passed to all initializers marked with __attribute__((constructor)).

The apple parameter is ideal for passing additional information without undesirably interfering with the environment. mldr declares struct elf_calls with a set of function pointers, instantiates it and fills it out. The address of this structure is then added to apple as a string formatted as elf_calls=%p.

The address of struct elf_calls is later parsed inside libsystem_kernel and exported as a symbol named _elfcalls.


Wrappers

To enable easy linking, a concept of ELF wrappers was introduced, along with a tool named wrapgen. wrapgen parses ELF libraries, extracting the SONAME (the name of the library to be loaded) and the list of visible symbols exported from the library.

Now that we have the symbols, a neat trick is used to make them available to Mach-O applications. The Mach-O format supports so called symbol resolvers, which are functions that return the address of the symbol they represent. dyld calls them and provides the result as symbol address to whoever needs the symbol.

Therefore, wrapgen produces C files such as this:

#include <elfcalls.h>
extern struct elf_calls* _elfcalls;

static void* lib_handle;
__attribute__((constructor)) static void initializer() {
    lib_handle = _elfcalls->dlopen_fatal("libasound.so.2");
}

__attribute__((destructor)) static void destructor() {
    _elfcalls->dlclose_fatal(lib_handle);
}

void* snd_pcm_open() {
    __asm__(".symbol_resolver _snd_pcm_open");
    return _elfcalls->dlsym_fatal(lib_handle, "snd_pcm_open");
}
The C file is then compiled into a Mach-O library, which transparently wraps an ELF library on the host system.

CMake integration

To make things very easy, there is a CMake function that automatically takes care of wrapping a host system's library.



wrap_elf(asound libasound.so)

add_darling_executable(pcm_min pcm_min.c)
target_link_libraries(pcm_min system asound)

The wrap_elf() call creates a Mach-O library of the given name, wrapping the ELF library of the given name, and installs it into /usr/lib/native inside the prefix.


Thread implementation


Darling uses Apple's original libpthread library. macOS applications running under Darling therefore use the exact same threading library as on macOS.

Apple's libpthread manages threads through a collection of bsdthread* system calls, which are implemented by Darling. This allows Apple's libpthread to operate completely independently on top of Linux.

However, there is a huge catch. Darling could set up threads entirely on its own, and everything would work fine, but only as long as no calls were made to native Linux libraries (e.g. PulseAudio). The problem would become obvious as soon as a given Linux library performed any thread-related operations of its own: they would crash immediately. This includes many pthread_* calls, but also thread-local storage access.

Wrapping native libpthread

In Darling, bsdthread* system calls call back into the loader, which uses the native libpthread to start a thread. Once the native libpthread sets up the thread, control is handed over to Apple's libpthread.

Apple's libpthread needs control over the stack, so we cannot use the stack provided and managed by native libpthread. Therefore we quickly switch to a different stack once we get control from the native libpthread library.

Thread-local storage

Thread-local storage (TLS) is a critical feature enabling storage and retrieval of per-thread data.

In user applications (written in C/C++), TLS variables are typically accessed via pthread_getspecific() and pthread_setspecific() or via special attributes __thread or thread_local (since C++11). However, TLS is no less important even in applications that do not make use of this facility explicitly.

This is because TLS is much-needed functionality used by the threading library libpthread itself (otherwise pthread_self() would not work) and also by libc (errno is typically a per-thread value to avoid races).

On both 32-bit and 64-bit x86, the old segment registers from the 16-bit era have found a new use thanks to TLS.

TLS setup

  1. When a new thread is being set up, a block of memory is allocated by libpthread. This block of memory is usually what pthread_self() returns.
  2. libpthread asks the kernel to set up a new GDT entry that refers to this memory block.
  3. The entry number is then set into the respective segment register.

Linux specific

On Linux, there are two different system calls used to create GDT entries.

  • set_thread_area() is available on both x86 and x86-64.
  • arch_prctl() is only available on x86-64, but unlike set_thread_area() it also supports 64-bit base addresses.

macOS specific

A machine-dependent ("machdep") system call thread_fast_set_cthread_self() is used to create a new segment. Darling translates this call to one of the above Linux system calls.

TLS usage

The concept of memory segments is very simple. You can imagine that a segment refers to a certain area of your process' virtual memory. Any memory accesses you make via the segment register are relative to the start of that area and are limited by the area's size.

Imagine that you have set up FS to point to an integer array. Now you can set an element whose index is in EAX to value 42:

movl $42, %fs:(,%eax,4)


While x86 offers a total of six segment registers, only two of them (FS and GS) can be used for TLS (the others being CS, DS, ES and SS). This effectively limits the number of facilities managing TLS areas in a single process to two. One of them is typically used by the platform's native libc, whilst the other one is typically used by foreign libcs (courtesy of Darling and Wine).

The following table explains which registers are used by whom:

System                       | TLS register on i386 | TLS register on x86-64
Linux libc                   | GS                   | FS
Apple's libSystem            | GS                   | GS
Apple's libSystem (Darling)  | FS                   | GS

This is also why Wine for macOS can never run under Darling: it would always overwrite either the register used by Linux libc or Darling's libSystem.

macOS specifics

Mach ports

Mach ports are the IPC primitives under Mach. They are conceptually similar to Unix pipes, sockets or message queues: using ports, tasks (and the kernel) can send each other messages.

Drawing the analogy with pipes further,

  • A port is like a pipe. It is conceptually a message queue maintained by the kernel — this is where it differs from a pipe, which is an uninterpreted stream of raw bytes.

  • Tasks and the kernel itself can enqueue and dequeue messages to/from a port via a port right to that port. A port right is a handle to a port that allows either sending (enqueuing) or receiving (dequeuing) messages, a lot like a file descriptor connected to either the read or the write end of a pipe. There are the following kinds of port rights:

    • Receive right, which allows receiving messages sent to the port. Mach ports are MPSC (multiple-producer, single-consumer) queues, which means that there may only ever be one receive right for each port in the whole system (unlike with pipes, where multiple processes can all hold file descriptors to the read end of one pipe).
    • Send right, which allows sending messages to the port.
    • Send-once right, which allows sending one message to the port and then disappears.
    • Port set right, which denotes a port set rather than a single port. Dequeuing a message from a port set dequeues a message from one of the ports it contains. Port sets can be used to listen on several ports simultaneously, a lot like select/poll/epoll/kqueue in Unix.
    • Dead name, which is not an actual port right, but merely a placeholder. When a port is destroyed, all existing port rights to the port turn into dead names.
  • A port right name is a specific integer value a task uses to refer to a port right it holds, a lot like a file descriptor that a process uses to refer to an open file. Sending a port right name, the integer, to another task does not allow it to use the name to access the port right, because the name is only meaningful in the context of the port right namespace of the original task.

The above compares both port rights and port right names to file descriptors, because Unix doesn't differentiate between the two aspects of a file descriptor: the handle to an open file and the small integer value. Mach does, but even when talking about Mach, it's common to say "a port" or "a port right" when actually meaning a port right name that denotes the right to the port. In particular, the mach_port_t C type (aka int) is actually the type of port right names, not ports themselves (which are implemented in the kernel and have the real type of struct ipc_port).

Low-level API

It's possible to use Mach ports directly by using the mach_msg() syscall (Mach trap; actually, mach_msg() is a user-space wrapper over the mach_msg_overwrite_trap() trap) and various mach_*() functions provided by the kernel (which are in fact MIG routines, see below).

Here's an example of sending a Mach message between processes:

// File: sender.c

#include <stdio.h>

#include <mach/mach.h>
#include <servers/bootstrap.h>

int main() {

    // Lookup the receiver port using the bootstrap server.
    mach_port_t port;
    kern_return_t kr = bootstrap_look_up(bootstrap_port, "org.darlinghq.example", &port);
    if (kr != KERN_SUCCESS) {
        printf("bootstrap_look_up() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("bootstrap_look_up() returned port right name %d\n", port);

    // Construct our message.
    struct {
        mach_msg_header_t header;
        char some_text[10];
        int some_number;
    } message;

    message.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    message.header.msgh_remote_port = port;
    message.header.msgh_local_port = MACH_PORT_NULL;

    strncpy(message.some_text, "Hello", sizeof(message.some_text));
    message.some_number = 35;

    // Send the message.
    kr = mach_msg(
        &message.header,  // Same as (mach_msg_header_t *) &message.
        MACH_SEND_MSG,    // Options. We're sending a message.
        sizeof(message),  // Size of the message being sent.
        0,                // Size of the buffer for receiving.
        MACH_PORT_NULL,   // A port to receive a message on, if receiving.
        MACH_MSG_TIMEOUT_NONE, // Timeout, if receiving with a timeout.
        MACH_PORT_NULL    // Port for the kernel to send notifications about this message to.
    );
    if (kr != KERN_SUCCESS) {
        printf("mach_msg() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("Sent a message\n");

    return 0;
}

// File: receiver.c

#include <stdio.h>

#include <mach/mach.h>
#include <servers/bootstrap.h>

int main() {

    // Create a new port.
    mach_port_t port;
    kern_return_t kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) {
        printf("mach_port_allocate() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("mach_port_allocate() created port right name %d\n", port);

    // Give us a send right to this port, in addition to the receive right.
    kr = mach_port_insert_right(mach_task_self(), port, port, MACH_MSG_TYPE_MAKE_SEND);
    if (kr != KERN_SUCCESS) {
        printf("mach_port_insert_right() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("mach_port_insert_right() inserted a send right\n");

    // Send the send right to the bootstrap server, so that it can be looked up by other processes.
    kr = bootstrap_register(bootstrap_port, "org.darlinghq.example", port);
    if (kr != KERN_SUCCESS) {
        printf("bootstrap_register() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("bootstrap_register()'ed our port\n");

    // Wait for a message.
    struct {
        mach_msg_header_t header;
        char some_text[10];
        int some_number;
        mach_msg_trailer_t trailer;
    } message;

    kr = mach_msg(
        &message.header,  // Same as (mach_msg_header_t *) &message.
        MACH_RCV_MSG,     // Options. We're receiving a message.
        0,                // Size of the message being sent, if sending.
        sizeof(message),  // Size of the buffer for receiving.
        port,             // The port to receive a message on.
        MACH_MSG_TIMEOUT_NONE, // Timeout, if receiving with a timeout.
        MACH_PORT_NULL    // Port for the kernel to send notifications about this message to.
    );
    if (kr != KERN_SUCCESS) {
        printf("mach_msg() failed with code 0x%x\n", kr);
        return 1;
    }
    printf("Got a message\n");

    message.some_text[9] = 0;
    printf("Text: %s, number: %d\n", message.some_text, message.some_number);

    return 0;
}

Darling [/tmp]$ ./receiver
mach_port_allocate() created port right name 2563
mach_port_insert_right() inserted a send right
bootstrap_register()'ed our port

# in another terminal:
Darling [/tmp]$ ./sender
bootstrap_look_up() returned port right name 2563
Sent a message

# back in the first terminal:
Got a message
Text: Hello, number: 35

As you can see, in this case the kernel decided to use the same number (2563) for port right names in both processes. This cannot be relied upon; in general, the kernel is free to pick any unused names for new ports.

Ports as capabilities

In addition to custom "inline" data as shown above, it's possible for a message to contain:

  • Out-of-line data that will be copied to a new virtual memory page in the receiving task. If possible (i.e. if the out-of-line data is page-aligned in the sending task), Mach will use copy-on-write techniques to pass the data to the receiving task without actually copying it.
  • Port rights that will be sent to the receiving task. This way, it's possible for a task to transfer its port rights to another task. This is how bootstrap_register() and bootstrap_look_up() work in the above example.

The only way for a task to get a port right for a port is to either create that port, or have some other task (or the kernel) send it the right. In this way, Mach ports are capabilities (as in capability-based security).

Among other things, this means that for Mach programs (including the kernel itself) that allow other tasks to request operations by sending messages to a port the program listens on, it's idiomatic not to explicitly check whether the sender has any specific "permissions" to perform the operation; just having a send right to the port is considered permission enough. For example, you can manipulate any task (with calls like vm_write() and thread_create()) as long as you can get its task port; it doesn't matter whether you're root and so on (and indeed, UIDs and other Unix security measures don't exist on the Mach level).

Where to get ports

It would make a lot of sense to make Mach ports inheritable across fork() and exec() like file descriptors are (and that is the way it works on the Hurd), but on Darwin, they're not. A task starts with a fresh port right namespace after either fork() or exec(), except for a few special ports that are inherited:

  • Task port (aka kernel port), a send right to a port whose receive right is held by the kernel. This port allows manipulating the task it refers to, including reading and writing its virtual memory, creating and otherwise manipulating its threads, and terminating (killing) the task (see task.defs, mach_vm.defs and vm_map.defs). Call mach_task_self() to get the name of this port for the caller task. This port is only inherited across exec(); a new task created with fork() gets a new task port (as a special case, a task also gets a new task port after exec()ing a suid binary). The Mach task_create() function that allows creating new tasks returns the task port of the new task to the caller; but it's unavailable (always returns KERN_FAILURE) on Darwin, so the only way to spawn a task and get its port is to perform the "port swap dance" while doing a fork().

  • Host port, a send right to another port whose receive right is held by the kernel. The host port allows getting information about the kernel and the host machine, such as the OS (kernel) version, number of processors and memory usage statistics (see mach_host.defs). Get it using mach_host_self(). There also exists a "privileged host control port" (host_priv_t) that allows privileged tasks (aka processes running as root) to control the host (see host_priv.defs). The official way to get it is by calling host_get_host_priv_port() passing the "regular" host port; in reality it returns either the same port name (if the task is privileged) or MACH_PORT_NULL (if it's not).

  • Task name port, an Apple extension, an unprivileged version of the task port. It references the task, but does not allow controlling it. The only thing that seems to be available through it is task_info().

  • Bootstrap port, a send right to the bootstrap server (on Darwin, this is launchd). The bootstrap server serves as a server registry, allowing other servers to export their ports under well-known reverse-DNS names (such as com.apple.system.notification_center), and other tasks to look up those ports by these names. The bootstrap server is the primary way to "connect" to another task, comparable to D-Bus under Linux. The bootstrap port for the current task is available in the bootstrap_port global variable. On the Hurd, the filesystem serves as the server registry, and they use the bootstrap port for passing context to translators instead.

  • Seatbelt port, task access port, debug control port, which are yet to be documented.

In addition to using the *_self() traps and other methods mentioned above, you can get all these ports by calling task_get_special_port(), passing in the task port (for the caller task or any other task) and an identifier of the port (TASK_BOOTSTRAP_PORT and so on). There also exist wrapper macros (task_get_bootstrap_port() and so on) that pass the right identifier automatically.

There also exists task_set_special_port() (and the wrapper macros) that allows you to change the special ports for a given task to any send right you provide. mach_task_self() and all the other APIs discussed above will, in fact, return these replaced ports rather than the real ports for the task, host and so on. This is a powerful mechanism that can be used, for example, to disallow a task from manipulating itself, to log and forward any messages it sends and receives (Hurd's rpctrace), or to make it believe it's running on a different version of the kernel or on another host. This is also how tasks get the bootstrap port: the first task (launchd) starts with a null bootstrap port, allocates a port and sets it as the bootstrap port for the tasks it spawns.

There also are two special sets of ports that tasks inherit:

  • Registered ports, up to 3 of them. They can be registered using mach_ports_register() and later looked up using mach_ports_lookup(). Apple's XPC uses these to pass down its "XPC bootstrap port".
  • Exception ports, which are the ports where the kernel should send info about exceptions happening during the task execution (for example, Unix's Segmentation Fault corresponds to EXC_BAD_ACCESS). This way, a task can handle its own exceptions or let another task handle its exceptions. This is how the Crash Reporter.app works on macOS, and this is also what LLDB uses. Under Darling, the Linux kernel delivers these kinds of events as Unix signals to the process that they happen in, then the process converts the received signals to Mach exceptions and sends them to the correct exception port (see sigexc.c).

As a Darwin extension, there are pid_for_task(), task_for_pid(), and task_name_for_pid() syscalls that allow converting between Mach task ports and Unix PIDs. Since they essentially circumvent the capability model (PIDs are just integers in the global namespace), they are crippled on iOS/macOS with UID checks, entitlements, SIP, etc. limiting their use. On Darling, they are unrestricted.

Similarly to task ports, threads have their corresponding thread ports, obtainable with mach_thread_self() and interposable with thread_set_special_port().

Higher-level APIs

Most programs don't construct Mach messages manually and don't call mach_msg() directly. Instead, they use higher-level APIs or tools that wrap Mach messaging.


MIG

MIG stands for Mach Interface Generator. It takes interface definitions like this example:

// File: window.defs

subsystem window 35000;

#include <mach/std_types.defs>

routine create_window(
    server: mach_port_t;
    out window: mach_port_t);

routine window_set_frame(
    window: mach_port_t;
    x: int;
    y: int;
    width: int;
    height: int);

And generates the C boilerplate for serializing and deserializing the calls and sending/handling the Mach messages. Here's a (much simplified) client code that it generates:

// File: windowUser.c

/* Routine create_window */
kern_return_t create_window(mach_port_t server, mach_port_t *window) {

	typedef struct {
		mach_msg_header_t Head;
	} Request;

	typedef struct {
		mach_msg_header_t Head;
		/* start of the kernel processed data */
		mach_msg_body_t msgh_body;
		mach_msg_port_descriptor_t window;
		/* end of the kernel processed data */
		mach_msg_trailer_t trailer;
	} Reply;

	union {
		Request In;
		Reply Out;
	} Mess;

	Request *InP = &Mess.In;
	Reply *Out0P = &Mess.Out;

	mach_msg_return_t msg_result;

	/* msgh_size passed as argument */
	InP->Head.msgh_request_port = server;
	InP->Head.msgh_reply_port = mig_get_reply_port();
	InP->Head.msgh_id = 35000;
	InP->Head.msgh_reserved = 0;

	msg_result = mach_msg(
		&InP->Head,
		MACH_SEND_MSG | MACH_RCV_MSG,
		(mach_msg_size_t) sizeof(Request),
		(mach_msg_size_t) sizeof(Reply),
		InP->Head.msgh_reply_port,
		MACH_MSG_TIMEOUT_NONE,
		MACH_PORT_NULL
	);

	if (msg_result != MACH_MSG_SUCCESS) {
		return msg_result;
	}

	*window = Out0P->window.name;

	return KERN_SUCCESS;
}

/* Routine window_set_frame */
kern_return_t window_set_frame(mach_port_t window, int x, int y, int width, int height) {

	typedef struct {
		mach_msg_header_t Head;
		int x;
		int y;
		int width;
		int height;
	} Request;

	typedef struct {
		mach_msg_header_t Head;
		mach_msg_trailer_t trailer;
	} Reply;

	union {
		Request In;
		Reply Out;
	} Mess;

	Request *InP = &Mess.In;
	Reply *Out0P = &Mess.Out;

	mach_msg_return_t msg_result;

	InP->x = x;
	InP->y = y;
	InP->width = width;
	InP->height = height;
	/* msgh_size passed as argument */
	InP->Head.msgh_request_port = window;
	InP->Head.msgh_reply_port = mig_get_reply_port();
	InP->Head.msgh_id = 35001;
	InP->Head.msgh_reserved = 0;

	msg_result = mach_msg(
		&InP->Head,
		MACH_SEND_MSG | MACH_RCV_MSG,
		(mach_msg_size_t) sizeof(Request),
		(mach_msg_size_t) sizeof(Reply),
		InP->Head.msgh_reply_port,
		MACH_MSG_TIMEOUT_NONE,
		MACH_PORT_NULL
	);

	if (msg_result != MACH_MSG_SUCCESS) {
		return msg_result;
	}

	return KERN_SUCCESS;
}


To get the reply from the server, MIG includes a port right (called the reply port) in the message, and then performs a send on the server port and a receive on the reply port with a single mach_msg() call. The client keeps the receive right for the reply port, while the server gets sent a send-once right. This way, even though MIG reuses a single (per-thread) reply port for all the servers it talks to, servers can't impersonate each other.

And the corresponding server code, also simplified:

// File: windowServer.c

/* Routine create_window */
extern kern_return_t create_window(mach_port_t server, mach_port_t *window);

/* Routine create_window */
mig_internal novalue _Xcreate_window(mach_msg_header_t *InHeadP, mach_msg_header_t *OutHeadP) {

	typedef struct {
		mach_msg_header_t Head;
		mach_msg_trailer_t trailer;
	} Request;

	typedef struct {
		mach_msg_header_t Head;
		/* start of the kernel processed data */
		mach_msg_body_t msgh_body;
		mach_msg_port_descriptor_t window;
		/* end of the kernel processed data */
	} Reply;

	Request *In0P = (Request *) InHeadP;
	Reply *OutP = (Reply *) OutHeadP;

	kern_return_t RetCode;

	OutP->window.disposition = MACH_MSG_TYPE_COPY_SEND;
	OutP->window.pad1 = 0;
	OutP->window.pad2 = 0;
	OutP->window.type = MACH_MSG_PORT_DESCRIPTOR;

	RetCode = create_window(In0P->Head.msgh_request_port, &OutP->window.name);

	if (RetCode != KERN_SUCCESS) {
		MIG_RETURN_ERROR(OutP, RetCode);
	}

	OutP->Head.msgh_bits |= MACH_MSGH_BITS_COMPLEX;
	OutP->Head.msgh_size = (mach_msg_size_t) (sizeof(Reply));
	OutP->msgh_body.msgh_descriptor_count = 1;
}

/* Routine window_set_frame */
extern kern_return_t window_set_frame(mach_port_t window, int x, int y, int width, int height);

/* Routine window_set_frame */
mig_internal novalue _Xwindow_set_frame(mach_msg_header_t *InHeadP, mach_msg_header_t *OutHeadP) {

	typedef struct {
		mach_msg_header_t Head;
		int x;
		int y;
		int width;
		int height;
		mach_msg_trailer_t trailer;
	} Request;

	typedef struct {
		mach_msg_header_t Head;
	} Reply;

	Request *In0P = (Request *) InHeadP;
	Reply *OutP = (Reply *) OutHeadP;

	window_set_frame(In0P->Head.msgh_request_port, In0P->x, In0P->y, In0P->width, In0P->height);
}

/* Description of this subsystem, for use in direct RPC */
const struct window_subsystem {
	mach_msg_id_t	start;		/* Min routine number */
	mach_msg_id_t	end;		/* Max routine number + 1 */
	unsigned int	maxsize;	/* Max msg size */
	struct routine_descriptor	/* Array of routine descriptors */
		routine[2];
} window_subsystem = {
	35000,
	35002,
	(mach_msg_size_t) sizeof(union __ReplyUnion__window_subsystem),
	{
		{ (mig_stub_routine_t) _Xcreate_window },
		{ (mig_stub_routine_t) _Xwindow_set_frame },
	}
};

boolean_t window_server(mach_msg_header_t *InHeadP, mach_msg_header_t *OutHeadP) {
	mig_routine_t routine;

	OutHeadP->msgh_bits = MACH_MSGH_BITS(MACH_MSGH_BITS_REPLY(InHeadP->msgh_bits), 0);
	OutHeadP->msgh_remote_port = InHeadP->msgh_reply_port;
	/* Minimal size: routine() will update it if different */
	OutHeadP->msgh_size = (mach_msg_size_t) sizeof(mig_reply_error_t);
	OutHeadP->msgh_local_port = MACH_PORT_NULL;
	OutHeadP->msgh_id = InHeadP->msgh_id + 100;
	OutHeadP->msgh_reserved = 0;

	if ((InHeadP->msgh_id > 35001) || (InHeadP->msgh_id < 35000)) {
		return FALSE;
	}

	routine = window_subsystem.routine[InHeadP->msgh_id - 35000];
	(*routine) (InHeadP, OutHeadP);
	return TRUE;
}

Client-side usage looks just like invoking the routines:

mach_port_t server_port = ...;
mach_port_t window;
kern_return_t kr;

kr = create_window(server_port, &window);

kr = window_set_frame(window, 50, 55, 200, 100);

And on the server side, you implement the corresponding functions and then call mach_msg_server():

kern_return_t create_window(mach_port_t server, mach_port_t *window) {
    // ...
}

kern_return_t window_set_frame(mach_port_t window, int x, int y, int width, int height) {
    // ...
}

int main() {
    mach_port_t server_port = ...;
    mach_msg_server(window_server, window_subsystem.maxsize, server_port, 0);
}

MIG supports a bunch of useful options and features. It's extensively used in Mach for RPC (remote procedure calls), including for communicating between the kernel and the userspace. Other than a few direct Mach traps such as mach_msg() itself, Mach kernel API functions (such as the ones for task and port manipulation) are in fact MIG routines.

Distributed Objects

Distributed Objects is a dynamic RPC platform, in contrast to MIG, which is fundamentally based on static code generation. Distributed Objects allows you to transparently use Objective-C objects from other processes as if they were regular local Objective-C objects.

// File: server.m

NSConnection *connection = [NSConnection defaultConnection];
[connection setRootObject: myAwesomeObject];
[connection registerName: @"org.darlinghq.example"];
[[NSRunLoop currentRunLoop] run];
// File: client.m

NSConnection *connection =
    [NSConnection connectionWithRegisteredName: @"org.darlinghq.example"
                                          host: nil];

MyAwesomeObject *proxy = [[connection rootProxy] retain];
[proxy someMethod];

There is no need to statically generate any code for this; it all works at runtime through the magic of the Objective-C message forwarding infrastructure. Methods you call may return other objects (or take them as arguments), and the Distributed Objects support code will automatically either send a copy of the object to the remote process (using NSCoding) or proxy methods called on those objects as well.


XPC

XPC is a newer IPC framework from Apple, tightly integrated with launchd. Its lower-level C API allows processes to exchange plist-like data via Mach messages. The higher-level Objective-C API (NSXPC*) exposes a proxying interface similar to Distributed Objects. Unlike Distributed Objects, it's asynchronous, doesn't try to hide the possibility of connection errors, and only allows passing whitelisted types (to prevent certain kinds of attacks).

Apple's XPC is not open source. On Darling, the low-level XPC implementation (libxpc) is based on NextBSD libxpc. The high-level Cocoa APIs are not yet implemented.

Useful resources

Note that Apple's version of Mach, as used in XNU/Darwin, is subtly different from both OSF Mach and GNU Mach.

Mach Exceptions

This section documents how Unix signals and Mach exceptions interact within XNU.

Unlike XNU, Linux uses Unix signals for all purposes, i.e. both for typically hardware-generated signals (e.g. SIGSEGV) and for software-generated ones (e.g. SIGINT).

Hardware Exceptions

In XNU, hardware (CPU) exceptions are handled by the Mach part of the kernel (exception_triage() in osfmk/kern/exception.c). As a result of such an exception, a Mach exception is generated and delivered to the exception port of the given task.

By default - i.e. when the debugger hasn't replaced the task's exception ports - these Mach exceptions make their way into bsd/uxkern/ux_exception.c, which contains a kernel thread receiving and translating Mach exceptions into BSD signals (via ux_exception()). This translation takes into account hardware specifics, also by calling machine_exception() in bsd/dev/i386/unix_signal.c. Finally, the BSD signal is sent using threadsignal() in bsd/kern/kern_sig.c.

Software Signals

Software signal processing in XNU follows the usual pattern from BSD/Linux.

However, if ptrace(PT_ATTACHEXC) was called on the target process, the signal is also translated into a Mach exception via do_bsdexception(), the pending signal is removed and the process is set to a stopped state (SSTOP).

The debugger then has to call ptrace(PT_THUPDATE) to set the unix signal number to be delivered to the target process (which could also be 0, i.e. no signal) and that also resumes the process (state SRUN).

Debugging under Darling

Debugging support in Darling makes use of what we call "cooperative debugging". This means the code in the debuggee is aware that it's being debugged and actively assists the debugging process. In Darling, this role is taken on mainly by sigexc.c in libsystem_kernel.dylib, so no application modifications are necessary.

macOS debuggers use a combination of BSD-like and Mach APIs to control and inspect the debuggee.

To emulate the macOS behavior, Darling makes use of POSIX real-time signals to invoke actions in the cooperative debugging code.

  • Attach to debuggee
    • macOS: ptrace(PT_ATTACHEXC). Causes the kernel to redirect all signals (aka exceptions) to the Mach "exception port" of the process. Only debuggee termination is notified via wait().
    • Linux: Signals sent to the debuggee and the debuggee termination event are received in the debugger via wait().
    • Darling implementation: Notify the LKM that we will be tracing the process. Send an RT signal to the debuggee to notify it of the situation. The debuggee sets up handlers to handle all signals and forward them to the exception port.
  • Examine registers
    • macOS: thread_get_state(X86_THREAD_STATE)
    • Linux: ptrace(PTRACE_GETREGS)
    • Darling implementation: Upon receiving a signal, the debuggee reads its own register state and passes it to the kernel via thread_set_state().
  • Pausing the debuggee
    • macOS: kill(SIGSTOP)
    • Linux: kill(SIGSTOP) or ptrace(PTRACE_INTERRUPT)
    • Darling implementation: Send an RT signal to the debuggee that it should act as if SIGSTOP were sent to the process. We cannot send a real SIGSTOP, because then the debuggee couldn't provide/update register state to the debugger etc.
  • Change signal delivery
    • macOS: ptrace(PT_THUPDATE)
    • Linux: ptrace(PTRACE_rest)
    • Darling implementation: Send an RT signal to the debuggee to inform it what it should do with the signal (ignore, pass it to the application etc.)
  • Set memory watchpoints
    • macOS: thread_set_state(X86_DEBUG_STATE)
    • Linux: ptrace(PTRACE_POKEUSER)
    • Darling implementation: Implement the effects of PTRACE_POKEUSER in the LKM.



Commpage

The commpage is a special memory structure that is always located at the same address in all processes. On macOS, this mapping is provided by the kernel. In Darling, this functionality is instead provided by mldr.


The commpage contains:

  • CPU information (number of cores, CPU capabilities etc.)
  • Current precise date and time (this information is not filled in by Darling, causing a fall back to a system call).
  • PFZ - preemption-free zone. Contains a piece of code which when run prevents the process from being preempted. This is used for a lock-free implementation of certain primitives (e.g. OSAtomicFifoEnqueue). (Not available under Darling.)

It is somewhat related to vDSO on Linux, except that vDSO behaves like a real library, while commpage is just a chunk of data.

The commpage is not documented anywhere, meaning it's not an API intended to be used by 3rd party software. It is however used in source code provided on opensource.apple.com. Darling provides a commpage for compatibility reasons.


The address differs between 32-bit and 64-bit systems.

  • 32-bit systems: 0xffff0000
  • 64-bit systems: 0x7fffffe00000

Note that the 32-bit address is outside of permissible user space area on 32-bit Linux kernels. This is why Darling runs only under 64-bit kernels, which don't have this limitation.

Distributed Objects

Here's how the Distributed Objects are structured internally:

  • NSPort is a type that abstracts away a port -- something that can receive and send messages (NSPortMessage). The few commonly used concrete port classes are NSMachPort (Mach port), NSSocketPort (network socket, possibly talking to another host), and NSMessagePort (another wrapper over Mach ports). With some luck, it's possible to use a custom port class, too.

    NSPort is really supposed to be a Mach port (and that's what [NSPort port] creates), while other port types have kind of been retrofitted on top of the existing Mach port semantics. NSPort itself conforms to NSCoding, so you can "send" a port over another port (it does not support coders other than NSPortCoder).

    Some NSPort subclasses may not fully work unless you're using them for Distributed Objects (with an actual NSConnection *).

  • NSPortMessage roughly describes a Mach message. It has "send" and "receive" ports, a msgid, and an array of components. Individual components can be either data (NSData) or a port (NSPort), corresponding to MACH_MSG_OOL_DESCRIPTOR and MACH_MSG_PORT_DESCRIPTOR. Passing a port will only work with ports of the same type as the port you're sending this message through.

  • NSPortNameServer abstracts away a name server that you can use to map (string) names to ports. You can register your port under a name and look up other ports by name. NSMachBootstrapServer implements this interface on top of the Mach bootstrap server (launchd on Darwin).

  • NSPortCoder is an NSCoder that essentially serializes and deserializes data (and ports) to and from port messages, using the same NSCoding infrastructure a lot of types already implement. Unlike other coders (read: archivers), it supports encoding and decoding ports, though this is mostly useless, as DO itself makes very little use of it.

    NSPortCoder itself is an abstract class. In older versions of OS X, it had a concrete subclass, NSConcretePortCoder, which only supported non-keyed (also called unkeyed) coding. In newer versions, NSConcretePortCoder itself is now an abstract class, and it has two subclasses, NSKeyedPortCoder and NSUnkeyedPortCoder.

    NSPortCoder subclasses send the replacementObjectForPortCoder: message to the objects they encode. The default implementation of that method replaces the object with a NSDistantObject (a Distributed Objects proxy), no matter whether the type conforms to NSCoding or not. Some DO-aware classes that wish to be encoded differently (for example, NSString) override that method, and usually do something like this:

    - (id) replacementObjectForPortCoder: (NSPortCoder *) portCoder {
        if ([portCoder isByref]) {
            return [super replacementObjectForPortCoder: portCoder];
        }
        return self;
    }
  • NSDistantObject is a proxy (NSProxy) that stands in for a remote object and forwards any messages over to the remote object. The same NSDistantObject type is returned by the default NSObject implementation of replacementObjectForPortCoder:, and this instance is what gets serialized and sent over the connection (and deserialized to a NSDistantObject on the other end).

  • NSConnection represents a connection (described by a pair of ports). NSConnection stores local and remote object maps, to intern created proxies (NSDistantObjects) when receiving or sending objects. NSConnection actually implements forwarding method invocations, and also serves as a port's delegate, handling received messages.

You would think that NSPort, NSPortMessage, NSPortNameServer, and NSPortCoder do not "know" they're being used for DO/RPC, and are generic enough to be used for regular communication directly. This is almost true, but DO specifics pop up in unexpected places.


  • https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DistrObjects/DistrObjects.html
  • GNUstep implementation


Contributing

If you are familiar with how GitHub pull requests work, you should feel right at home contributing to Darling.

Fork the repository

Locate the repository that you made changes in on GitHub. The following command can help.

$ cd darling/src/external/less
$ git remote get-url origin

If it is an https scheme then you can paste the URL directly into your browser.

Once at the page for the repository you made changes to, click the Fork button. GitHub will then take you to the page for your fork of that repository. The last step here is to copy the URL for your fork. Use the Clone or download button to copy it.

Commit and push your changes

Create and check out a branch describing your changes. In this example, we will use reinvent-wheel.

$ git checkout -b reinvent-wheel

Next, add your fork as a remote.

$ git remote add my-fork git@github.com:user/darling-less.git

After this, push your commits to your fork.

$ git push -u my-fork reinvent-wheel

The -u flag is only necessary the first time you push a branch to your fork; it makes the local branch track the one in your fork.

Submit a pull request

On the GitHub page of your fork, select the branch you just pushed from the branch dropdown menu (you may need to reload). Click the New pull request button. Give it a useful title and a descriptive comment. After this, you can click Create pull request.

After this, your changes will be reviewed by a Darling Project member.


LLDB

We provide a build of LLDB that is known to work under Darling. It is built from vanilla sources, i.e. without any modifications.

That doesn't mean it works reliably. Fully supporting a debugger is a complex task, so LLDB is known to be buggy under Darling.

Troubleshooting LLDB

If you want to troubleshoot a problem with how LLDB runs under Darling, you need to make debugserver produce a log file.

If running debugserver separately, add -l targetfile to its arguments. If using LLDB directly (which spawns debugserver as a subprocess automatically), pass an additional environment variable to LLDB, for example:

$ LLDB_DEBUGSERVER_LOG_FILE=/tmp/debugserver.log ./lldb ./application


External DebugServer

If you're having trouble using LLDB normally, you may get luckier by running the debugserver separately.

In one terminal, start the debugserver, giving it an address to listen on (the port number here is arbitrary):

$ ./debugserver 127.0.0.1:12345 /bin/bash

In another terminal, connect to the server in LLDB:

$ ./lldb
(lldb) platform select remote-macosx
Platform: remote-macosx
Connected: no
(lldb) process connect connect://127.0.0.1:12345

Please note that, when started like this, the debuggee may be missing some environment variables by default.

Debug with core dump

If you are unable to start your application through LLDB, you can generate a core dump and load it into LLDB instead. Before a core dump can be generated, you need to tell Linux that you want one and that there is no size limit for it:

$ sudo sysctl -w kernel.core_pattern=core_dump
$ ulimit -c unlimited

Note that the core dump will be written to the current working directory on the Linux side (not the one inside Darling). cd into the directory where you want the core dump stored before executing darling shell, then run the application from there.

$ cd /path/you/want/to/store/core/dump/in
$ darling shell

If everything was set up properly, you should find a file called core_dump in that directory after the application crashes. Once you have found it, you can load it into LLDB:

$ lldb --core /path/to/core/dump/file

Built-in debugging utilities


libmalloc (the default library that implements malloc(), free() and friends) supports many debugging features that can be turned on via environment variables, among them:

  • MallocScribble (fill freed memory with 0x55)
  • MallocPreScribble (fill allocated memory with 0xaa)
  • MallocGuardEdges (guard edges of large allocations)

When this is not enough, you can use libgmalloc, which is (for the most part) a drop-in replacement for libmalloc. This is how you use it:

$ DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib DYLD_FORCE_FLAT_NAMESPACE=1 ./test

libgmalloc catches memory issues such as use-after-free and buffer overflows, and attempts to do so the moment they occur, rather than waiting for them to corrupt some internal state and manifest elsewhere. It does that by placing each allocation on its own memory page(s) and adding a guard page next to it (like MallocGuardEdges on steroids): by default, after the allocated buffer (to catch buffer overruns), or, with MALLOC_PROTECT_BEFORE set, before it (to catch buffer underruns). You'll likely want to try it both ways, with and without MALLOC_PROTECT_BEFORE. Another useful option is MALLOC_FILL_SPACE (similar to MallocPreScribble from above). See libgmalloc(3) for more details.

Objective-C and Cocoa

In Objective-C land, you can set NSZombieEnabled=YES in order to detect use-after-free of Objective-C objects. This replaces -[NSObject dealloc] with a different implementation that does not deallocate the memory, but instead turns the object into a "zombie". Upon receiving any message (which indicates a use-after-free), the zombie will log the message and abort the process.

Another useful option is NSObjCMessageLoggingEnabled=YES, which will instruct the Objective-C runtime to log all the sent messages to a file in /tmp.

In AppKit, you can set the NSShowAllViews default (e.g. with -NSShowAllViews 1 on the command line) to cause it to draw a colorful border around each view.

You can find more tips (not all of which work under Darling) here.


You can use xtrace to trace the Darwin syscalls a program makes, much like using strace on Linux:

$ xtrace vm_stat
[139] fstat64(1, 0x7fffffdfe340) -> 0
[139] host_self_trap() -> port right 2563
[139] mach_msg_trap(0x7fffffdfdfc0, MACH_SEND_MSG|MACH_RCV_MSG, 40, 1072, port 1543, 0, port 0)
[139]         {remote = copy send 2563, local = make send-once 1543, id = 219}, 16 bytes of inline data
[139]         mach_host::host_statistics64(copy send 2563, 4, 38)
[139]     mach_msg_trap() -> KERN_SUCCESS
[139]         {local = move send-once 1543, id = 319}, 168 bytes of inline data
[139]         mach_host::host_statistics64() -> [75212, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1024, 4159955152, 720014745, 503911965, 0, 4160737656, 2563, 4292863152, 4160733284, 4160733071, 0], 38 
[139] write_nocancel(1, 0x7fd456800600, 58)Mach Virtual Memory Statistics: (page size of 4096 bytes)
-> 58
[139] write_nocancel(1, 0x7fd456800600, 49)Pages free:                               75212.
-> 49

xtrace can trace both BSD syscalls and Mach traps. For mach_msg() in particular, xtrace additionally displays the message being sent or received, as well as tries to decode it as a MIG routine call or reply.

Note that xtrace only traces emulated Darwin syscalls, so any native Linux syscalls made (usually by native ELF libraries) will not be displayed, which means information and open file descriptors may appear to come from nowhere in those cases.

Generating stubs

Darling has a stub generator that is capable of generating stubs for C and Objective-C frameworks and shared libraries. A computer running macOS is required to run the stub generator.


You don't need to do this step if you already have a bin folder in your home directory, with the PATH variable pointing to it. If not, copy/paste the following commands into Terminal.

Create the bin folder if it doesn't exist:

$ mkdir ~/bin

If your PATH variable does not include the bin folder, you will need to add it.

# For bash
$ echo "export PATH=\"\$HOME/bin:\$PATH\"" >> ~/.bash_profile && source ~/.bash_profile
# For zsh
% echo "export PATH=\"\$HOME/bin:\$PATH\"" >> ~/.zshenv && source ~/.zshenv

Getting the stub generator

Copy/paste the following command into Terminal. It will download both darling-stub-gen and class-dump and place them in the bin folder.

$ curl https://raw.githubusercontent.com/darlinghq/darling/master/tools/darling-stub-gen -o ~/bin/darling-stub-gen && chmod +x ~/bin/darling-stub-gen && curl https://github.com/darlinghq/class-dump/releases/download/mojave/class-dump -L -o ~/bin/class-dump && chmod +x ~/bin/class-dump

Using the stub generator

To run the stub generator, structure your arguments like this:

$ darling-stub-gen /System/Library/Frameworks/DVDPlayback.framework/DVDPlayback DVDPlayback

The process is identical for dynamic libraries.

The above command will create a folder that can be placed in the src/frameworks/ directory of Darling's source tree. It is generated from the DVDPlayback framework. Note that the first argument points to the actual binary of the framework, not the root directory of the framework.

Applying the stubs to Darling

Once you have generated the stub folder for the framework, copy that folder into Darling's source tree under src/frameworks/.

Then go to the src/frameworks/include/ directory (also located inside Darling's source tree) and create a symbolic link. The link should point to the folder inside the framework's include directory (e.g. MyNewFolder/include/MyNewFolder).


$ cd src/frameworks/include/
$ ln -s ../MyNewFolder/include/MyNewFolder MyNewFolder

Finally, you will need to add the folder to the build. In src/frameworks/CMakeLists.txt, add the following line: add_subdirectory(MyNewFolder). Make sure you put it in alphabetical order.

Run a build and make sure your new code compiles. After that completes, you are ready to submit a pull request.

See Contributing for how to submit a pull request. This commit is an example of a stub for a framework that was added to Darling using the process described in this article. Most notable is what it does to src/CMakeLists.txt.

Known issues

  • The stub generator does not currently generate symbols for constants. Those must be manually added if a program needs them.
  • Generating stubs for platforms outside of x86 (macOS, iOS Simulator) is not supported.
  • TODO: Figure out how to generate stubs from a dyld_shared_cache file.

High priority stuff

The intention of this page is to serve as a location for pointing developers to areas where work is most needed.

Work to be done

  • Reimplement CoreCrypto.

    CoreCrypto's source code is publicly available, but its license prevents us from using it for Darling. Luckily, it's not that difficult! Some work has already been done. Having CoreCrypto will also enable us to build a recent version of CommonCrypto.

  • AppKit.framework

    More details to be added.

  • libxpc

    • Implement missing APIs - many things in xpc_dictionary are missing.
    • xpc_main() should access the calling application's bundle and automatically listen as the defined XPCService.
      • This is a little tricky, because libxpc may not use CoreFoundation to parse the Info.plist (at least not directly). It should use _NSGetExecutablePath() and xpc_create_from_plist().
  • Foundation.framework

    • Implement NSXPC* classes that wrap libxpc.
  • CoreAudio.framework

    • Implement AudioFormat and ExtAudioConverter APIs.
    • Implement AUGraph and AudioQueue utility APIs.
    • Implement various Audio Units existing by default on macOS. This includes units providing audio mixing or effects.
  • CoreServices.framework

    • Implement LaunchServices APIs for applications and file type mappings, backed by a database.
    • Implement UTI (Uniform Type Identifiers) API, also backed by a database.

Google Summer of Code

2019 tasks

  • Implement NSUserNotification over libnotify or directly over the D-Bus API.
  • Implement launch services & open(1).
  • Fix all warnings in CoreFoundation, Foundation, AppKit, and CoreGraphics.
  • Get TextEdit to compile and fully function.


NOTE: This is not extensively tested and may break.


To package Darling for Debian-based systems, we provide the tools/makedeb script.

All output files are stored in the parent directory (..) because of a technical limitation of debuild.

Install Dependencies

$ sudo apt install devscripts equivs dpkg-dev debhelper

Building Binary Packages

Install Build Dependencies

$ sudo mk-build-deps -ir


Build

$ tools/makedeb

Building Source Packages

Use this if you want to upload to a service like Launchpad.

$ tools/makedeb --dsc



Building RPM Packages

  1. Install docker and docker-compose
  2. cd rpm
  3. Build the docker image: docker-compose build rpm
  4. Build the rpms: docker-compose run rpm (Can take over half an hour)
  5. Now you can run dnf install RPMS/x86_64/darling*.rpm
  6. If using SELinux run setsebool -P mmap_low_allowed 1 to allow Darling low level access

Build for other operating systems

By default, the image builds for Fedora 30. To use a different OS, simply run:

RPM_OS=fedora:31 docker-compose build rpm

Future improvements