Elixir/Ports and external process wiring

Revision as of 07:00, 16 October 2025 by Adamw (increase all heading levels)

Challenge: controlling "rsync"

This exploration began as I wrote a simple library to run rsync from Elixir.[1] I was hoping to learn how to interface with long-lived external processes, in this case to transfer files and monitor progress. Starting and reading from rsync went very well, thanks to the --info=progress2 option which reports progress in a fairly machine-readable format. I was able to start the file transfer, capture status, and report it back to the Elixir caller in various ways.
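To give a feel for what "fairly machine-readable" means, here is a sketch of parsing a whole-transfer progress line. This is not the library's actual code, and the exact line layout is an assumption based on rsync's typical `--info=progress2` output; `parse_progress` is a name invented for illustration.

```python
import re

# A typical --info=progress2 line reports whole-transfer totals, e.g.:
#   "  1,234,567  45%   10.23MB/s    0:00:12"
# (bytes transferred, percent done, rate, elapsed time)
PROGRESS = re.compile(r"^\s*([\d,]+)\s+(\d+)%\s+(\S+)\s+(\d+:\d{2}:\d{2})")

def parse_progress(line):
    """Parse one progress2 line into a dict, or None if it doesn't match."""
    m = PROGRESS.match(line)
    if not m:
        return None
    return {
        "bytes": int(m.group(1).replace(",", "")),
        "percent": int(m.group(2)),
        "rate": m.group(3),
        "elapsed": m.group(4),
    }

print(parse_progress("  1,234,567  45%   10.23MB/s    0:00:12"))
```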

My library starts rsync using a low-level Port call, which maps directly to the base Erlang open_port[2] implementation:

Port.open(
  {:spawn_executable, rsync_path},
  [
    # deliver output as binaries rather than charlists
    :binary,
    # send a {port, {:exit_status, status}} message when rsync exits
    :exit_status,
    :hide,
    :use_stdio,
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [args[:target]],
    env: env
  ]
)

Problem: runaway processes

Since I was calling my rsync library from an application under development, I would often kill the program abruptly by crashing or by typing <control>-C in the terminal. What I found is that the rsync transfer would continue to run in the background even after Elixir had completely shut down.
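The runaway can be reproduced without Elixir at all. Here is a minimal Python sketch (using `sleep` as a stand-in for rsync): a process spawns a child, then dies abruptly without any cleanup, the way the BEAM does when it crashes or is stopped with <control>-C.

```python
import os, signal, subprocess, time

# The intermediate process plays the role of the BEAM: it spawns a
# long-running child, then exits abruptly without signaling it.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    child = subprocess.Popen(["sleep", "60"])    # stand-in for rsync
    os.write(w, str(child.pid).encode())
    os._exit(1)                                  # crash: no cleanup at all

os.waitpid(pid, 0)                 # the spawning process is gone now
child_pid = int(os.read(r, 32))
time.sleep(0.2)
os.kill(child_pid, 0)              # signal 0 probes: raises if dead; it isn't
print("child", child_pid, "outlived its parent")
os.kill(child_pid, signal.SIGTERM) # clean up by hand
```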

That would have to change—leaving overlapping file transfers running unmonitored is exactly what I wanted to avoid by having Elixir control the process in the first place.

Bad assumption: pipe-like processes

A common pattern is to run external processes for something like compression and decompression. A program like gzip or cat will stop once it detects that its input has ended, using a C system call like this:

ssize_t n_read = read (input_desc, buf, bufsize);
if (n_read < 0)  { /* error, e.g. the descriptor was already closed */ }
if (n_read == 0) { /* end of file: finish up and exit */ }

The manual for read[3] explains that reading 0 bytes indicates the end of file, and a negative number indicates an error such as the input file descriptor already being closed.
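This pipe-like behavior is easy to observe from any language; a small Python check (an illustration, not part of the library):

```python
import subprocess

# cat exits on its own as soon as its standard input reaches end of file.
cat = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = cat.communicate(b"hello")  # write our data, then close its stdin
print(out, cat.returncode)          # cat echoed the data, saw EOF, exited cleanly
```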

BEAM assumes the connected process behaves like this, so nothing needs to be done to clean up a dangling external process: it will end itself as soon as the Port is closed or the BEAM exits. If the external process is known not to behave this way, the recommendation is to wrap it in a shell script which converts a closed stdin into a kill signal.[4]
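The recommended wrapper is a shell script, but the mechanism is simple enough to sketch in Python: watch your own stdin, and convert end of file into a terminate signal. `run_wrapped` is a name of my own invention, not an API from any of the libraries mentioned here.

```python
import subprocess, sys

def run_wrapped(cmd, stdin=None):
    """Start cmd; when stdin reaches end of file (as happens when the
    BEAM closes the port or shuts down), terminate the child instead of
    leaving it running. Returns the child's exit code."""
    stdin = stdin if stdin is not None else sys.stdin.buffer
    child = subprocess.Popen(cmd)
    while stdin.read(4096):        # drain until end of file
        pass
    child.terminate()              # SIGTERM: explicit, but still catchable
    return child.wait()
```

With an already-closed stdin the child is terminated immediately; on POSIX, `Popen.wait` reports death by SIGTERM as -15.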

BEAM internal and external processes

BEAM applications are built out of supervision trees and excel at managing huge numbers of parallel actor processes, all scheduled internally. The communities mostly share a philosophy of running as much as possible inside of the VM, because it builds on this strength and simplifies away much interface glue and context switching, but on many occasions an application will still start an external OS process. There are some straightforward ways to simply run a command line, which might be familiar to programmers coming from another language: os:cmd takes a string and runs it as a shell command. At a lower level, external programs are managed through a Port, a flexible abstraction which allows a backend driver to communicate data in and out, and to send some control signals such as reporting an external process's exit and exit status.
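For readers coming from elsewhere, a rough Python analogy for the two styles (an illustration only, not the BEAM API): os:cmd is like a one-shot run that collects all output, while a Port is closer to a long-lived handle you stream data through, later learning the exit status.

```python
import subprocess

# One-shot, like os:cmd - run, wait, collect everything.
one_shot = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(one_shot.stdout)

# Long-lived handle, loosely like a Port - stream data in and out,
# then ask for the exit status.
port_like = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
out, _ = port_like.communicate(b"streamed")
print(out, port_like.returncode)
```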

When it comes to internal processes, the BEAM is among the most mature and robust environments, thanks to good isolation and to its hierarchical supervisors liberally pruning entire subprocess trees at the first sign of going out of specification. But for external processes, results are mixed. Some programs are twitchy and exit easily, for example cat, but others like the BEAM itself or a long-running server are built to survive any ordinary I/O glitch or accidental mashing of the keyboard. Furthermore, this behavior is usually a fundamental assumption of the program, and there will be no configuration option to make it respond differently to the same stimulus.

Reliable clean up

What I discovered is that the BEAM external process machinery assumes that its spawned processes will respond to standard input and output shutting down, the so-called end of file, as when <control>-d is typed into the shell. This works very well for a subprocess like bash but has no effect on a program like sleep or rsync.
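The contrast is easy to demonstrate; a Python sketch of the failing case (again, sleep standing in for rsync):

```python
import subprocess, time

# sleep never reads its stdin, so the end-of-file convention the BEAM
# relies on means nothing to it.
child = subprocess.Popen(["sleep", "60"], stdin=subprocess.PIPE)
child.stdin.close()                 # the "end of file" the BEAM would send
time.sleep(0.3)
still_running = child.poll() is None
print("still running after EOF:", still_running)
child.terminate()                   # what we wish the VM did for us
child.wait()
```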

The hole created by this mismatch could, interestingly, be filled by something shaped like the BEAM's own supervisor. I would expect the VM to spawn as many processes as necessary, but I wouldn't expect a child process to outlive the VM just because it happens to be insensitive to end of file. Instead, I was hoping that the VM would try harder to kill these processes as the Port is closed, or when the VM halts.

In fact, letting a child process outlive the one that spawned it is unusual enough that the condition has a name: an "orphan process". The POSIX standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists, but this is a "should" and not a "must". The reason it can be undesirable to allow this at all is that the orphan process becomes entirely responsible for itself, potentially running forever without further intervention, according to whatever its purpose happens to be. Even the system init process tracks its own children and can restart them in response to service commands, but init knows nothing about its adopted orphans.
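Adoption can be watched happening. In this Python sketch, a child is orphaned when its parent exits, and getppid() then reports the adopter (typically init, PID 1, though on some systems a per-session "subreaper" adopts instead):

```python
import os, time

r, w = os.pipe()
pid = os.fork()
if pid == 0:                        # the short-lived parent
    if os.fork() == 0:              # the soon-to-be orphan
        time.sleep(0.5)             # outlive our parent
        os.write(w, str(os.getppid()).encode())
        os._exit(0)
    os._exit(0)                     # die immediately, orphaning the child

os.waitpid(pid, 0)
adopter = int(os.read(r, 32))       # blocks until the orphan reports in
print("orphan adopted by PID", adopter)
```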

When I ran into this issue, I found the suggested workaround of writing a wrapper script to track its child (the program originally intended to run), listen for the end of file from BEAM, and kill the external program. How much simpler it would be if this workaround were already built into the Erlang Port module!

It's always a pleasure to ask questions in the BEAM communities; they have earned a reputation for being friendly and open. The first big tip was to look at the third-party library erlexec, which demonstrates some best practices that might be backported into the language itself. Everyone who spoke on the problem generally agreed that the fragile cleanup of external processes is a bug, and supported the idea that one of the "terminate" signals should be sent to spawned programs.

Which signal to use is still an open issue. There's the softer HUP, which says "Goodbye!" and which the program is free to interpret as it will; the mid-level TERM, which I prefer because it makes the intention explicit but can still be blocked or handled gracefully if needed; and KILL, which is bursting with destructive potential. The world of Unix signals is a wild and scary place, on which there's a refreshing diversity of opinion around the Internet.
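The practical difference between the signals is observable: TERM can be caught or ignored by the target, while KILL cannot. A Python demonstration (the child deliberately ignores SIGTERM):

```python
import signal, subprocess, sys, time

# Spawn a child that installs a SIGTERM handler set to "ignore", then
# prints a ready marker so we know the handler is in place.
code = (
    "import signal, time\n"
    "signal.signal(signal.SIGTERM, signal.SIG_IGN)\n"
    "print('ready', flush=True)\n"
    "time.sleep(60)\n"
)
child = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE)
child.stdout.readline()             # wait until the handler is installed
child.send_signal(signal.SIGTERM)   # the polite request, ignored here
time.sleep(0.3)
survived_term = child.poll() is None
child.kill()                        # SIGKILL is not negotiable
child.wait()
print("survived TERM:", survived_term, "exit:", child.returncode)
```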

Inside the BEAM

Despite its retro-futuristic appearance as one of the most time-tested yet forward-facing programming environments, I was brought back to Earth by digging around inside the VM and finding that it's just a C program like any other. There's nothing holy about the BEAM emulator: there are some good and some great ideas about functional languages, and they're buried in a mass of ancient procedural ifdefs, with unnerving memory management and typedefs wrapping the size of an integer on various platforms, just like you might find in other relics from the dark ages of computing, next to the Firefox or Linux kernel source code.

Tantalizingly, message-passing is at the core of the VM, but it is not a first-class concept when reaching out to external processes. There's some fancy footwork with pipes and dup, but communication is done with enums, unions, and bit-rattling stdlib calls. I love it, but... it might be something to look at on another rainy day.