Elixir/Ports and external process wiring

Revision as of 09:46, 17 October 2025 by Adamw (talk | contribs) (clarify)

This deceptively simple programming adventure veers unexpectedly into piping and signaling between unix processes.

Context: controlling "rsync"


My exploration begins while writing a beta-quality rsync library for Elixir, which transfers files in the background and can monitor progress. I hoped to better learn how to interface with long-lived external processes, and I got more than I wished for.


Starting rsync should be as easy as calling out to a shell:

System.shell("rsync -a source target")

This has a few shortcomings, starting with the static filenames. It feels unsafe to even demonstrate how string interpolation like #{source} could be misused, so let's skip straight to the next tool, System.cmd, which doesn't shell-expand its argv:

System.find_executable("rsync")
|> System.cmd(~w(-a) ++ [source, target])

This is safer, but the calling process loses control and gets no feedback until the transfer is complete. To run an external process asynchronously we reach for Elixir's lowest-level Port.open, which maps directly to ERTS open_port[1]. Ports are tremendously flexible; here we demonstrate turning a few knobs:

Port.open(
  {:spawn_executable, rsync_path},
  [
    :binary,
    :exit_status,
    :hide,
    :use_stdio,
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [target],
    env: env
  ]
)

Progress lines come in with a fairly self-explanatory format:

      3,342,336  33%    3.14MB/s    0:00:02



Each line of rsync output arrives at the library's handle_info callback as a {port, {:data, line}} message, and after the transfer finishes we receive a conclusive {port, {:exit_status, status_code}}.

We extract the percent_done column and strictly reject any other output:

with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
     percent_done_text when is_binary(percent_done_text) <- Enum.at(terms, 1),
     {percent_done, "%"} <- Float.parse(percent_done_text) do
  percent_done
else
  _ ->
    {:unknown, line}
end
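Wrapped in a function, the same expression can be exercised directly; the module and function names here are hypothetical, not part of the library's public API:

```elixir
defmodule ProgressParser do
  # Hypothetical module wrapping the `with` expression above.
  # Returns the percent column as a float, or {:unknown, line}
  # for anything that doesn't look like a progress line.
  def parse_progress(line) do
    with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
         percent_done_text when is_binary(percent_done_text) <- Enum.at(terms, 1),
         {percent_done, "%"} <- Float.parse(percent_done_text) do
      percent_done
    else
      _ -> {:unknown, line}
    end
  end
end

ProgressParser.parse_progress("\r      3,342,336  33%    3.14MB/s    0:00:02")
#=> 33.0
ProgressParser.parse_progress("sending incremental file list")
#=> {:unknown, "sending incremental file list"}
```

Note the leading \r in the first example passes through harmlessly, since the regex \s class matches carriage returns as well as spaces.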

The trim lets us ignore spacing and newline trickery—or even a leading carriage return as you can see in the rsync source code,

rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);



One more comment about this carriage return: the "control" character is just a byte in the binary data coming over the pipe from rsync, but it plays a control function because of how the tty interprets it. Still, a repeated theme is that data and control are leaky categories. We'll come to the more formal control side channels later.

This is where Erlang/OTP really starts to shine: by opening the port inside of a dedicated gen_server[2] we have a separate process communicating with rsync, which receives an asynchronous message like {:data, text_line} for each progress line. It's easy to parse the line, update some internal state, and optionally send a progress summary to the code calling the library.
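A minimal sketch of that wiring, assuming hypothetical names (RsyncWorker and its state map are mine, not the library's); the two handle_info clauses show the shape of the messages the port sends:

```elixir
defmodule RsyncWorker do
  # Hypothetical GenServer owning the rsync port.
  use GenServer

  def init({rsync_path, args}) do
    port =
      Port.open(
        {:spawn_executable, rsync_path},
        [:binary, :exit_status, :stderr_to_stdout, args: args]
      )

    {:ok, %{port: port, percent: nil}}
  end

  # One message per chunk of rsync output, tagged with the sending port.
  def handle_info({port, {:data, line}}, %{port: port} = state) do
    {:noreply, %{state | percent: parse(line)}}
  end

  # Sent exactly once, after rsync exits.
  def handle_info({port, {:exit_status, status}}, %{port: port} = state) do
    reason = if status == 0, do: :normal, else: {:rsync_failed, status}
    {:stop, reason, state}
  end

  # Pull the percent column out of a progress line, or nil.
  defp parse(line) do
    case line |> String.split(~r"\s", trim: true) |> Enum.at(1, "") |> Float.parse() do
      {percent, "%"} -> percent
      _ -> nil
    end
  end
end
```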

Problem: runaway processes

This would have been the end of the story, but I'm a very flat-footed and iterative developer, and as I was calling my rsync library from my application under development I would often kill the program abruptly, by crashing or by typing <control>-C in the terminal. Dozens of times. What I found is that the rsync transfers would continue to run in the background even after Elixir had completely shut down.

That would have to change: leaving overlapping file transfers running unmonitored is exactly what I wanted to avoid by having Elixir control the process in the first place. Once the BEAM stopped, there was no way to clearly identify and kill the sketchy rsyncing.

In fact, killing lower-level processes when a higher-level supervising process dies is central to the BEAM concept of supervisors[3], which has earned the virtual machine its reputation for legendary robustness. Why would some external processes stop and others not? There seemed to be no way to send a signal or close the port to stop the process, either.

Bad assumption: pipe-like processes

A straightforward use case for external processes would be to run a standard transformation such as compression or decompression. A program like gzip or cat will stop once it detects that its input has ended, because its main loop usually makes a C system call to read, something like this:

ssize_t n_read = read(input_desc, buf, bufsize);
if (n_read < 0) { /* error... */ }
if (n_read == 0) { /* end of file... */ }

The manual for read[4] explains that reading 0 bytes indicates the end of file, and a negative number indicates an error such as the input file descriptor already being closed. If you think this sounds weird, I would agree: how do we tell the difference between a stream which is stalled and one which has ended? Does the calling process yield control until input arrives? How do we know if more than bufsize bytes are available? If that word salad excites you, read more about O_NONBLOCK[5] and unix pipes[6].
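The same three-way contract shows up in Elixir's own IO functions, which return the data, :eof, or an error tuple. A StringIO device makes it easy to poke at:

```elixir
# A StringIO device stands in for a real pipe; the read contract is the
# same three-way convention: data, :eof, or {:error, reason}.
{:ok, device} = StringIO.open("one line\n")

IO.gets(device, "")  #=> "one line\n"
IO.gets(device, "")  #=> :eof
IO.gets(device, "")  #=> :eof, end of file is sticky rather than an error
```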

But here we'll focus on how processes affect each other through pipes. Surprising answer: not very much! Try opening a "cat" in the terminal and then type <control>-d to "send" an end-of-file. Oh no, you killed it! You didn't actually send anything; instead the <control>-d is interpreted by the terminal driver, which flushes the pending input, and since there is none, cat's read returns zero bytes: an end-of-file, which cat responds to by exiting. This is similar to how <control>-c doesn't send a character to the program at all; the terminal driver intercepts it and delivers an interrupt signal to the foreground process group, completely independently of the data pipe. My entry point to learning more is this stty webzine[7] by Julia Evans. Go ahead, try it: stty -a

Any special behavior at the other end of a pipe is the result of intentional programming decisions, and "end of file" (EOF) is more a convention than a real thing. Now try opening "watch ls" or "sleep 60" and try <control>-d all you want: no effect. You did present an end-of-file on its stdin, but nobody cares because it wasn't reading anyway.

Back to the problem at hand: it turns out "rsync" is in this latter category of programs, seeing itself as a daemon which should continue even when its input is closed. This makes sense, since rsync expects no user input and its output is just a side effect of its main purpose.

BEAM assumes the connected process behaves like the former, pipe-like category: nothing needs to be done to clean up a dangling external process, because it is expected to end itself as soon as the Port is closed or the BEAM exits. If the external process is known not to behave this way, the recommendation is to wrap it in a shell script which converts a closed stdin into a kill signal.[8]
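A sketch of that workaround driven from the Elixir side; the script body and file path are illustrative, not the wording from the documentation, and sleep stands in for rsync:

```elixir
# Illustrative wrapper: start the real command in the background, then block
# reading our own stdin. When the port closes (or the BEAM dies), stdin hits
# end of file, `cat` returns, and we forward a TERM signal to the child.
wrapper = """
#!/bin/sh
"$@" &
child=$!
cat > /dev/null
kill -TERM "$child" 2> /dev/null
"""

wrapper_path = Path.join(System.tmp_dir!(), "port_wrapper.sh")
File.write!(wrapper_path, wrapper)
File.chmod!(wrapper_path, 0o755)

port =
  Port.open(
    {:spawn_executable, wrapper_path},
    [:binary, :exit_status, args: ["sleep", "60"]]
  )

# Closing the port closes the wrapper's stdin, which TERM-inates sleep.
Port.close(port)
```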

BEAM internal and external processes

BEAM applications are built out of supervision trees and excel at managing huge numbers of parallel actor processes, all scheduled internally. The communities mostly share a philosophy of running as much as possible inside the VM, because it builds on this strength and simplifies away much interface glue and context switching, but on many occasions an application will still start an external OS process. There are some straightforward ways to simply run a command line, which might be familiar to programmers coming from another language: os:cmd takes a string and runs the thing. At a lower level, external programs are managed through a Port, a flexible abstraction which lets a backend driver pass data in and out and carry some control signals, such as reporting an external process's exit and its exit status.
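From Elixir, that Erlang function is reachable as :os.cmd, taking and returning a charlist:

```elixir
:os.cmd(~c"echo hello")
#=> ~c"hello\n"
```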

When it comes to internal processes, BEAM is among the most mature and robust, achieved by good isolation and by its hierarchical supervisors liberally pruning entire subprocess trees at the first sign of going out of specification. But for external processes, results are mixed. Some programs are twitchy and exit easily, cat for example, while others like the BEAM itself or a long-running server are built to survive any ordinary I/O glitch or accidental mashing of the keyboard. Furthermore, this will usually be a fundamental assumption baked into the program, with no configuration to make it behave differently.

Reliable clean up

What I discovered is that the BEAM external process library assumes that its spawned processes will respond to their standard input and output shutting down, the so-called end of file, for example what happens when <control>-d is typed into the shell. This works very well for a subprocess like bash but has no effect on a program like sleep or rsync.

The hole created by this mismatch could be filled by something shaped like the BEAM's own supervisor. I would expect the VM to spawn as many external processes as necessary, but I wouldn't expect a child process to outlive the VM just because it happens to be insensitive to end of file. Instead, I was hoping the VM would try harder to kill these processes when the Port is closed, or when the VM halts.

In fact, letting a child process outlive the one that spawned it is unusual enough that the condition has a name: an "orphan process". The POSIX standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists, but this is a "should" and not a "must". The reason it can be undesirable to allow this at all is that the orphan process becomes entirely responsible for itself, potentially running forever without any further intervention, whatever its purpose. The system init process tracks its own children and can restart them in response to service commands, but it knows nothing about its adopted orphans.

When I ran into this issue, I found the suggested workaround of writing a wrapper script to track its child (the program originally intended to run), listen for the end of file from BEAM, and kill the external program. How much simpler it would be if this workaround were already built into the Erlang Port module!

It's always a pleasure to ask questions in the BEAM communities; they have earned a reputation for being friendly and open. The first big tip was to look at the third-party library erlexec, which demonstrates some best practices that might be backported into the language itself. Everyone speaking on the problem has generally agreed that the fragile clean-up of external processes is a bug, and supported the idea that one of the "terminate" signals should be sent to spawned programs.

Which signal to use is still an open question. There's the softer HUP, which says "Goodbye!" and which the program is free to interpret as it will; the mid-level TERM, which I prefer because it makes the intention explicit but can still be blocked or handled gracefully if needed; and KILL, which is bursting with destructive potential and cannot be caught at all. The world of unix signals is a wild and scary place, and there's a refreshing diversity of opinion about it around the Internet.
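Until something like that is built in, a port's operating-system pid can be fetched and signaled by hand. This sketch shells out to kill, blunt but portable; sleep stands in for rsync:

```elixir
# Open a long-lived external process...
port =
  Port.open(
    {:spawn_executable, System.find_executable("sleep")},
    [:binary, :exit_status, args: ["60"]]
  )

# ...look up its OS-level pid, and TERM-inate it explicitly.
{:os_pid, os_pid} = Port.info(port, :os_pid)
System.cmd("kill", ["-TERM", Integer.to_string(os_pid)])

receive do
  # On unix a death by signal is typically reported as 128 + the signal
  # number, so TERM usually shows up as an exit status of 143.
  {^port, {:exit_status, status}} -> status
after
  5_000 -> :timeout
end
```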

Inside the BEAM

Despite its retro-futuristic appearance of being one of the most time-tested yet forward-facing programming environments, I was brought back to Earth by digging around inside the VM to find that it's just a C program like any other. There's nothing holy about the BEAM emulator: some good and some great ideas about functional languages are buried in a mass of ancient procedural ifdefs, with unnerving memory management and typedefs wrapping the size of an integer on various platforms, just like you might find in other relics from the dark ages of computing, alongside the Firefox or Linux kernel source code.

Tantalizingly, message-passing is at the core of the VM, but is not a first-class concept when reaching out to external processes. There's some fancy footwork with pipes and dup, but communication is done with enums, unions, and bit-rattling stdlib. I love it, but... it might be something to look at on another rainy day.