Elixir/Ports and external process wiring


A deceptively simple programming adventure veers unexpectedly into piping and signaling between unix processes.

Context: controlling "rsync"


My exploration begins while writing a beta-quality library for Elixir to transfer files in the background and monitor progress using rsync.

I was excited to learn how to interface with long-lived external processes—and this project offered more than I hoped for.



A toque macaque (Macaca radiata) eating peanuts, pictured in Bangalore, India.

Naive shelling

Starting rsync should be as easy as calling out to a shell:

System.shell("rsync -a source target")

This has a few shortcomings, starting with how we pass the filenames. It would be possible to pass a dynamic path using string interpolation like #{source}, but this is risky: consider what happens if a filename includes whitespace, or even a special shell character such as ";".
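For example, a filename containing a shell metacharacter quietly turns our one command into two (the filename here is invented for illustration):

source = "notes.txt; rm -rf ~"
System.shell("rsync -a #{source} target")
# The shell runs `rsync -a notes.txt` and then `rm -rf ~ target`.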

Safe path handling

We turn next to System.cmd, which takes a raw argv and can't be fooled by special characters in the path arguments:

System.find_executable(rsync_path)
|> System.cmd(~w(-a) ++ [source, target])

For a short job this is perfect, but for longer transfers our program loses control and observability, waiting indefinitely for a monolithic command to return.

Asynchronous call and communication

To run an external process asynchronously we reach for Elixir's low-level Port.open, nothing but a one-line wrapper[1] which passes its parameters directly to ERTS open_port[2]. This function is tremendously flexible; here we turn a few knobs:

Port.open(
  {:spawn_executable, rsync_path},
  [
    # Deliver output as binaries rather than charlists.
    :binary,
    # Send an {:exit_status, code} message when the process ends.
    :exit_status,
    # Don't pop up a console window (meaningful only on Windows).
    :hide,
    # Communicate over the child's stdin and stdout.
    :use_stdio,
    # Fold stderr into the same stdout stream.
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [args[:target]],
    env: env
  ]
)



We've chosen --info=progress2, so the reported percentage means "overall percent complete". Rsync outputs these progress lines in a fairly self-explanatory columnar format:

          percent complete       time remaining
bytes transferred |  transfer speed    |
         |        |        |           |
      3,342,336  33%    3.14MB/s    0:00:02

Our Port captures the output, and each line reaches the library's handle_info callback inside a {port, {:data, line}} message. After the transfer finishes we receive a conclusive {port, {:exit_status, status_code}} message.
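As a quick sanity check outside any GenServer, assuming the Port.open call above was bound to a variable named port, the same messages can be peeked at with a bare receive:

receive do
  {^port, {:data, line}} -> IO.inspect(line, label: "rsync output")
  {^port, {:exit_status, status}} -> IO.puts("rsync exited with status #{status}")
end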

As a first step, we extract the percent_done column and flag any unrecognized output:

with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
     percent_done_text when percent_done_text != nil <- Enum.at(terms, 1),
     {percent_done, "%"} <- Float.parse(percent_done_text) do
  percent_done
else
  _ ->
    {:unknown, line}
end

The trim option is pulling more than its weight here: it lets us ignore spacing and newline trickery entirely, even the leading carriage return that can be seen in the rsync source code:[4]

rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);

Carriage return \r deserves special mention: this "control" character is just a byte in the binary data coming over the pipe from rsync, but its normal role is to control the terminal emulator, rewinding the cursor so that the current line can be overwritten!
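Feeding the sample progress line above, leading \r included, through the same split shows the control character simply disappearing into the trimming:

iex> String.split("\r      3,342,336  33%    3.14MB/s    0:00:02", ~r"\s", trim: true)
["3,342,336", "33%", "3.14MB/s", "0:00:02"]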

A repeated theme in inter-process communication is that data and control are leaky categories. We come to the more formal control side channels later.



OTP generic server

The Port API is convenient enough so far, but Erlang/OTP really starts to shine once we wrap each Port connection in a gen_server[5] module, giving us several properties for free. A dedicated BEAM process coordinates with its rsync process independently of everything else. Input and output are asynchronous and buffered, yet handled sequentially and safely in one place. The gen_server holds internal state, including the up-to-date completion percentage. And the caller can request updates as needed, or listen for push messages carrying the parsed statistics.
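Here is a minimal sketch of what such a wrapper might look like; RsyncPort and its internals are illustrative, not the library's actual module:

defmodule RsyncPort do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  # The caller can poll the latest parsed percentage at any time.
  def progress(pid), do: GenServer.call(pid, :progress)

  @impl true
  def init(opts) do
    port =
      Port.open(
        {:spawn_executable, System.find_executable("rsync")},
        [
          :binary,
          :exit_status,
          :stderr_to_stdout,
          args: ~w(-a --info=progress2) ++ [opts[:source], opts[:target]]
        ]
      )

    {:ok, %{port: port, percent_done: 0.0}}
  end

  @impl true
  def handle_call(:progress, _from, state), do: {:reply, state.percent_done, state}

  @impl true
  def handle_info({port, {:data, line}}, %{port: port} = state) do
    case parse_progress(line) do
      {:unknown, _line} -> {:noreply, state}
      percent -> {:noreply, %{state | percent_done: percent}}
    end
  end

  def handle_info({port, {:exit_status, 0}}, %{port: port} = state),
    do: {:stop, :normal, state}

  def handle_info({port, {:exit_status, status}}, %{port: port} = state),
    do: {:stop, {:rsync_failed, status}, state}

  # The same percent extraction shown earlier.
  defp parse_progress(line) do
    with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
         percent_text when percent_text != nil <- Enum.at(terms, 1),
         {percent, "%"} <- Float.parse(percent_text) do
      percent
    else
      _ -> {:unknown, line}
    end
  end
end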

This gen_server is also expected to run safely under an OTP supervision tree,[6] but this is where our dream falls apart for the moment. The Port already watches for rsync completion or failure and reports upwards to its caller, but we fail at the critical property of propagating termination downwards: shutting down rsync when the calling code or our library module crashes.
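Wiring the worker into a tree is the easy part, sketched here with the same illustrative RsyncPort module; the trouble is what happens underneath when the tree tears it down:

children = [
  # :transient so a finished transfer isn't restarted, only a crashed one.
  Supervisor.child_spec({RsyncPort, source: "photos/", target: "backup:photos/"},
    restart: :transient)
]

Supervisor.start_link(children, strategy: :one_for_one)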

Problem: runaway processes

The unpleasant real-world consequence is that rsync transfers continue to run in the background even after our gen_server is killed or the whole BEAM shuts down, because the BEAM has no mechanism for stopping the external process.

It's possible to send a signal by shelling out to unix kill PID: the child's OS pid is at least exposed through Port.info/2, but the BEAM includes no built-in function to send a signal to an OS process. Clearly we're expected to do this another way. A deeper problem with "kill" is that we want the external process to stop no matter how badly the BEAM is damaged, so we shouldn't rely on stored data or on running final clean-up logic before exiting.
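For completeness, here is a sketch of that workaround, assuming port is the Port opened earlier; it is exactly the kind of clean-up logic we just argued against relying on:

# Look up the OS pid of the spawned rsync and shell out to kill it.
{:os_pid, os_pid} = Port.info(port, :os_pid)
System.cmd("kill", ["-TERM", Integer.to_string(os_pid)])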

To debug what happens during port_close and to eliminate variables, I tried spawning sleep 60 through the same Port command, and found that it behaves exactly the same way: the OS process hangs around until the sleep ends naturally, regardless of what happened in Elixir or whether its pipes are still open. This turned out to be a lucky choice, as I learned later: "sleep" is unusual in the same way as rsync, but its behavior is much simpler to reason about.
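The experiment is easy to reproduce in iex:

port = Port.open({:spawn_executable, System.find_executable("sleep")},
                 [:binary, :exit_status, args: ["60"]])
Port.close(port)
# In another terminal, `pgrep sleep` still shows the process,
# and it exits only once its 60 seconds are up.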

Bad assumption: pipe-like processes

A pipeline tool like gzip or cat is built to read from its input and write to its output. These stop once they detect that input has ended, because the main loop usually makes a C system call to read, like this:

ssize_t n_read = read (input_desc, buf, bufsize);
if (n_read < 0) { error... }
if (n_read == 0) { end of file... }

The manual for read[7] explains that reading 0 bytes indicates the end of file, and a negative number indicates an error such as the input file descriptor already being closed. If you think this sounds weird, I would agree: how do we tell the difference between a stream which is stalled and one which has ended? Does the calling process yield control until input arrives? How do we know if more than bufsize bytes are available? If that word salad excites you, read more about O_NONBLOCK[8] and unix pipes[9].

But here we'll focus on how processes affect each other through pipes, and the surprising answer is: not very much! Try running "cat" in the terminal and then typing <control>-d to "send" an end-of-file. Oh no, you killed it! You didn't actually send anything, though: the <control>-d is interpreted by the terminal's line discipline, which makes the pending read on standard input return zero bytes, and cat takes that as end of file. This is similar to how <control>-c doesn't send a character either; the terminal driver intercepts it and delivers an interrupt signal to the foreground process, completely independently of the data pipe. My entry point to learning more is this stty webzine[10] by Julia Evans. Go ahead and try this command, what could go wrong: stty -a

Any special behavior at the other end of a pipe is the result of intentional programming decisions, and "end of file" (EOF) is more a convention than a hard reality. You could even reopen stdin from the application, to the great surprise of your friends and neighbors. For example, try running "watch ls" or "sleep 60" and pressing <control>-d all you want: no effect. The end-of-file was delivered, but nobody cared, because nobody was reading.

Back to the problem at hand, "rsync" is in this latter category of "daemon-like" programs which will carry on even after standard input is closed. This makes sense enough, since rsync isn't interactive and any output is just a side effect of its main purpose.

Shimming can kill

It's possible to write a small adapter which is sensitive to stdin closing, then converts this into a stronger signal like SIGTERM which it forwards to its own child. This is the idea behind a suggested shell script[11] for Elixir and the erlexec[12] library. The opposite adapter is also found in the nohup shell command and the grimsby[13] library: these will keep standard in and/or standard out open for the child process even after the parent exits.

I took the shim approach with my rsync library and included a small C program[14] which wraps rsync and makes it sensitive to the BEAM port_close. It's featherweight, leaving pipes unchanged as it passes control to rsync; its only real effect is to convert SIGHUP to SIGKILL (though it should have been SIGTERM; see the sidebar discussion of different signals below).

Reliable clean up

It's always a pleasure to ask questions in the BEAM communities; they have earned their reputation for being friendly and open. The first big tip was to look at the third-party library erlexec, which demonstrates emerging best practices that could be backported into the language itself. Everyone speaking on the problem generally agreed that the fragile clean-up of external processes is a bug, and supported the idea that some flavor of "terminate" signal should be sent to spawned programs.

I would be lying if I hid my disappointment that the required core changes live mostly in a C program and not actually in Erlang, but it was still fascinating to open such an elegant black box and find the technological equivalent of a steam engine inside. All of the futuristic, high-level features we've come to know map closely onto a few scraps of wizardry with ordinary pipes, using stdlib read, write, and select[15].

Port drivers[16] are fundamental to ERTS and external processes are launched through several levels of wiring: the spawn driver starts a forker driver which sends a control message to erl_child_setup to execute your external command. Each BEAM has a single erl_child_setup process to watch over all children.

Letting a child process outlive the one that spawned it leaves it in a state POSIX calls an "orphaned process", and the standard recommends that such a process be adopted by the top-level system process "init" if it exists. This can be seen as undesirable because unix itself has a paradigm similar to OTP's Supervisors, in which each parent is responsible for its children. Without supervision, a process could potentially run forever or do naughty things. The system init process starts and tracks its own children, and can restart them in response to service commands, but it knows nothing about adopted orphan processes or how to monitor and restart them.

The patch PR#9453, adapting port_close to send SIGTERM, is waiting for review, and responses look generally positive so far.



Future directions

Discussion threads also included some notable grumbling about the Port API in general; it seems this part of ERTS is overdue for a larger redesign.

There's a good opportunity to unify the different platform implementations: Windows lacks the erl_child_setup layer entirely, for example.

Another idea to borrow from the erlexec library is an option to kill the entire process group of a child, a group shared by any descendants that haven't explicitly broken out of it. This would be useful for managing deep trees of external processes launched by a forked command.

References