Elixir/Ports and external process wiring

From ludd, by Adamw.
A deceptively simple programming adventure veers unexpectedly into piping and signaling between unix processes.


== Context: controlling "rsync" ==
{{Project|source=https://gitlab.com/adamwight/rsync_ex/|status=beta|url=https://hexdocs.pm/rsync/Rsync.html}}


My exploration begins while writing a beta-quality library for Elixir to transfer files in the background and monitor progress using rsync.
I was excited to learn how to interface with long-lived external processes—and this project offered more than I hoped for.
 
{{Aside|text=<p>[[w:rsync|Rsync]] is the standard utility for file transfers, locally or over a network.  It can resume incomplete transfers and synchronize directories efficiently, and after almost 30 years of usage rsync can be trusted to handle any edge case.</p>
<p>BEAM<ref>The virtual machine shared by Erlang, Elixir, Gleam, Ash, and so on: [https://blog.stenmans.org/theBeamBook/ the BEAM Book]</ref> is a fairly unique ecosystem in which it's not considered deviant to reinvent a rounder wheel: an external dependency like "cron" will often be ported into native Erlang—but the complexity of rsync and its dependence on a matching remote daemon makes it unlikely that it will be rewritten any time soon, which is why I've decided to wrap external command execution in a library.</p>}}
 
[[File:Monkey eating.jpg|alt=A Toque macaque (Macaca radiata) Monkey eating peanuts. Pictured in Bangalore, India|right|300x300px]]
 
=== Naïve shelling ===


Starting rsync should be as easy as calling out to a shell:<syntaxhighlight lang="elixir">
System.shell("rsync -a source target")
</syntaxhighlight>
This has a few shortcomings, starting with how one would pass it dynamic paths.  It's unsafe to use string interpolation (<code>"#{source}"</code>): consider what could happen if the filenames include unescaped whitespace or special shell characters such as ";".


=== Safe path handling ===
We turn next to <code>System.cmd</code>, which takes a raw argv and can't be fooled by special characters in the path arguments:<syntaxhighlight lang="elixir">
System.find_executable(rsync_path)
|> System.cmd(~w(-a) ++ [source, target])
</syntaxhighlight>For a short job this is perfect, but for longer transfers our program loses control and observability, waiting indefinitely for a monolithic command to return.
 
=== Asynchronous call and communication ===
To run an external process asynchronously we reach for Elixir's low-level <code>Port.open</code>, nothing but a one-line wrapper<ref>See the [https://github.com/elixir-lang/elixir/blob/809b035dccf046b7b7b4422f42cfb6d075df71d2/lib/elixir/lib/port.ex#L232 port.ex source code]</ref> passing its parameters directly to ERTS <code>open_port</code><ref>[https://www.erlang.org/doc/apps/erts/erlang.html#open_port/2 Erlang <code>open_port</code> docs]</ref>.  This function is tremendously flexible; here we turn a few knobs:<syntaxhighlight lang="elixir">
Port.open(
  {:spawn_executable, rsync_path},
  [
    :binary,
    :exit_status,
    :hide,
    :use_stdio,
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [args[:target]],
    env: env
  ]
)
</syntaxhighlight>
 
{{Aside|text=
'''Rsync progress reporting options'''
 
There are a variety of ways to report progress:
 
; <code>-v</code> : list each filename as it's transferred
 
; <code>--progress</code> : report statistics per file
 
; <code>--info=progress2</code> : report overall progress
 
; <code>--itemize-changes</code> : list the operations taken on each file
 
; <code>--out-format=FORMAT</code> : any custom format string following rsyncd.conf's <code>log format</code><ref>[https://man.archlinux.org/man/rsyncd.conf.5#log~2 rsyncd.conf log format] docs</ref>
}}
 
Rsync outputs <code>--info=progress2</code> lines like so:<syntaxhighlight lang="text">
       overall percent complete   time remaining
bytes transferred |  transfer speed    |
         |        |        |           |
      3,342,336  33%    3.14MB/s    0:00:02
</syntaxhighlight>
 
The controlling Port captures these lines, which are sent to the library's <code>handle_info</code> callback as <code>{:data, line}</code> messages.  After the transfer is finished we receive a conclusive <code>{:exit_status, status_code}</code> message.
 
As a first step, we extract the overall_percent_done column and flag any unrecognized output:
<syntaxhighlight lang="elixir">
with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
     percent_done_text when percent_done_text != nil <- Enum.at(terms, 1),
     {percent_done, "%"} <- Float.parse(percent_done_text) do
  percent_done
else
  _ ->
    {:unknown, line}
end
</syntaxhighlight>The <code>trim</code> is lifting more than its weight here: it lets us completely ignore spacing and newline trickery—and ignores the leading carriage return before each line, seen in the rsync source code:<ref>[https://github.com/RsyncProject/rsync/blob/797e17fc4a6f15e3b1756538a9f812b63942686f/progress.c#L129 rsync/progress.c] source code</ref>
<syntaxhighlight lang="c">
rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);
</syntaxhighlight>Carriage return <code>\r</code> deserves special mention: this is the first "control" character we come across and it looks the same as an ordinary byte in the binary data coming over the pipe from rsync, similar to newline <code>\n</code>.  Its normal role is to control the terminal emulator, rewinding the cursor so that the current line can be overwritten!  And like newline, carriage return can be ignored.  Control signaling is exactly where this project goes haywire, and the leaky category distinction between data and control seems to be a repeated theme in inter-process communication.  The reality is not so much data vs. control as a sequence of layers, like with [[w:OSI model|networking]].
 
{{Aside|text=
[[File:Chinese typewriter 03.jpg|right|200x200px]]
 
On the terminal, rsync progress lines are updated in place by beginning each line with a [[w:Carriage return|carriage return]] control character, <code>\r</code>, <code>0x0d</code> sometimes rendered as <code>^M</code>.  Try this command in a terminal:<syntaxhighlight lang="shell">
# echo "three^Mtwo"
twoee
</syntaxhighlight>
You'll have to use <control>-v <control>-m to type a literal carriage return; copy-and-paste won't work.
 
The character is named after the pushing of a physical typewriter carriage to return to the beginning of the current line without feeding the roller to a new line.
 
[[File:Baboons Playing in Chobe National Park-crlf.jpg|left|300x300px|Three young baboons playing on a rock ledge.  Two are on the ridge and one below, grabbing the tail of another.  A meme font shows "\r", "\n", and "\r\n" personified as each baboon.]]
[[w:Newline#Issues_with_different_newline_formats|Disagreement about carriage return]] vs. line feed has caused eye-rolling since the dawn of personal computing.
}}
 
== OTP generic server ==
The Port API is convenient enough so far, but Erlang/OTP really starts to shine once we wrap each Port connection under a <code>gen_server</code><ref>[https://www.erlang.org/doc/apps/stdlib/gen_server.html Erlang gen_server docs]</ref> module, giving us several properties for free: A dedicated application thread coordinates with its rsync process independent of anything else.  Input and output are asynchronous and buffered, but handled sequentially in a thread-safe way.  The gen_server holds internal state including the up-to-date completion percentage.  And the caller can request updates as needed, or it can listen for push messages with the parsed statistics.
 
This gen_server is also expected to run safely under an OTP supervision tree<ref>[https://adoptingerlang.org/docs/development/supervision_trees/ "Supervision Trees"] chapter from [https://adoptingerlang.org/ Adopting Erlang]</ref>, but this is where our dream falls apart for the moment.  The Port already watches for rsync completion or failure and reports upwards to its caller, but we fail at the critical property of propagating termination downwards to shut down rsync if the calling code or our library module crashes.


== Problem: runaway processes ==
[[File:CargoNet Di 12 Euro 4000 Lønsdal - Bolna.jpg|thumb]]
The unpleasant real-world consequence is that rsync transfers will continue to run in the background even after Elixir kills our gen_server or shuts down, because the BEAM has no way of stopping the external process.


It's possible to find the operating system PID of the child process with <code>Port.info(port, :os_pid)</code> and send it a signal by shelling out to unix <code>kill PID</code>, but BEAM doesn't include built-in functions to send a signal to an OS process, and there is an ugly race condition between closing the port and sending this signal.  We'll keep looking for another way to "link" the processes.


To debug what happens during <code>port_close</code> and to eliminate variables, I tried spawning <code>sleep 60</code> instead of rsync, and I found that it behaves in exactly the same way: hanging until <code>sleep</code> ends naturally, regardless of what happened in Elixir or whether its pipes are still open.  This happens to have been a lucky choice, as I learned later: "sleep" is daemon-like and so similar to rsync, but its behavior is much simpler to reason about.


== Bad assumption: pipe-like processes ==
A pipeline like <code>gzip</code> or <code>cat</code> is built to read from its input and write to its output.  We can roughly group the different styles of command-line application into "pipeline" programs which read and write, "interactive" programs which require user input, and "daemon" programs which are designed to run in the background.  Some programs support multiple modes depending on the arguments given at launch, or by detecting the terminal using <code>isatty</code><ref>[https://man.archlinux.org/man/isatty.3.en docs for <code>isatty</code>]</ref>.  The BEAM is currently optimized to interface with pipeline programs and it assumes that the external process will stop when its "standard input" is closed.


A typical pipeline program will stop once it detects that input has ended, for example by calling <code>read</code><ref>[https://man.archlinux.org/man/read.2 libc <code>read</code> docs]</ref> in a loop:<syntaxhighlight lang="c">
ssize_t size_read = read (input_desc, buf, bufsize);
if (size_read < 0) { error... }
if (size_read == 0) { end of file... }
</syntaxhighlight>


If the program does blocking I/O, then a zero-byte <code>read</code> indicates the end of file condition.  A program which does asynchronous I/O with <code>O_NONBLOCK</code><ref>[https://man.archlinux.org/man/open.2.en#O_NONBLOCK O_NONBLOCK docs]</ref> might instead detect EOF by listening for the <code>HUP</code> hang-up signal, which can be arranged (TODO: document how this can be done with <code>prctl</code>, and on which platforms).


But here we'll focus on how processes can more generally affect each other through pipes.  Surprising answer: without much effect!  You can experiment with the <code>/dev/null</code> device which behaves like a closed pipe, for example compare these two commands:


<syntaxhighlight lang="shell">
cat < /dev/null

sleep 10 < /dev/null
</syntaxhighlight><code>cat</code> exits immediately, but <code>sleep</code> does its thing as usual.


You could do the same experiment by opening a "cat" in the terminal and then typing <control>-d to "send" an end-of-file.  Interestingly, what happened here is that <control>-d is interpreted by bash, which responds by closing its pipe connected to standard input of the child process.  This is similar to how <control>-c is not sending a character but is interpreted by the terminal, trapped by the shell and forwarded as an interrupt signal to the child process, completely independently of the data pipe.  My entry point to learning more is this stty webzine<ref>[https://wizardzines.com/comics/stty/ ★ wizard zines ★: stty]</ref> by Julia Evans.  Dump information about your own terminal emulator: <code>stty -a</code>
 
Any special behavior at the other end of a pipe is the result of intentional programming decisions, and "end of file" (EOF) is more a convention than a hard reality.  A program with a chaotic disposition could even reopen stdin after it was closed and connect it to something else, to the great surprise of friends and neighbors.
 
Back to the problem at hand, "rsync" is in the category of "daemon-like" programs which will carry on even after standard input is closed.  This makes sense enough, since rsync isn't interactive and any output is just a side effect of its main purpose.
 
== Shimming can kill ==
A small shim can adapt a daemon-like program to behave more like a pipeline.  The shim is sensitive to stdin closing or SIGHUP, and when this is detected it converts this into a stronger signal like SIGTERM which it forwards to its own child.  This is the idea behind a suggested shell script<ref>[https://hexdocs.pm/elixir/1.19.0/Port.html#module-orphan-operating-system-processes Elixir Port docs showing a shim script]</ref> for Elixir, and the <code>erlexec</code><ref name=":0">[https://hexdocs.pm/erlexec/readme.html <code>erlexec</code> library]</ref> library.  The opposite adapter can be found in the [[w:nohup|nohup]] shell command and the grimsby<ref>[https://github.com/shortishly/grimsby <code>grimsby</code> library]</ref> library: these will keep standard in and/or standard out open for the child process even after the parent exits, so that a pipe-like program can behave more like a daemon.
 
I used the shim approach in my rsync library and it includes a small C program<ref>[https://gitlab.com/adamwight/rsync_ex/-/blob/main/src/main.c?ref_type=heads rsync_ex C shim program]</ref> which wraps rsync and makes it sensitive to BEAM <code>port_close</code>.  It's featherweight, leaving pipes unchanged as it passes control to rsync.  Here are the business parts:<syntaxhighlight lang="c">// Set up a fail-safe to self-signal with HUP if the controlling process dies.
prctl(PR_SET_PDEATHSIG, SIGHUP);</syntaxhighlight><syntaxhighlight lang="c">
void handle_signal(int signum) {
  if (signum == SIGHUP && child_pid > 0) {
    // Send the child TERM so that rsync can perform clean-up such as shutting down a remote server.
    kill(child_pid, SIGTERM);
  }
}
</syntaxhighlight>


== Reliable clean up ==
{{Project|status=in review|url=https://erlangforums.com/t/open-port-and-zombie-processes|source=https://github.com/erlang/otp/pull/9453}}
It's always a pleasure to ask questions in the BEAM communities; they deserve their reputation for being friendly and open.  The first big tip was to look at the third-party library <code>erlexec</code><ref name=":0" />, which demonstrates emerging best practices which could be backported into the language itself.  Everyone speaking on the problem generally agrees that the fragile clean-up of external processes is a bug, and supports the idea that some flavor of "terminate" signal should be sent to spawned programs when the port is closed.
[[File:Itinerant glassworker exhibition with spinning wheel and steam engine.jpg|thumb]]
I won't hide my disappointment that the required core changes are mostly in an auxiliary C program rather than written in Erlang or even in the BEAM itself, but it was still fascinating to open such an elegant black box and find the technological equivalent of a steam engine inside.  All of the futuristic, high-level features we've come to know actually map closely to a few scraps of wizardry with ordinary pipes<ref>[https://man.archlinux.org/man/pipe.7.en Overview of unix pipes]</ref>, using libc's pipe<ref>[https://man.archlinux.org/man/pipe.2.en Docs for the <code>pipe</code> syscall]</ref>, read, write, and select<ref>[https://man.archlinux.org/man/select.2.en libc <code>select</code> docs]</ref>.
 
Port drivers<ref>[https://www.erlang.org/doc/system/ports.html Erlang ports docs]</ref> are fundamental to ERTS, and several levels of port wiring are involved in launching external processes: the spawn driver starts a forker driver which sends a control message to <code>erl_child_setup</code> to execute your external command.  Each BEAM has a single erl_child_setup process to watch over all children.  This architecture reflects the Supervisor paradigm and we can leverage it to produce some of the same properties: the subprocess can buffer reads and writes asynchronously and handle them sequentially; and if the BEAM crashes then erl_child_setup can detect the condition and do its own cleanup.
 
Letting a child process outlive its controlling process leaves the child in a state called "orphaned" in POSIX, and the standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists.  This can be seen as undesirable because unix itself has a paradigm similar to OTP's Supervisors, in which each parent is responsible for its children.  Without supervision, a process could potentially run forever or do naughty things.  The system <code>init</code> process starts and tracks its own children, and can restart them in response to service commands.  But init will know nothing about adopted, orphan processes or how to monitor and restart them.
 
The patch [https://github.com/erlang/otp/pull/9453 PR#9453] adapting port_close to SIGTERM is waiting for review and responses look generally positive so far.
 
{{Aside|text='''Which signal?'''
 
Which signal to use is still an open question:
 
; <code>HUP</code> : sent to a process when its standard input stream is closed<ref>[https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/V1_chap11.html#tag_11_01_10 POSIX standard "General Terminal Interface: Modem Disconnect"]</ref>
 
; <code>TERM</code> : has a clear intention of "kill this thing" but still possible to trap at the target and handle in a customized way
 
; <code>KILL</code> : bursting with destructive potential, this signal cannot be stopped and you may not clean up


There is a refreshing diversity of opinion, so it could be worthwhile to make the signal configurable for each port.
}}


== TODO: consistency with unix process groups ==


... there is something fun here about how unix already has process tree behaviors which are close analogues to a BEAM supervisor tree.


== Future directions ==
Discussion threads also included some notable grumbling about the Port API in general; it seems this part of ERTS is overdue for a larger redesign.


There's a good opportunity to unify the different platform implementations: Windows lacks the erl_child_setup layer entirely, for example.


Another idea to borrow from the erlexec library is an option to kill the entire process group of a child, which is shared by any descendants that haven't explicitly broken out of the original group.  This would be useful for managing deep trees of external processes launched by a forked command.


== References ==

Latest revision as of 11:52, 24 October 2025

A deceivingly simple programming adventure veers unexpectedly into piping and signaling between unix processes.

Context: controlling "rsync"


My exploration begins while writing a beta-quality library for Elixir to transfer files in the background and monitor progress using rsync.

I was excited to learn how to interface with long-lived external processes—and this project offered more than I hoped for.



A Toque macaque (Macaca radiata) Monkey eating peanuts. Pictured in Bangalore, India

Naïve shelling

Starting rsync should be as easy as calling out to a shell:

System.shell("rsync -a source target")

This has a few shortcomings, starting with how one would pass it dynamic paths. It's unsafe to use string interpolation ("#{source}"): consider what could happen if the filenames include unescaped whitespace or special shell characters such as ";".

Safe path handling

We turn next to System.cmd, which takes a raw argv and can't be fooled by special characters in the path arguments:

System.find_executable(rsync_path)
|> System.cmd(~w(-a) ++ [source, target])

For a short job this is perfect, but for longer transfers our program loses control and observability, waiting indefinitely for a monolithic command to return.

Asynchronous call and communication

To run an external process asynchronously we reach for Elixir's low-level Port.open, nothing but a one-line wrapper[2] passing its parameters directly to ERTS open_port[3]. This function is tremendously flexible; here we turn a few knobs:

Port.open(
  {:spawn_executable, rsync_path},
  [
    :binary,
    :exit_status,
    :hide,
    :use_stdio,
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [args[:target]],
    env: env
  ]
)



Rsync outputs --info=progress2 lines like so:

       overall percent complete   time remaining
bytes transferred |  transfer speed    |
         |        |        |           |
      3,342,336  33%    3.14MB/s    0:00:02

The controlling Port captures these lines and delivers each one to the library's handle_info callback as a {:data, line} message. After the transfer is finished we receive a conclusive {:exit_status, status_code} message.

As a first step, we extract the overall_percent_done column and flag any unrecognized output:

with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
     percent_done_text when percent_done_text != nil <- Enum.at(terms, 1),
     {percent_done, "%"} <- Float.parse(percent_done_text) do
  percent_done
else
  _ ->
    {:unknown, line}
end

The trim option is pulling more than its weight here: it lets us ignore spacing and newline trickery, including the leading carriage return printed before each line, as seen in the rsync source code:[5]

rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);

Carriage return \r deserves special mention: it is the first "control" character we come across, and it looks the same as any ordinary byte in the binary data coming over the pipe from rsync, just like newline \n. Its normal role is to control the terminal emulator, rewinding the cursor so that the current line can be overwritten! And like newline, carriage return can simply be ignored here. Control signaling is exactly the part of this project that goes haywire, and the leaky distinction between data and control turns out to be a recurring theme in inter-process communication. The reality is not so much data vs. control as a stack of layers, much as in networking.



OTP generic server

The Port API is convenient enough so far, but Erlang/OTP really starts to shine once we wrap each Port connection under a gen_server[6] module, giving us several properties for free:

- A dedicated application thread coordinates with its rsync process, independent of anything else.
- Input and output are asynchronous and buffered, but handled sequentially in a thread-safe way.
- The gen_server holds internal state, including the up-to-date completion percentage.
- The caller can request updates as needed, or it can listen for push messages with the parsed statistics.

This gen_server is also expected to run safely under an OTP supervision tree[7], but this is where our dream falls apart for the moment. The Port already watches for rsync completion or failure and reports upwards to its caller, but we fail at the critical property of propagating termination downwards: nothing shuts down rsync if the calling code or our library module crashes.

Problem: runaway processes

The unpleasant real-world consequence is that rsync transfers will continue to run in the background even after Elixir kills our gen_server or shuts down, because the BEAM has no way of stopping the external process.

It's possible to find the operating system PID of the child process with Port.info(port, :os_pid) and send it a signal by shelling out to unix kill PID, but BEAM doesn't include built-in functions to send a signal to an OS process, and there is an ugly race condition between closing the port and sending this signal. We'll keep looking for another way to "link" the processes.

To debug what happens during port_close and to eliminate variables, I tried spawning sleep 60 instead of rsync, and found that it behaves in exactly the same way: hanging until sleep ends naturally, regardless of what happened in Elixir or whether its pipes are still open. This turned out to be a lucky choice, as I learned later: "sleep" is daemon-like and thus similar to rsync, but its behavior is much simpler to reason about.

Bad assumption: pipe-like processes

A pipeline program like gzip or cat is built to read from its input and write to its output. We can roughly group the different styles of command-line application into "pipeline" programs which read and write, "interactive" programs which require user input, and "daemon" programs which are designed to run in the background. Some programs support multiple modes depending on the arguments given at launch, or by detecting the terminal using isatty[8]. The BEAM is currently optimized to interface with pipeline programs, and it assumes that the external process will stop when its "standard input" is closed.

A typical pipeline program will stop once it detects that input has ended, for example by calling read[9] in a loop:

size_read = read (input_desc, buf, bufsize);
if (size_read < 0) { error... }
if (size_read == 0) { end of file... }

If the program does blocking I/O, then a zero-byte read indicates the end-of-file condition. A program which does asynchronous I/O with O_NONBLOCK[10] might instead detect EOF by listening for the HUP hang-up signal, which can be arranged (TODO: document how this can be done with prctl, and on which platforms).

But here we'll focus on how processes can more generally affect each other through pipes. Surprising answer: without much effect! You can experiment with the /dev/null device, which behaves like a closed pipe; for example, compare these two commands:

cat < /dev/null

sleep 10 < /dev/null

cat exits immediately, but sleep does its thing as usual.

You could run the same experiment by opening a "cat" in the terminal and then typing <control>-d to "send" an end-of-file. Interestingly, <control>-d is not a character that cat receives: it is interpreted by the terminal's line discipline, which makes the pending read on standard input return zero bytes, the end-of-file convention. This is similar to how <control>-c is not sending a character but is interpreted by the terminal and delivered as an interrupt signal to the foreground process, completely independently of the data pipe. My entry point to learning more is this stty webzine[11] by Julia Evans. Dump information about your own terminal emulator: stty -a

Any special behavior at the other end of a pipe is the result of intentional programming decisions and "end of file" (EOF) is more a convention than a hard reality. A program with a chaotic disposition could even reopen stdin after it was closed and connect it to something else, to the great surprise of friends and neighbors.

Back to the problem at hand, "rsync" is in the category of "daemon-like" programs which will carry on even after standard input is closed. This makes sense enough, since rsync isn't interactive and any output is just a side effect of its main purpose.

Shimming can kill

A small shim can adapt a daemon-like program to behave more like a pipeline. The shim is sensitive to stdin closing or SIGHUP, and when this is detected it converts the event into a stronger signal like SIGTERM, which it forwards to its own child. This is the idea behind a suggested shell script[12] for Elixir, and the erlexec[13] library. The opposite adapter can be found in the nohup shell command and the grimsby[14] library: these keep standard input and/or standard output open for the child process even after the parent exits, so that a pipe-like program can behave more like a daemon.

I used the shim approach in my rsync library: it includes a small C program[15] which wraps rsync and makes it sensitive to BEAM port_close. It's featherweight, leaving pipes unchanged as it passes control to rsync; here are the business parts:

// Set up a fail-safe to self-signal with HUP if the controlling process dies.
prctl(PR_SET_PDEATHSIG, SIGHUP);
void handle_signal(int signum) {
  if (signum == SIGHUP && child_pid > 0) {
    // Send the child TERM so that rsync can perform clean-up such as shutting down a remote server.
    kill(child_pid, SIGTERM);
  }
}

Reliable clean up

It's always a pleasure to ask questions in the BEAM communities; they deserve their reputation for being friendly and open. The first big tip was to look at the third-party library erlexec[13], which demonstrates emerging best practices that could be backported into the language itself. Everyone speaking on the problem generally agrees that the fragile clean-up of external processes is a bug, and supports the idea that some flavor of "terminate" signal should be sent to spawned programs when the port is closed.

It would be a lie to hide my disappointment that the required core changes live mostly in an auxiliary C program, not in Erlang or even in the BEAM itself, but it was still fascinating to open such an elegant black box and find the technological equivalent of a steam engine inside. All of the futuristic, high-level features we've come to know actually map onto a few scraps of wizardry with ordinary pipes[16], using libc's pipe[17], read, write, and select[18].

Port drivers[19] are fundamental to ERTS, and several levels of port wiring are involved in launching external processes: the spawn driver starts a forker driver which sends a control message to erl_child_setup to execute your external command. Each BEAM has a single erl_child_setup process to watch over all children. This architecture reflects the Supervisor paradigm and we can leverage it to produce some of the same properties: the subprocess can buffer reads and writes asynchronously and handle them sequentially; and if the BEAM crashes then erl_child_setup can detect the condition and do its own cleanup.

Letting a child process outlive its controlling process leaves the child in a state called "orphaned" in POSIX, and the standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists. This can be seen as undesirable because unix itself has a paradigm similar to OTP's Supervisors, in which each parent is responsible for its children. Without supervision, a process could potentially run forever or do naughty things. The system init process starts and tracks its own children, and can restart them in response to service commands. But init will know nothing about adopted, orphan processes or how to monitor and restart them.

The patch PR#9453, adapting port_close to send SIGTERM, is waiting for review, and responses look generally positive so far.



TODO: consistency with unix process groups

... there is something fun here about how unix already has process tree behaviors which are close analogues to a BEAM supervisor tree.

Future directions

Discussion threads also included some notable grumbling about the Port API in general; it seems this part of ERTS is overdue for a larger redesign.

There's a good opportunity to unify the different platform implementations: Windows lacks the erl_child_setup layer entirely, for example.

Another idea to borrow from the erlexec library is to have an option to kill the entire process group of a child, which is shared by any descendants that haven't explicitly broken out of its original group. This would be useful for managing deep trees of external processes launched by a forked command.

References