Elixir/Ports and external process wiring
Revision as of 17:12, 16 October 2025
This is a short programming adventure which goes into piping and signaling between processes.
Context: controlling "rsync"
This exploration began when I wrote a simple library to run rsync from an Elixir program[1], to transfer files in a background thread while monitoring progress. I was hoping to learn how to interface with long-lived external processes, and I ended up learning more than I wished for.
Starting rsync and reading from it went very well, mostly thanks to the --info=progress2 option which reports progress with a simple columnar format that can be easily parsed:
3,342,336 33% 3.14MB/s 0:00:02

In case you're here to integrate with rsync, there's also a slightly different --progress option which reports statistics per file, and an option --itemize-changes which can be included to get information about the operations taken on each file, but in my case I care more about the overall transfer progress.
On the terminal the progress line is updated in-place by restarting the line with the fun carriage return control character 0x0d or \r. This is apparently named after pushing the physical paper carriage of a typewriter, and on a terminal it erases the current line so it can be written again! But over a pipe we see this as a regular byte in the stream, like "-old line-^M-new line-". Disagreements about carriage return vs. newline have caused eye-rolling since the dawn of personal computing, but we can double-check the rsync source code and see that it formats output with a carriage return on every platform:
rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);
My library starts rsync using Elixir's low-level Port call, which maps directly to the base Erlang open_port[2] implementation:
Port.open(
{:spawn_executable, rsync_path},
[
:binary,
:exit_status,
:hide,
:use_stdio,
:stderr_to_stdout,
args:
~w(-a --info=progress2) ++
rsync_args ++
sources ++
[args[:target]],
env: env
]
)

This is where Erlang/OTP really starts to shine: by opening the port inside a dedicated gen_server[3] we have a separate process communicating with rsync, which receives an asynchronous message like {:data, text_line} for each progress line. It's easy to parse the line, update some internal state and optionally send a progress summary to the code calling the library.
Problem: runaway processes
This would have been the end of the story, but I'm a very flat-footed and iterative developer, and while calling my rsync library from my application under development I would often kill the program abruptly, by crashing or by typing <control>-c in the terminal. Dozens of times. What I found is that the rsync transfers would continue to run in the background even after Elixir had completely shut down.
That would have to change: leaving overlapping file transfers running unmonitored is exactly what I wanted to avoid by having Elixir control the process in the first place. Once the BEAM stopped, there was no way to clearly identify and kill the stray rsync processes.
In fact, killing lower-level processes when a higher-level supervising process dies is central to the BEAM concept of supervisors[4], which has earned the virtual machine its reputation for being legendarily robust. Why would some external processes stop and others not? There seemed to be no way to send a signal or close the port to stop the process, either.
Bad assumption: pipe-like processes
A straightforward use case for external processes would be to run a standard transformation such as compression or decompression. A program like gzip or cat will stop once it detects that its input has ended, because the main loop usually makes a C system call to read like this:
ssize_t n_read = read (input_desc, buf, bufsize);
if (n_read < 0) { error... }
if (n_read == 0) { end of file... }

The manual for read[5] explains that reading 0 bytes indicates the end of file, and a negative number indicates an error such as the input file descriptor already being closed. If you think this sounds weird, I would agree: how do we tell the difference between a stream which is stalled and one which has ended? Does the calling process yield control until input arrives? How do we know if more than bufsize bytes are available? If that word salad excites you, read more about O_NONBLOCK[6] and unix pipes[7].
But here we'll focus on how processes affect each other through pipes. Surprising answer: not very much! Try opening a "cat" in the terminal and then type <control>-d to "send" an end-of-file. Oh no, you killed it! You didn't actually send anything; instead the <control>-d is interpreted by the terminal's line discipline, which makes the pending read return zero bytes, so cat sees an end of file and exits on its own. This is similar to how <control>-c does not send a character to the program: the terminal driver interprets it and delivers an interrupt signal to the foreground process, completely independently of the data pipe. My entry point to learning more is this stty webzine[8] by Julia Evans. Go ahead, try it: stty -a
Any special behavior at the other end of a pipe is the result of intentional programming decisions, and "end of file" (EOF) is more a convention than a real thing. Now try opening "watch ls" or "sleep 60" and try <control>-d all you want: no effect. You did close its stdin, but nobody cares because it wasn't listening anyway.
Back to the problem at hand: as it turns out, "rsync" is in this latter category of programs, seeing itself as a daemon which should continue even when its input is closed. This makes sense enough, since rsync expects no user input and its output is just a side-effect of its main purpose.
The BEAM, however, assumes the connected process behaves like the former kind, so nothing needs to be done to clean up a dangling external process: it will end itself as soon as the Port is closed or the BEAM exits. If the external process is known to not behave this way, the recommendation is to wrap it in a shell script which converts a closed stdin into a kill signal.[9]
BEAM internal and external processes
BEAM applications are built out of supervision trees and excel at managing huge numbers of parallel actor processes, all scheduled internally. Although the communities mostly share a philosophy of running as much as possible inside the VM, because it builds on this strength and simplifies away much interface glue and context switching, on many occasions an application will still start an external OS process. There are some straightforward ways to simply run a command line, which might be familiar to programmers coming from another language: os:cmd takes a string and runs the thing. At a lower level, external programs are managed through a Port, a flexible abstraction allowing a backend driver to communicate data in and out, and to send some control signals such as reporting an external process's exit and its exit status.
When it comes to internal processes, BEAM is among the most mature and robust, achieved by good isolation and by its hierarchical supervisors liberally pruning entire subprocess trees at the first sign of going out of specification. But for external processes, results are mixed. Some programs are twitchy and exit easily, for example cat, but others like the BEAM itself or a long-running server are built to survive any ordinary I/O glitch or accidental mashing of the keyboard. Furthermore, this will usually be a fundamental assumption baked into the program, with no configuration to make it behave differently.
Reliable clean up
What I discovered is that the BEAM external process machinery assumes that its spawned processes will respond to standard input and output shutting down, the so-called end of file, for example what happens when <control>-d is typed into the shell. This works very well for a subprocess like bash but has no effect on a program like sleep or rsync.
The hole created by this mismatch could be filled by something shaped like the BEAM's own supervisors. I would expect the VM to spawn as many processes as necessary, but I wouldn't expect a child process to outlive the VM just because it happens to be insensitive to end of file. Instead, I was hoping that the VM would try harder to kill these processes as the Port is closed, or when the VM halts.
In fact, letting a child process outlive the one that spawned it is unusual enough that the condition has a name: an "orphan process". The POSIX standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists, but this is a "should have" and not a must. The reason it can be undesirable to allow this at all is that the orphan process becomes entirely responsible for itself, potentially running forever without any further intervention. Even the system init process tracks its own children, and can restart them in response to service commands, but init knows nothing about its adopted orphans.
When I ran into this issue, I found the suggested workaround of writing a wrapper script to track its child (the program originally intended to run), listen for the end of file from BEAM, and kill the external program. How much simpler it would be if this workaround were already built into the Erlang Port module!
It's always a pleasure to ask questions in the BEAM communities, which have earned a reputation for being friendly and open. The first big tip was to look at the third-party library erlexec, which demonstrates some best practices that might be backported into the language itself. Everyone speaking on the problem has generally agreed that the fragile clean-up of external processes is a bug, and supported the idea that one of the "terminate" signals should be sent to spawned programs.
Which signal to use is still an open issue: there's the softer HUP, which says "Goodbye!" and which the program is free to interpret as it will; the mid-level TERM, which I prefer because it makes the intention explicit but can still be blocked or handled gracefully if needed; and KILL, which is bursting with destructive potential. The world of unix signals is a wild and scary place, on which there's a refreshing diversity of opinion around the Internet.
Inside the BEAM
Despite its retro-futuristic appearance as one of the most time-tested yet forward-facing programming environments, I was brought back to Earth by digging around inside the VM to find that it's just a C program like any other. There's nothing holy about the BEAM emulator: there are some good and some great ideas about functional languages, and they're buried in a mass of ancient procedural ifdefs, with unnerving memory management and typedefs wrapping the size of an integer on various platforms, just like you might find in other relics from the dark ages of computing, next to the Firefox or Linux kernel source code.
Tantalizingly, message-passing is at the core of the VM, but it is not a first-class concept when reaching out to external processes. There's some fancy footwork with pipes and dup, but communication is done with enums, unions, and bit-rattling stdlib calls. I love it, but... it might be something to look at on another rainy day.
[1] https://hexdocs.pm/rsync/Rsync.html
[2] https://www.erlang.org/doc/apps/erts/erlang.html#open_port/2
[3] https://www.erlang.org/doc/apps/stdlib/gen_server.html
[4] https://www.erlang.org/doc/system/sup_princ.html
[5] https://man.archlinux.org/man/read.2
[6] https://man.archlinux.org/man/open.2.en#O_NONBLOCK
[7] https://man.archlinux.org/man/pipe.7.en
[8] https://wizardzines.com/comics/stty/
[9] https://hexdocs.pm/elixir/main/Port.html#module-orphan-operating-system-processes