Elixir/Ports and external process wiring

From ludd
Revision as of 09:33, 17 October 2025

This deceptively simple programming adventure veers unexpectedly into piping and signaling between unix processes.

Context: controlling "rsync"


My exploration begins while writing a beta-quality rsync library for Elixir (source: https://gitlab.com/adamwight/rsync_ex/, docs: https://hexdocs.pm/rsync/Rsync.html) which transfers files in the background and can monitor progress. I hoped to learn how to interface with long-lived external processes, and I got more than I wished for.

[Image: A toque macaque monkey eating peanuts, pictured in Bangalore, India]

Starting rsync should be as easy as calling out to a shell:

System.shell("rsync -a source target")

This has a few shortcomings, starting with filename escaping, so at a minimum we should use System.cmd:

System.find_executable(rsync_path)
|> System.cmd(~w(-a) ++ [source, target])

However, this job blocks until the transfer is finished, and we get no feedback until completion. Elixir's low-level Port.open maps directly to ERTS open_port[1], which provides flexibility. Here we have a command turning some knobs:

Port.open(
  {:spawn_executable, rsync_path},
  [
    :binary,
    :exit_status,
    :hide,
    :use_stdio,
    :stderr_to_stdout,
    args:
      ~w(-a --info=progress2) ++
        rsync_args ++
        sources ++
        [args[:target]],
    env: env
  ]
)

Progress lines have a fairly self-explanatory format:

      3,342,336  33%    3.14MB/s    0:00:02

rsync has a variety of progress options; we chose overall progress above, so the meaning of the percentage is "overall percent complete". Here is the menu:

--info=progress2 : reports overall progress
--progress : reports statistics per file
--itemize-changes : lists the operations taken on each file



Each rsync output line is sent to the library callback handle_info as {:data, line}, and after the transfer is finished it receives a conclusive {:exit_status, status_code}.

Here we extract the percent_done column and strictly reject any other output:

with terms when terms != [] <- String.split(line, ~r"\s", trim: true),
     percent_done_text when is_binary(percent_done_text) <- Enum.at(terms, 1),
     {percent_done, "%"} <- Float.parse(percent_done_text) do
  percent_done
else
  _ ->
    {:unknown, line}
end

The trim lets us ignore spacing and newline trickery, such as the leading carriage return you can see in this line from rsync's source:

rprintf(FCLIENT, "\r%15s %3d%% %7.2f%s %s%s", ...);
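To see the extraction at work, here is the sample progress line walked through each step (the outputs shown are what the standard library returns for this input):

```elixir
line = "\r      3,342,336  33%    3.14MB/s    0:00:02"

# The regex split swallows the carriage return along with the spacing.
terms = String.split(line, ~r"\s", trim: true)
# => ["3,342,336", "33%", "3.14MB/s", "0:00:02"]

# The percentage is the second column.
percent_done_text = Enum.at(terms, 1)
# => "33%"

# Float.parse returns the parsed number plus the unparsed remainder.
Float.parse(percent_done_text)
# => {33.0, "%"}
```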



One more comment about this carriage return: on the terminal, rsync's progress line is updated in-place by emitting the carriage return control character 0x0d or \r, apparently named after pushing a typewriter's physical paper carriage back to the start of the line. On a terminal it overwrites the current line; over a pipe it is just a regular byte in the stream, like "-old line-^M-new line-". It is a byte of data, but it plays a "control" function because of how the tty interprets it. A repeated theme here is that data and control are leaky categories. (Disagreements about carriage return vs. newline have caused eye-rolling since the dawn of personal computing.)

This is where Erlang/OTP really starts to shine: by opening the port inside of a dedicated gen_server[2] we have a separate thread communicating with rsync, which receives an asynchronous message like {:data, text_line} for each progress line. It's easy to parse the line, update some internal state and optionally send a progress summary to the code calling the library.
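A minimal sketch of such a wrapper, reusing the Port options shown earlier (the module name RsyncWrapper and its overall shape are hypothetical illustrations, not the library's actual API):

```elixir
defmodule RsyncWrapper do
  @moduledoc "Hypothetical sketch of a gen_server owning an rsync port."
  use GenServer

  def start_link({rsync_path, args}) do
    GenServer.start_link(__MODULE__, {rsync_path, args})
  end

  @impl true
  def init({rsync_path, args}) do
    port =
      Port.open({:spawn_executable, rsync_path}, [
        :binary,
        :exit_status,
        :stderr_to_stdout,
        args: args
      ])

    {:ok, %{port: port, last_line: nil}}
  end

  @impl true
  # One message per chunk of rsync output; parse progress here.
  def handle_info({port, {:data, line}}, %{port: port} = state) do
    {:noreply, %{state | last_line: line}}
  end

  # The conclusive message once the external process exits.
  def handle_info({port, {:exit_status, 0}}, %{port: port} = state) do
    {:stop, :normal, state}
  end

  def handle_info({port, {:exit_status, status}}, %{port: port} = state) do
    {:stop, {:rsync_failed, status}, state}
  end
end
```

Note that the messages actually arrive tagged with the sending port, as {port, {:data, line}}, and that with the :binary option data arrives in chunks which may or may not align with output lines.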

Problem: runaway processes

This would have been the end of the story, but I'm a very flat-footed and iterative developer and as I was calling my rsync library from my application under development, I would often kill the program abruptly by crashing or by typing <control>-C in the terminal. Dozens of times. What I found is that the rsync transfers would continue to run in the background even after Elixir had completely shut down.

That would have to change: leaving overlapping file transfers running unmonitored is exactly what I wanted to avoid by having Elixir control the process in the first place. Once the BEAM stops, there is no way to clearly identify and kill the sketchy rsyncing.

In fact, killing the lower-level threads when a higher-level supervising process dies is central to the BEAM concept of supervisors[3] which has earned the virtual machine its reputation for being legendarily robust. Why would some external processes stop and others not? There seemed to be no way to send a signal or close the port to stop the process, either.

Bad assumption: pipe-like processes

A straightforward use case for external processes would be to run a standard transformation such as compression or decompression. A program like gzip or cat will stop once it detects that its input has ended, because the main loop usually makes a C system call to read like this:

ssize_t n_read = read(input_desc, buf, bufsize);
if (n_read < 0) { /* error... */ }
if (n_read == 0) { /* end of file... */ }

The manual for read[4] explains that reading 0 bytes indicates the end of file, and a negative number indicates an error such as the input file descriptor already being closed. If you think this sounds weird, I would agree: how do we tell the difference between a stream which is stalled and one which has ended? Does the calling process yield control until input arrives? How do we know if more than bufsize bytes are available? If that word salad excites you, read more about O_NONBLOCK[5] and unix pipes[6].

But here we'll focus on how processes affect each other through pipes. Surprising answer: not very much! Try opening a "cat" in the terminal and then type <control>-d to "send" an end-of-file. Oh no, you killed it! You didn't actually send anything; the <control>-d is interpreted by the terminal's line discipline, which makes the pending read in cat return zero bytes, and cat treats that as end-of-file. This is similar to how <control>-c does not send a character but is interpreted by the terminal and delivered as an interrupt signal to the foreground process, completely independently of the data pipe. My entry point to learning more is this stty webzine[7] by Julia Evans. Go ahead, try it: stty -a

Any special behavior at the other end of a pipe is the result of intentional programming decisions, and "end of file" (EOF) is more a convention than a real thing. Now try opening "watch ls" or "sleep 60" and try <control>-d all you want—no effect. You did close its stdin, but nobody cares because it wasn't listening anyway.

Back to the problem at hand: as it turns out, "rsync" is in this latter category of programs which see themselves as daemons and should continue even when input is closed. This makes sense enough, since rsync expects no user input and its output is just a side effect of its main purpose.

BEAM assumes the connected process behaves like the former, pipe-like programs: nothing needs to be done to clean up a dangling external process because it is expected to end itself as soon as the Port is closed or the BEAM exits. If the external process is known not to behave this way, the recommendation is to wrap it in a shell script which converts a closed stdin into a kill signal.[8]
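The wrapper in question can be sketched in a few lines of portable shell (this is my own illustration of the recommendation, not code shipped with OTP):

```shell
#!/bin/sh
# Hypothetical wrapper: start the real program in the background, then
# block until our own stdin reaches end-of-file.  When the BEAM closes
# the port (or halts), stdin closes and we TERM-inate the child.
"$@" &
child=$!
cat > /dev/null          # returns only once stdin is closed
kill -TERM "$child" 2> /dev/null
wait "$child"
```

The trick is that cat only returns once the BEAM end of the pipe closes, whether via Port.close or because the VM halted; at that moment the child receives a TERM it can still handle gracefully.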

BEAM internal and external processes

BEAM applications are built out of supervision trees and excel at managing huge numbers of parallel actor processes, all scheduled internally. The communities mostly share a philosophy of running as much as possible inside the VM, because doing so builds on this strength and simplifies away much interface glue and context switching, but on many occasions an application will still start an external OS process. There are some straightforward ways to simply run a command line, which might be familiar to programmers coming from another language: os:cmd takes a string and runs the thing. At a lower level, external programs are managed through a Port, a flexible abstraction which allows a backend driver to communicate data in and out, and to send some control signals such as reporting an external process's exit and exit status.
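The two levels look like this from Elixir (echo used as a stand-in command):

```elixir
# High level: hand a whole command line to the shell, get a charlist back.
:os.cmd(~c"echo hello")
# => ~c"hello\n"

# Mid level, no shell involved: arguments passed as a list, exit status returned.
System.cmd("echo", ["hello"])
# => {"hello\n", 0}
```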

When it comes to internal processes, BEAM is among the most mature and robust, achieved by good isolation and by its hierarchical supervisors liberally pruning entire subprocess trees at the first sign of going out of specification. But for external processes, results are mixed. Some programs are twitchy and crash easily, for example cat, but others like the BEAM itself or a long-running server are built to survive any ordinary I/O glitch or accidental mashing of the keyboard. Furthermore, this will usually be a fundamental assumption of that program and there will be no configuration to make the program behave differently depending on stimulus.

Reliable clean up

What I discovered is that the BEAM external process library assumes that its spawned processes will respond to standard input and output shutting down, the so-called end of file, for example what happens when <control>-d is typed into the shell. This works very well for a subprocess like bash but has no effect on a program like sleep or rsync.

The hole created by this mismatch is interestingly solved by something shaped like the BEAM's supervisor itself. I would expect the VM to spawn many processes as necessary, but I wouldn't expect the child process to outlive the VM, just because it happens to be insensitive to end of file. Instead, I was hoping that the VM would try harder to kill these processes as the Port is closed, or if the VM halts.

In fact, letting a child process outlive the one that spawned it is unusual enough that the condition is called an "orphan process". The POSIX standard recommends that when this happens the process should be adopted by the top-level system process "init" if it exists, but this is a "should have" and not a must. The reason it can be undesirable to allow this to happen at all is that the orphan process becomes entirely responsible for itself, potentially running forever without any more intervention according to the purpose of the process. Even the system init process tracks its children, and can restart them in response to service commands. Init will know nothing about its adopted, orphan processes.

When I ran into this issue, I found the suggested workaround of writing a wrapper script to track its child (the program originally intended to run), listen for the end of file from BEAM, and kill the external program. How much simpler it would be if this workaround were already built into the Erlang Port module!

It's always a pleasure to ask questions in the BEAM communities; they have earned a reputation for being friendly and open. The first big tip was to look at the third-party library erlexec, which demonstrates some best practices that might be backported into the language itself. Everyone speaking on the problem has generally agreed that the fragile cleanup of external processes is a bug, and supported the idea that one of the "terminate" signals should be sent to spawned programs.

Which signal to use is still an open issue: there's the softer HUP, which says "Goodbye!" and which the program is free to interpret as it will; the mid-level TERM, which I prefer because it makes the intention explicit but can still be blocked or handled gracefully if needed; and KILL, which is bursting with destructive potential. The world of unix signals is a wild and scary place, on which there's a refreshing diversity of opinion around the Internet.
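Until then, one manual escape hatch is to ask the Port for the operating-system pid of the external program and signal it ourselves. A sketch, assuming port is an open Port and a plain kill binary is on the PATH:

```elixir
# Look up the OS-level pid of the spawned program.
{:os_pid, os_pid} = Port.info(port, :os_pid)

# Send the explicit, still-trappable TERM signal.
System.cmd("kill", ["-TERM", Integer.to_string(os_pid)])

# Finally close our side of the pipe.
Port.close(port)
```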

Inside the BEAM

Despite its retro-futuristic appearance as one of the most time-tested yet forward-facing programming environments, I was brought back to Earth by digging around inside the VM: it's just a C program like any other. There's nothing holy about the BEAM emulator. There are some good and some great ideas about functional languages, and they're buried in a mass of ancient procedural ifdefs, with unnerving memory management and typedefs wrapping the size of an integer on various platforms, just like you might find in other relics from the dark ages of computing, next to the Firefox or Linux kernel source code.

Tantalizingly, message-passing is at the core of the VM, but it is not a first-class concept when reaching out to external processes. There's some fancy footwork with pipes and dup, but communication is done with enums, unions, and bit-rattling stdlib. I love it, but... it might be something to look at on another rainy day.