_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2020-05-20 file transfer pseudoprotocol (PDF)

Something I have been thinking about lately is FTP.

FTP is interesting in that it is one of the remnants of an earlier day of internet protocol design. You can tell this, right off the bat, because it has a low, odd port number. Odd-numbered ports in the low end are, broadly speaking, direct carry-overs from a pre-TCP protocol called NCP or Network Control Program. Much like TCP, NCP was connection-oriented. Entirely unlike TCP, NCP connections were one-way (simplex). So, a normal two-way connection between two hosts required the establishment of two connections, one for each direction of the dialog. To facilitate these two connections, protocols were each allocated two port numbers for use with NCP, one for server-to-client and the other for client-to-server. These ports were assigned in adjacent pairs and, by convention, an odd-numbered port was allocated for client-to-server and the following even-numbered port for server-to-client.

Part of what's going on here is that NCP predates the invention of the 'ephemeral port,' TCP's magical made-up numbers. Both ends of a given NCP connection make use of the same port number. So the client talks on one port and listens on the other, and the server uses the same port numbers the other way around. This might seem very similar to how we would allocate TX and RX lines or pairs on a serial cable, and it is - the NCP protocol design seems a little bizarre to us, given what we know today, but I suspect that at the time it was in fact the obvious and elegant approach; it was structured around the metaphor of existing point-to-point cables.

When NCP gave way to TCP, the same port numbers were retained (TCP was more or less a direct enhancement of NCP). But since TCP was duplex the even-numbered 'reverse connection' ports were no longer required and were dropped. Under NCP, FTP used port 21 for one direction and 22 for the other. When TCP replaced NCP, port 21 was retained for FTP but port 22 was no longer needed. Years later, the now-free port 22 was allocated to the up-and-coming SSH protocol. The morals of this story are that, first, if a protocol uses a two-digit odd-numbered port, it is older than I am[1], and second, networking largely happens by accident rather than by design.

As you no doubt suspect, I will be further exploring the latter.

When I said that FTP was historically allocated ports 21 and 22, I lied. In actuality, FTP was historically allocated ports 20, 21, and 22. So let's think this out from first principles. NCP even-numbered ports were usually referred to as "receive" ports, although I prefer to say server-to-client. So FTP involves not one, but two server-to-client ports, and one client-to-server.

If you have ever issued the PASV command, that probably clicks for you, and not in a good way.

Ports 21 and 22 historically, and port 21 today, are better referred to as the FTP control ports. Port 20 is the FTP data port. In the design of FTP, control of the session and transmission of actual data are strictly separate matters handled as separate line protocols on separate connections. But, in order to minimize client-side business logic (keep in mind that this was developed in the mainframe days, when the client was potentially a very thin device, 'thin' in the sense that you would call a person 'thick'), data connection initiation was made a server responsibility.

Roughly speaking, here's how it works. Your client connects to the server and starts issuing commands. The server responds to those commands over the same connection (assuming we're in the TCP era where we can talk both ways on one connection). But, when it comes to actually transferring data, which in the case of FTP includes directory listings, the server connects back to the client using the data connection port and starts sending data[2].

If you turn your head to the right angle and squint, this is an elegant design. There is a bidirectional control channel used to exchange commands, and then any actual payload is conveyed over a new connection created just for that purpose. The directionality of that connection (or, more importantly to TCP, who initiates it) depends on which way the payload is moving, not some more artificial requirement of the protocol design.
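For the curious, the way the server learns where to connect back is the PORT command: RFC 959 has the client send its address as six decimal bytes, four for the IP and two for the 16-bit port. Here's a minimal sketch of that encoding in Python - the function names are mine, not from any FTP library:

```python
def encode_port_arg(ip: str, port: int) -> str:
    """Build the PORT command for a client's listening address.

    The argument is h1,h2,h3,h4,p1,p2: the four octets of the IP
    address, then the port split into high and low bytes.
    """
    octets = ip.split(".")
    return "PORT {},{},{},{},{},{}".format(*octets, port // 256, port % 256)

def decode_port_arg(arg: str) -> tuple:
    """Parse the six-byte h1,h2,h3,h4,p1,p2 argument back into (ip, port)."""
    nums = [int(n) for n in arg.split(",")]
    return ".".join(str(n) for n in nums[:4]), nums[4] * 256 + nums[5]
```

So a client listening on, say, 192.168.1.10 port 5000 would send PORT 192,168,1,10,19,136, and the server would connect to that address (traditionally from source port 20) to deliver the data.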

Of course, all of this crashes and burns the moment a real modern network gets involved. Today, a large number of client computers are behind NAT, a firewall, or both (and from the outside they don't look that different anyway). When the server tries to connect back to the client to send data, it encounters a rejection or more often complete silence in reply. On the modern internet, the concept of the server initiating a connection to the client is fundamentally untenable. Sometimes by intention and sometimes by mistake, most clients do not have the ability to listen on the internet even if they wanted to.

The FTP protocol had to be modified to account for this scenario. FTP, which was already divided into two modes (binary and ASCII for the data connection), was further split into two more modes, called active and passive. The old behavior, which we have discussed, came to be called active mode and remains the default. However, when clients fail to receive a connection back from the server (or, for some clients, at the very start), they issue the PASV command which asks the server to switch to passive mode. The terms active and passive here are intended from the server's perspective: in active mode the server initiates connections, in passive mode it does not, and instead waits.

Essentially, passive mode just flips the direction of the data connection from server to client, so that regardless of the direction the data is moving the client initiates the connection to the server. This is starting to sound a lot more like we expect protocols to behave today: the client initiates the connection and data moves whichever way it will.
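Concretely, the server answers PASV with a 227 reply containing the address the client should connect to, encoded as six decimal bytes (four octets of IP, then the port as high byte and low byte). A sketch of parsing that reply, assuming RFC 959's reply format (the function name is my own invention):

```python
import re

def parse_pasv_reply(reply: str) -> tuple:
    """Extract (host, port) from a reply like
    '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if m is None:
        raise ValueError("not a PASV reply: " + reply)
    n = [int(g) for g in m.groups()]
    # First four bytes are the IP; last two are the 16-bit port.
    return ".".join(str(x) for x in n[:4]), n[4] * 256 + n[5]
```

The client then opens the data connection to that host and port itself, which is exactly the outbound connection that NAT and firewalls are happy to allow.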

This says something big about the internet and the design of protocols: historically, back in the before times, it was assumed that any host could initiate a connection to any other host, and the direction in which connections were initiated often reflected the logical flow of data (call this business logic or payload) rather than the needs of the network. Today, it is generally assumed that clients ("edge" or "end-user" systems, meaning desktops, laptops, phones, internet-connected air purifiers, etc) can initiate connections to servers (which as we know live in vertical tenements called racks and are connected directly to the internet). The other direction? Who knows, but probably not.

I cannot succinctly convey how interesting I find this. A fairly fundamental architectural element of the internet has undergone a radical shift since many of the protocols we use today were designed. This change rears its head in the form of many protocols that are difficult or impossible to reliably operate across the internet, and instead require connection helpers, bent-pipe servers, and VPN tunnels to coddle them into believing they exist on the internet as it was before consumer routers and corporate firewalls. What's more, none of this is on purpose, or was even necessarily done knowingly.

The large-scale change of one of the basic premises of internet design, from a peer-to-peer system to a strictly client-server system, happened by accident.

This is not a short discussion, and I feel that I have already gone on too long about FTP. Instead of taking up even more of your time today, I will take up more of your time later. Look forward to our next discussion, where I will transition from an opinionated explanation of FTP to an opinionated explanation of the internet's foremost Faustian bargain: NAT.

[1] Another way to identify these very old protocols is if the IANA port number registry lists Jon Postel as the responsible person. Jon Postel died over 20 years ago; continuing to list him as contact for certain archaic but well-known protocols actually seems like a brilliant metaphor for the status these artifacts of networking history have attained. Dead but unable to find peace, remote job entry (5) and systat (11) will forever haunt the internet.

[2] You might be wondering why there is only port 20 for FTP data considering that NCP required one port per direction - was port 19 for FTP the other way? I'm honestly not sure; I can't find any reference to FTP ever making use of port 19. It may be that since the FTP data channel is only ever open one way at a time (no parallel FTP operations were allowed back then) it was considered okay to open connections either direction using port 20 - the data channel of FTP is half-duplex so you only need one wire.