_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford

>>> 2021-01-12 taking this serially

Conceptually, if we want two computer systems to communicate with each other, all we need is a data link over which we can send a series of bits. This is exactly what serial interfaces do. You can probably quickly imagine a simple and effective scheme to send a series of bits in order, which is precisely why there are a thousand different standards, many of them confusingly similar to each other, and the most popular remarkably complex.

So, in this message, I will try to break down what exactly a "serial port" is. I will only be talking about the hardware standards in this case, not the software side of a COM port in Windows or a tty device in Unix derivatives. That's an interesting topic, and I hope to write about it some time, but it will be A Whole Thing. The software situation is at least as complicated as the hardware side.

With that warning in place, let's look at this strictly from a consumer perspective. Most computer users, or at least those with experience working with serial ports, are familiar with serial in the form of a DE9[1] connector, often contrasted with the parallel port, which uses a truly enormous DB25 connector in order to send eight bits at a time. These were the two primary peripheral interfaces of old computers. Actually, that's not quite true; it's more accurate to say that these two were the primary peripheral interfaces of early personal computers that managed to survive into the modern era, basically because they were adopted by various iterations of the IBM PC, which is the platonic ideal of a personal computer.

To simplify the history somewhat, the pseudo-standard parallel port was designed to make printers simpler to build. Although that advantage basically disappeared as printer technology changed, parallel ports remained conventional for printers for many years after. In practice they had few advantages over serial connectors and so basically died out with the printers that used them. We should all be glad; the time it took to send a rasterized page to a laser printer over a parallel interface was truly infuriating.

It would be surprising to run into a parallel interface in use these days, although I'm sure there are a few out there. It is quite common, though, to find serial ports today. They're remarkably durable. This is partially because they are extremely simple, and thus inexpensive to implement. It is mostly because they are predominantly used by the kind of industries that love to have weird proprietary connectors, and serial is the ultimate way to have a weird proprietary connector, because it is one of the least standardized standards I have ever seen.

Serial ports on computers are vaguely related to an EIA/TIA standard called RS-232. In fact, a number of computers and other devices label these ports RS-232, which is a bold move because they are almost universally non-compliant with that specification. There are usually several ways that they violate the standard, but the most obvious is that RS-232 specifies the use of a DB25 connector, with, you guessed it, 25 pins. "Good god, 25 pins for a simple serial connection?" you say. That's right, 25 pins. It is precisely because of that bourgeois excess of pins that personal computer manufacturers, as an industry, decided to trash the standard and use the smaller DE9 connector instead.

In order to understand RS-232 and its amusingly large connector specification better, we need to understand a bit about the history of the standard. Fortunately, that will also help a bit in understanding the baffling complexity of actually making serial interfaces work today.

RS-232 was originally introduced to connect a terminal (originally a TTY) to a modem. The specification was rather tailored to that purpose, which shows when we look at all the bonus pins. More generically, RS-232 is designed to connect "data terminal equipment" (DTE) to "data communications equipment" (DCE). As you often run into with these sorts of cabling, the TX pin of one device needs to connect to the RX pin of the other, and vice versa, so the two devices on a serial connection should have their RX and TX pins reversed compared to each other.

The terms DTE and DCE are usually used to identify these two different configurations. That said, it was sometimes necessary to, for example, connect two computers to each other, when both used the same interface. Further, some manufacturers made (and continue to make) inconsistent decisions about whether the computer or peripheral should be DCE or DTE. This necessitates using a "null modem" cable, the serial equivalent of an Ethernet crossover cable from before GbE made automatic crossover (auto MDI-X) the norm.

Trying to figure out whether you should use a null modem or straight through cable, which is usually easiest to do by experimentation, is just the first of the many fun steps in successfully using a serial device.

Conceptually, RS-232 functions in an extremely simple way. To transmit a one (a "mark"), you put a negative voltage on the TX pin. To transmit a zero (a "space"), you put a positive voltage on the TX pin. This is in reference to a ground pin, which is usually connected right to local ground.

This gets us to our second bit of serial complexity, after figuring out whether or not we need to add an extra swap in the RX and TX wires: clock recovery. In most cases (we'll get to the exceptions later maybe), there is no explicit clock information provided by a serial connection. Instead, the receiving device must heuristically recover the clock rate. Some serial devices work this out by expecting the transmitter to always send a certain preamble (common in some industrial control and other situations), but the problem is solved more generally by the convention of sending a "start bit" (a space) and a "stop bit" (a mark) before and after each byte. This guarantees at least one transition per byte and helps with detecting the timing of the whole byte.

Most devices expect one start bit and one stop bit. Some devices expect two stop bits. Some devices expect none at all (although those usually use a protocol that expects a preamble).
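
To make the framing concrete, here is a minimal sketch (plain Python, no serial hardware involved, and the function name is just for illustration) of how a byte is framed in the common eight-data-bits, no-parity, one-stop-bit configuration, the "8N1" we'll see again later. The line idles at mark (1), the start bit pulls it to space (0), the data bits follow least significant bit first, and the stop bit returns the line to mark.

    # Sketch of 8N1 framing: 1 start bit, 8 data bits (LSB first),
    # no parity, 1 stop bit. 1 is "mark" (the idle state), 0 is "space".
    def frame_byte(byte):
        bits = [0]                                   # start bit (space)
        bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
        bits += [1]                                  # stop bit (mark)
        return bits

    print(frame_byte(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]

The receiver just watches for the falling edge of the start bit and then samples the line at the nominal bit rate; that is the entire clock recovery trick for asynchronous serial.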

From this mess, you might think that the RS-232 standard does not specify any method of clock synchronization. In fact, it does. It's just not implemented on PC serial interfaces.

So I said 25 pins. Now we know what our first five are: we have ground, tx, rx, and two pins that are used for synchronization, one in each direction. That's 5. What about the other twenty?

Well, some are just for stuff that no one uses any more. There's a pin to select the data rate (rarely used and unimplemented on PCs). There's a pin for a transceiver self-test feature (rarely used and unimplemented on PCs). But most of all, there is telephone modem arcana.

Remember, RS-232 was designed fairly specifically for a terminal to communicate with its modem. Modems at the time had some fairly peculiar requirements and restrictions. An obvious one is that the modem needs to signal the terminal when it is ringing (in the sense of a phone), and there's a pin for that. There's also a pin to indicate that the DCE has established a connection with a remote DCE. There are two pins that the DTE can use to essentially conduct a handshake with the DCE in preparation for sending data, a requirement due to the half-duplex nature of modems. There are two pins for similar, but different readiness negotiation in the other direction.

Most entertaining of all, there is an entire second serial connection. That's right, the RS-232 standard specifies pins on the cable for a complete second data path with its own data pins and control pins.

Add all this up and you get somewhere near 25, but not quite, because a few pins are left unused.

When early PC manufacturers (but mostly IBM) were hunting around for a fairly general-purpose interface to connect peripherals, RS-232 looked like a pretty good solution because it was relatively simple to implement and provided a good data rate. The problem is that it was overcomplicated. So, they just took the parts they didn't like and threw them in the trash. The result is "RS-232 lite," a loosely specified standard that carried RS-232-esque signaling over a smaller DE9 connector.

Here are the pins of the DE9 connector:

  1. Data carrier detect (DCD)
  2. Rx
  3. Tx
  4. Data terminal ready (DTR)
  5. Ground
  6. Data set ready (DSR)
  7. Request to send (RTS)
  8. Clear to send (CTS)
  9. Ring indicator

This has the ground and two signal pins that we know we need, and then many, but not all, of the control pins specified by RS-232. Notably, no clock pins, meaning that properly synchronous serial standards (like several early computer network standards) cannot be carried on these RS-232 interfaces. Don't worry, this didn't stop anyone from trying to do general-purpose networking with these things.

Quick sidebar: I said positive and negative voltage earlier. The RS-232 standard is really unspecific about these and in newer revisions they can range from 3 to 25 volts, either positive or negative. As a de facto norm, most computer "RS-232-ish" interfaces use 13 volts, but there are exceptions. The standard requires that all interfaces tolerate up to the full 25 volts, but I would not trust this too far.

So what about all the control pins that PC serial interfaces retain... well, this gets us into the world of flow control. Often, when communicating with a peripheral or another computer, it is necessary to somehow signal when you are able to receive data (e.g. there's room in your buffer). This is conventionally done on serial ports by using the RTS and CTS pins to indicate readiness to receive data by the DTE and DCE respectively, which is consistent with which ends of the connection "own" those pins in the proper RS-232 specification. This is all well and good.

Except it's not, because there are a number of serial devices out there that do not implement the RTS/CTS pins for whatever reason (mostly money reasons, I assume). So, a second convention was developed, in which the two ends send bytes at each other to signal when they are ready to receive data. These bytes are referred to as XON and XOFF.

Formally, these are "hardware flow control" and "software flow control," but they are often just called RTS/CTS and XON/XOFF flow control. They are also not the only two conventions for flow control, but they are the dominant ones.
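
For the record, XON and XOFF are just the ASCII control characters DC1 (0x11) and DC3 (0x13). A rough sketch of the receiving side's software flow control logic, with made-up buffer thresholds and a hypothetical send_byte callback, looks something like this:

    XON = 0x11   # ASCII DC1: "please resume sending"
    XOFF = 0x13  # ASCII DC3: "please stop sending"

    class XonXoffReceiver:
        """Receiver-side XON/XOFF logic; the thresholds are arbitrary."""

        def __init__(self, send_byte, high_water=900, low_water=100):
            self.send_byte = send_byte  # callable that transmits one byte
            self.buffer = bytearray()
            self.high_water = high_water
            self.low_water = low_water
            self.paused = False

        def on_receive(self, byte):
            self.buffer.append(byte)
            if not self.paused and len(self.buffer) >= self.high_water:
                self.send_byte(XOFF)    # buffer nearly full, ask for a pause
                self.paused = True

        def consume(self, n):
            del self.buffer[:n]
            if self.paused and len(self.buffer) <= self.low_water:
                self.send_byte(XON)     # room again, ask to resume
                self.paused = False

Hardware flow control works the same way in spirit, except the "please stop" signal is a voltage on a dedicated pin rather than a byte mixed into the data stream, which also means it doesn't get confused by binary data that happens to contain 0x11 or 0x13.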

With flow control, of course, we wonder about speed. While RS-232 specified a scheme for speed selection, it was quickly outdated and is not implemented on the DE9-type serial interface. Instead, the transmitter simply needs to select a speed that it thinks the receiver is capable of. Moreover, because RS-232 and even the DE9-type serial interface predate 8 bits being a universal word length, the length (in bits) of a serial word is typically understood to be variable.

Finally in our tour of "Imitation RS-232 Product" frustration is the consideration of error detection. Some serial devices expect a parity bit for error detection, others do not.

So, in the world we live in, connecting two devices by serial requires that you determine the following:

  1. Physical interface type (null modem or straight through)
  2. Word length
  3. Stop bits (usually 1, 2, or none)
  4. Parity (none, even, or odd)
  5. Speed (baud rate)
  6. Flow control (hardware, software, or none)

These are often written in a sort of shorthand as "9600 baud 8N1," meaning, well, 9600 baud, 8 bit words, no parity, 1 stop bit. Not specified in this shorthand is flow control, but I feel like it's sort of traditional for random serial devices to tell you some, but not all, of the required parameters. There's often a bit of slack in these specifications anyway, as the serial transceivers in peripherals often support a number of different modes, and almost always any arbitrary speed up to their maximum.
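
In practice, all of these parameters just end up as arguments to whatever serial library you use. As a sketch, here is roughly what opening a port at 9600 baud 8N1 with hardware flow control looks like using the common pyserial library in Python; the device name is only an example of what your operating system might hand you.

    import serial  # the pyserial package

    # Open a port at 9600 baud, 8 data bits, no parity, 1 stop bit,
    # with RTS/CTS (hardware) flow control.
    port = serial.Serial(
        "/dev/ttyUSB0",                 # example device name
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        rtscts=True,                    # hardware flow control
        xonxoff=False,                  # software flow control
        timeout=1,                      # read timeout in seconds
    )

    port.write(b"hello\r\n")
    print(port.readline())

Get any one of those parameters wrong and, at best, you read nothing; at worst, you read a stream of garbage bytes.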

Of course, don't count on it, because there are plenty of oddball devices out there that either autodetect nothing or only support one or a couple of combinations of parameters. It's not unusual for older devices not to autodetect speed at all, making it necessary to set the speed on the peripheral, for example with DIP switches. I own a couple of devices like this. Also fun are devices that always start out at a low speed (e.g. 1200 baud) and then allow you to send a command telling them to switch to a higher speed.

To make things extra fun, it is not entirely uncommon for devices that don't really support serial communications at all to use the serial port. Instead, they just check the voltages put on the different control pins. For example, a common brand of adapter used to allow a computer to key up a radio transmitter (I'm not protecting the innocent here, I just can't remember for sure which one I'm thinking of; I think it's the RigBlaster) was originally just a transistor or relay or something that closed the push-to-talk circuit on the radio whenever the serial RTS pin was set high.

Software running on a computer had direct control of these control pins, so these sorts of things were pretty easy to make work[2]. Wikipedia notes that many UPSs had a similar situation where they just lit up different control pins based on their state, which is completely consistent with my opinion of APC's engineering quality.
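
Serial libraries still expose those control lines directly. As a sketch of the push-to-talk trick, here is roughly what keying a transmitter by raising RTS looks like with pyserial; the device name and the two-second timing are just examples.

    import time
    import serial

    # The baud rate hardly matters here; we never send any data,
    # we only toggle a control line.
    port = serial.Serial("/dev/ttyUSB0", 9600)

    port.rts = True   # assert RTS: the adapter closes the PTT circuit
    time.sleep(2)     # "transmit" for two seconds
    port.rts = False  # drop RTS: stop transmitting

    # The input control lines can be read the same way, e.g. a UPS
    # that signals its state by asserting pins:
    print(port.cts, port.dsr, port.ri, port.cd)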

The ring indicator pin on PCs is often (but not always) hooked up to the interrupt controller, which was super handy for input devices like mice. Later, the industry abandoned peripherals being able to trigger interrupts, then several years later rediscovered it.

The bottom line is that the "RS-232" port on your old PC is not actually RS-232, and even if it were it would still be frustrating, as the RS-232 spec is very permissive of different, incompatible applications. It's also that, way back when, people designing interconnects just threw a ton of pins in there. Serial data connection? Hey, let's use 25 pins, I'm sure we'll find uses for them all. By the time PCs were taking off, economy had seeped in and it was cut to 9 pins, but then USB cut it down to 4! But of course that didn't last, because the need was quickly found to stuff a whole lot more pins in.

To anyone who's dealt with trying to interconnect serial devices before, none of this is really new information, but it might make it a little bit clearer why the situation is so complex. I didn't even really start typing this with the idea of going into such depth on RS-232, though; instead I wanted to contrast it with some other standards in the RS serial family which appear often in industrial, embedded, robotics, and other applications. Standards that are similar to (and based on) RS-232, but different.

Although none of these are especially glorious, they serve as the underlying physical media for a huge number of common embedded and automation protocols. The takeaway is that even if you've never heard of these, you've almost certainly used them. These things get used internally in cars, in commercial HVAC, in access control systems, all over the place.

RS-422

RS-422 is very similar to RS-232, identical for most logical purposes, but makes use of differential signaling. This means two pins each for TX and RX, which for electrical engineering reasons means things like longer cable runs and better EMI rejection.

Another very interesting enhancement of RS-422 over RS-232 is that it specifies a multi-drop feature in which one transmitter can address multiple receivers. This allows for the beginning of an RS-422 "network," in which one controller can send messages to multiple devices using a shared medium (a la coaxial Ethernet). The fact that RS-422 only supports one transmitter is sometimes worked around by having the device capable of transmitting (the driver) sequentially poll all of the other devices and route messages for them. This pops up sometimes in industrial control applications with the odd consequence that there is one particular box that must be turned on or the entire thing dies. It's not always obvious which one.

For some people, it may be slightly surprising to hear that RS-232 does not support multi-drop. It doesn't, but that has in no way stopped people from devising multi-drop systems based on RS-232. I have a couple such devices. There are various ways of doing it, all of them non-standard and probably unwise, but commonly each device has an RS-232 transceiver that repeats everything it receives on one interface to the TX pin on another. This allows a "daisy-chained" RS-232 situation which is often seen in, for example, POS systems.
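
A purely software version of that repeating behavior, bridging two pyserial ports, is just a copy loop. This is only a sketch of the idea (the real devices do it in hardware), and the device names are examples:

    import serial

    # Upstream port (towards the host) and downstream port (towards the
    # next device in the daisy chain).
    upstream = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)
    downstream = serial.Serial("/dev/ttyUSB1", 9600, timeout=0.1)

    while True:
        data = upstream.read(64)
        if data:
            downstream.write(data)   # pass host traffic down the chain
        data = downstream.read(64)
        if data:
            upstream.write(data)     # and pass device replies back up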

RS-423

RS-423 is a minor variation on RS-422 that requires fewer wires. It really is pretty much that simple.

RS-485

RS-485 is very similar to RS-422, but with the significant enhancement that a modification to the underlying signaling (they rummaged in the couch cushions for another state and made it tri-state signaling) allows multi-drop that is not just multi-receiver but also multi-transmitter.

This means you can have a proper network of devices that can all send messages to each other using RS-485. This is a powerful feature, as you can imagine, and results in RS-485 being used as the underlying medium for a large number of different protocols and devices.
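
The catch is that RS-485 only gives you the shared wire; deciding who talks when, and how a receiver knows a frame is meant for it, is up to the protocol layered on top. A common pattern is addressed frames with a master polling each device in turn. The frame format below is made up purely for illustration, not any particular real protocol:

    MY_ADDRESS = 0x07

    def build_frame(address, payload):
        # Hypothetical frame: address byte, length byte, payload, checksum.
        frame = bytes([address, len(payload)]) + payload
        checksum = sum(frame) & 0xFF
        return frame + bytes([checksum])

    def handle_frame(frame):
        address, length = frame[0], frame[1]
        payload, checksum = frame[2:2 + length], frame[2 + length]
        if address != MY_ADDRESS:
            return          # on a shared bus, ignore everyone else's traffic
        if (sum(frame[:-1]) & 0xFF) != checksum:
            return          # corrupted frame, drop it
        print("got:", payload)

    handle_frame(build_frame(0x07, b"\x01\x02"))

Real protocols in this family (Modbus RTU is probably the best known) look broadly like this, just with more care put into timing, checksums, and error handling.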

Unholy Matrimony

Because these standards are so similar, it is not at all unusual to mix and match. RS-232 to RS-422 and RS-485 translators are readily available and are often used in situations where two devices use RS-232 but you want a longer cable than you can reliably get to work with RS-232's simple non-differential signaling.

Of course, one of the great things about networks is that you can put anything over them if you are willing to make enough compromises. And so naturally, RS-232 over IP is also fairly common, especially where legacy equipment has been modernized by putting it on the internet and making it a Thing.

Because the principle of RS-232 signaling is so simple, it is extremely similar to a variety of other serial communications formats. For example, the UARTs often present in microcontrollers are largely the same as RS-232 except for using different signaling voltages (usually 0 and 3.3 or 0 and 5). This means that you can generally convert your UART's output to RS-232 with some discrete components to herd electrons and some software work to support whichever of the grab bag of serial features you want to implement.

"Universal" serial bus

The DE9 serial port largely disappeared on personal computers due to the rising dominance of USB, which is electrically not really that dissimilar from RS-232 but comes with a hugely more complex protocol and set of standards. This added complexity has the huge upside that you can generally plug a USB device into your computer without having to spend a lot of time thinking about the details of the interconnect.

Fortunately, USB-C, Thunderbolt, and Lightning have come along to fix that.

Coming up, I will talk a bit (and complain a lot) about the world of interconnects that replaced the trusty fake RS-232. Some of them also have fun alphanumeric names, like Generic Brand IEEE 1394 Product.

[1] These connectors are often referred to as DB9, which is technically incorrect for reasons that are not especially interesting. For our purposes, know that DE9 and DB9 are the same thing, except DB9 is wrong.

[2] Parallel ports basically offered this same capability but on steroids. Since there were 8 Tx pins that would just have the values corresponding to whatever byte you sent to the port, they could basically be used as 8 pins of GPIO if you looked at it right. I had a board for a while with eight relays that switched 120 VAC based on the Tx pin values on a parallel port. This was a very outdated way of doing this by the time I had it, but it sure was fun.