a newsletter by J. B. Crawford

pairs not taken

So we all know about twisted-pair ethernet, huh? I get a little frustrated with a lot of histories of the topic, like the recent neil breen^w^wserial port video, because they often fail to address some obvious questions about the origin of twisted-pair network cabling. Well, I will fail to answer these as well, because the reality is that these answers have proven very difficult to track down.

For example, I have discussed before that TIA-568A and B are specified for compatibility with two different multipair wiring conventions, telephone and SYSTIMAX. And yet both standards actually originate within AT&T, so why did AT&T disagree internally on the correspondence of pair numbers to pair colors? Well, it's quite likely that some of these things just don't have satisfactory answers. Maybe the SYSTIMAX people just didn't realize there was an existing convention until they were committed. Maybe they had some specific reason to assign pairs 3 and 4 differently that didn't survive to the modern era. Who knows? At this point, the answer may be no one.

There are other oddities to which I can provide a more satisfactory answer. For example, why is it so widely said that twisted-pair ethernet was selected for compatibility with existing telephone cabling, when its most common form (10/100) is in fact not compatible with existing telephone cabling?

But before we get there, let's address one other question that the Serial Port video has left with a lot of people. Most office buildings, it is mentioned, had 25-pair wiring installed to each office. Wow, that's a lot of pairs! A telephone line, of course, uses a single pair. UTP ethernet would be designed to use two. Why 25?

The answer lies in the key telephone system. The 1A2 key telephone system, and its predecessors and successors, was an extremely common telephone system in the offices of the 1980s. Much of the existing communications wiring of the day's commercial buildings had been installed specifically for a 1A2-like system. I have previously explained that key telephone systems, for simplicity of implementation, inverted the architecture we expect from the PBX by connecting many lines to each phone, instead of many phones to each line. This is the first reason: a typical six-button key telephone, with access to five lines plus hold, needed five pairs to deliver those five lines. An eighteen-button call director would have, when fully equipped, 17 lines requiring 17 pairs. Already, you can see that we get to some pretty substantial pair counts.

On top of that, though, 1A2 telephones provided features like hold, busy line indication (a line key lighting up to indicate its status), and selective ringing. Later business telephone systems would use a digital connection to control these aspects of the phone, but the 1A2 is completely analog. It uses more pairs. There is an A-lead pair, which controls hold release. There is a lamp pair for each line button, to control the light. There is a pair to control the phone's ringer, and in some installations, another pair to control a buzzer (used to differentiate outside calls from calls on an intercom line). So, a fairly simple desk phone could require eight or more pairs.
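To make the arithmetic concrete, here is a minimal sketch of the pair budget implied by this description, assuming one talk pair and one lamp pair per line plus the A-lead and ringer pairs. The function and its exact accounting are my own illustration of the model above, not anything from a Bell System practice document:

    def pairs_needed(lines, buzzer=False):
        """Rough pair budget for a 1A2 key telephone, per the model above."""
        talk = lines        # one talk pair per line appearance
        lamps = lines       # one lamp pair per line button
        control = 2         # A-lead pair plus ringer pair
        return talk + lamps + control + (1 if buzzer else 0)

    print(pairs_needed(3))                # 8: a fairly simple three-line desk set
    print(pairs_needed(17, buzzer=True))  # 37: a fully equipped call director,
                                          # well past a single 25-pair cable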

To supply these pair counts, the industry adopted a standard for business telephone wiring: 25-pair cables terminated in Amphenol connectors. A call director could still require two cables, and two Amphenol connectors, and you can imagine how bulky this connection was. 25-pair cable was fairly expensive. These issues all motivated the development of digitally-controlled systems like the Merlin, but as businesses looked to install computer networks, 25-pair cabling remained very common.

But, there is a key difference between the unshielded twisted-pair cables used for telephones and the unshielded twisted-pair we think of today: the twist rate. We mostly interact with this property through the proxy of "cable categories," which seem to have originated with cable distributors (perhaps Anixter) but were later standardized by TIA-568.

Some of these categories are not, in fact, unshielded twisted-pair (UTP), as shielding is required to achieve the specified bandwidth. The important thing about these cable categories is that they sort of abstract away the physical details of the cable's construction, by basing the definition around a maximum usable bandwidth. At that maximum bandwidth, the cable must meet defined limits for attenuation and crosstalk.

Among the factors that determine the bandwidth capability of a cable is the twist rate, the frequency with which the two wires in a pair switch positions. The idea of twisted pair is very old, dating to the turn of the 20th century and open wire telephone leads that used "transposition brackets" to switch the order of the wires on the telephone pole. More frequent twisting provides protection against crosstalk at higher frequencies, due to the shorter spans of unbalanced wire. As carrier systems used higher frequencies on open wire telephone leads, transposition brackets became more frequent. Telephone cable is much the same, with the frequency of twists referred to as the pitch. The pitch is not actually specified by category standards; cables use whatever pitch is sufficient to meet the performance requirements. In practice, it's also typical to use slightly different pitches for different pairs in a cable, to avoid different pairs "interlocking" with each other and inviting other forms of EM coupling.

Inside telephone wiring in residential buildings is often completely unrated and may be more or less equivalent to category 1, which is a somewhat informal standard sufficient only for analog voice applications. Of course, commercial buildings were also using their twisted-pair cabling only for analog voice, but the higher number of pairs in a cable and the nature of key systems made crosstalk a more noticeable problem. As a result, category 3 was the most common cable type in 1A2-type installations of the 1980s. This is why category 3 was the first to make it into the standard, and it's why category 3 was the standard physical medium for 10BASE-T.

In common parlance, wiring originally installed for voice applications was referred to as "voice grade." This paralleled terminology used within AT&T for services like leased lines. In inside wiring applications, "voice grade" was mostly synonymous with category 3. StarLAN, the main predecessor to 10BASE-T, required a bandwidth of 12MHz... beyond the reliable capabilities of category 1 and 2, but perfectly suited for category 3.

This brings us to the second part of the twisted-pair story that is frequently elided in histories: the transition from category 3 cabling to category 5 cabling, as is required by 100BASE-TX "10/100" ethernet.

On the one hand, the explanation is simple: To achieve 100Mbps, 100BASE-TX requires a 100MHz cable, which means it requires category 5.

On the other hand, remember the whole entire thing about twisted-pair being intended to reuse existing telephone cable? Yes, the move from 10BASE-T to 100BASE-TX, and from category 3 to category 5, abandoned this advantage. The path by which this happened was not a simple one. The desire to reuse existing telephone cabling was still very much alive, and several divergent versions of twisted-pair ethernet were created for this purpose.

Ethernet comes with these kinds of odd old conventions for describing physical carriers. The first part is the speed, the second part is the signaling type (mostly obsolete as a distinction, with BASE for baseband being the only surviving example), and the next part, often after a hyphen, identifies the medium. This medium code was poorly standardized and can be a little confusing. Most probably know that 10BASE5 and 10BASE2 identify 10Mbps Ethernet over two different types of coaxial cable. Perhaps fewer know that StarLAN, over twisted pair, was initially described as 1BASE5 (it was, originally, 1Mbps). The reason for the initial "5" code for twisted pair is unclear, though it may simply have continued the coaxial convention of encoding maximum segment length in hundreds of meters; by the time Ethernet over twisted pair was accepted as part of the IEEE 802.3 standard, the medium designator had changed to "-T" for Twisted Pair: 10BASE-T.
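As a toy illustration of the pattern (speed, then signaling, then medium), a few lines of Python can pull these names apart. The parse_phy helper and its regular expression are hypothetical, for illustration only, not anything defined by the standard:

    import re

    # Toy parser for the <speed><signaling><medium> naming convention.
    PHY_NAME = re.compile(r"^(\d+)(BASE|BROAD)-?(\w+)$", re.IGNORECASE)

    def parse_phy(name):
        match = PHY_NAME.match(name)
        if not match:
            raise ValueError("not a recognizable PHY name: " + name)
        speed, signaling, medium = match.groups()
        return int(speed), signaling.upper(), medium.upper()

    for name in ("10BASE5", "10BASE2", "1BASE5", "10BASE-T", "100BASE-TX"):
        print(name, parse_phy(name))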

And yet, 100Mbps "Fast Ethernet," while often referred to as 100BASE-T, is more properly 100BASE-TX. Why? To differentiate it from the competing standard 100BASE-T4, which was 100Mbps Ethernet over category 3 twisted pair cable. There were substantial efforts to deploy Fast Ethernet without requiring the installation of new cable in existing buildings, and 100BASE-TX competed directly with both 100BASE-T4 and the oddly designated 100BaseVG. In 1995, all three of these media were set up for a three-way faceoff [1].

For our first contender, let's consider 100BASE-T4, which I'll call "T4" for short. The T4 media designator means Twisted pair, 4 pairs. Recall that, for various reasons, 10BASE-T only used two pairs (one in each direction). Doubling the number of required pairs might seem like a bit of a demand, but 10BASE-T was already routinely used with four-pair cable and 8P8C connectors, and years later Gigabit 1000BASE-T would do the same. Using these four pairs, T4 could operate over category 3 cable at up to 100 meters.

T4 used the pairs in an unusual way, directly extending the 10BASE-T pattern while compromising to achieve the high data rate over lower bandwidth cable. T4 had one pair in each direction, and two pairs that dynamically changed directions as required. Yes, this means that 100BASE-T4 was only half duplex. T4 was mostly a Broadcom project, which offered chipsets for the standard and brought 3Com on board as the principal (but not only) vendor of network hubs.
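Some back-of-envelope arithmetic shows how this pair scheme made the rate achievable. As I understand it, T4 transmitted across three pairs at once (the dedicated pair plus the two switched ones) using an 8B6T block code that maps 8 bits onto 6 ternary symbols; the figures below follow from that, and should be taken as a sketch:

    # Back-of-envelope: fitting 100Mbps into category 3 bandwidth.
    data_rate = 100e6                # total payload bit rate
    tx_pairs = 3                     # dedicated TX pair + the two switched pairs
    per_pair = data_rate / tx_pairs  # about 33.3 Mbps on each pair
    symbol_rate = per_pair * 6 / 8   # 8B6T: 8 bits become 6 ternary symbols
    print(per_pair / 1e6)            # 33.33... Mbps per pair
    print(symbol_rate / 1e6)         # 25.0 Mbaud, gentle enough for category 3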

The other category 3 contender, actually a slightly older one, was Hewlett-Packard's 100BaseVG. The "VG" media designator stood for "voice grade," indicating suitability for category 3 cables. Like T4, VG required four pairs. VG also used those pairs in an unusual way, but a more interesting one: VG switched between a full-duplex, symmetric "control mode" and a half-duplex "transmission mode" in which all four pairs were used in one direction. Coordinating these transitions required a more complex physical layer protocol, and besides, HP took the opportunity to take on the problem of collisions. In 10BASE-T networks, the use of hubs meant that multiple hosts shared a collision domain, much like with coaxial Ethernet. As network demands increased, collisions became more frequent and the need to retransmit after collisions could appreciably reduce the effective capacity of the network.

VG solved both problems at once by introducing, to Ethernet, one of the other great ideas of the local area networking industry: token-passing. The 100BaseVG physical layer incorporated a token-passing scheme in which the hub assigned tokens to nodes, both setting the network operation mode and preventing collisions. The standard even attached a simple quality of service scheme to the tokens, called demand priority, in which nodes could indicate a priority level when requesting to transmit. The token-passing system made the effective throughput of heavily loaded VG networks appreciably higher than that of other Fast Ethernet networks. Demand priority promised to make VG more suitable for real-time media applications, in which Ethernet had traditionally struggled due to its nondeterministic capacity allocation.
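In miniature, demand priority amounts to the hub granting pending high-priority requests ahead of normal ones, in order within each level. Here is a toy model of just that arbitration step, a sketch of the concept rather than the actual 802.12 protocol; grant_order is a hypothetical helper:

    from collections import deque

    def grant_order(requests):
        """requests: iterable of (node, priority), priority "high" or "normal"."""
        high = deque(n for n, p in requests if p == "high")
        normal = deque(n for n, p in requests if p == "normal")
        while high or normal:
            # service every pending high-priority request first
            yield high.popleft() if high else normal.popleft()

    requests = [("A", "normal"), ("B", "high"), ("C", "normal"), ("D", "high")]
    print(list(grant_order(requests)))  # ['B', 'D', 'A', 'C']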

Given that you have probably never heard of either of these standards, you can guess that they did not achieve widespread success. Indeed, the era of competition was quite short, and very few products were ever offered in either T4 or VG. Considering the enormous advantage of using existing category 3 cabling, that's kind of a surprise, and it undermines the whole story that twisted pair ethernet succeeded because it eliminated the need to install new cabling. Of course, it doesn't make that story wrong, exactly. Things had changed: 10BASE-T was standardized in 1990, and the three 100Mbps media were adopted in 1994-1995. Years had passed, and purpose-built computer network cabling had become more common. Besides, despite their advantages, T4 and VG were not without downsides.

To start, both were half-duplex. I don't think this was actually that big of a limitation at the time; half-duplex 100Mbps was still a huge improvement in real performance over even full-duplex 10Mbps, and the vast majority of 10BASE-T networks were hub-based and only half-duplex as well. A period document from a network equipment vendor notes this limitation of T4 but then describes full-duplex as "unneeded for workstations." That might seem like an odd claim today, but I think it was a pretty fair one in the mid-'90s.

A bigger problem was that both T4 and VG were meaningfully more complicated than TX. T4 used a big and expensive DSP chip to recover the complex symbols from the lower-grade cable. VG's token passing scheme required a more elaborate physical layer protocol implementation. Both standards were correspondingly more expensive, for adapters and network appliances alike. The cost benefit of using existing cabling was thus a little fuzzier: buyers had to trade off the cost of new cabling against the savings of using less complex, less expensive TX equipment.

For similar reasons, TX is also often said to have been more reliable than T4 or VG, although it's hard to tell if that's a bona fide advantage of TX or just a result of TX's much more widespread adoption. TX transceivers benefited from generations of improvement that T4 and VG transceivers never would.

Let's think a bit about that tradeoff between new cable and more expensive equipment. T4 and VG both operated on category 3, but they required four pairs. In buildings that had adopted 10BASE-T on existing telephone wiring, installers would most likely have punched down only two pairs (out of a larger cable) to each network jack and piece of equipment. That meant that an upgrade from 10BASE-T to 100BASE-T4, for example, still involved considerable effort by a telecom or network technician. There would often be enough spare pairs to add two more to each network device, but not always. In practice, upgrading an office building would still require the occasional new cable pull. T4 and VG's poor reputation for reliability, or more precisely their poor reputation for tolerating less-than-perfect installations, meant that even existing connections might need time-consuming troubleshooting to bring them up to full category 3 spec (while TX, by spec, requires the full 100MHz of category 5, it is fairly tolerant of underperforming cabling).

There's another consideration as well: the full-duplex nature of TX makes it a lot more appealing in the equipment room and data center environment, and for trunk connections (between hubs or switches). These network connections see much higher utilization, and often more symmetric utilization as well, so a full-duplex link, carrying 100Mbps in each direction at once, can move twice the traffic of a half-duplex one. Historically, plenty of network architectures have included the use of different media for "end-user" vs. trunk connections. Virtually all consumer and SMB internet service providers do so today. It has never really caught on in the LAN world, though, where a small staff of network technicians is expected to maintain both sides.
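The trunk arithmetic is trivial but worth spelling out (idealized figures, ignoring collisions and protocol overhead):

    # Idealized aggregate capacity of a 100Mbps trunk under symmetric load.
    half_duplex = 100e6      # both directions share one channel
    full_duplex = 2 * 100e6  # 100Mbps each way, simultaneously
    print(full_duplex / half_duplex)  # 2.0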

Put yourself in the shoes of an IT manager at a midsized business. One option is T4 or VG, with more expensive equipment and some refitting of the cable plant, and probably with TX used in some cases anyway. Another option is TX, with less expensive equipment and more refitting of the cable plant. You can see that the decision is less than obvious, and you could easily be swayed in the all-TX direction, especially considering the benefit of more standardization and fewer architectural and software differences from 10BASE-T.

That seems to be what happened. T4 and VG found little adoption, and as inertia built, the cost and vendor diversity advantage of TX only got bigger. Besides, a widespread industry shift from shared-media networks (with hubs) to switched networks (with, well, switches) followed pretty closely behind 100BASE-TX. A lot of users went straight from 10BASE-T to switched 100BASE-TX, which almost totally eliminated the benefits of VG's token-passing scheme and made the cost advantage of TX even bigger.

And that's the story, right? No, hold on, we need to talk about one other effort to improve upon 10BASE-T. Not because it's important, or influential, or anything, but because it's very weird. We need to talk about IsoEthernet and IsoNetworks.

As I noted, Ethernet is poorly suited to real-time media applications. That was true in 1990, and it's still true today, but network connections have gotten so fast that the sheer headroom available mitigates the problem. Still, there's a fundamental limitation: real-time media, like video and audio, requires a consistent amount of delivered bandwidth for the duration of playback. The Ethernet/IP network stack, for a couple of different reasons, provides only opportunistic or nondeterministic bandwidth to any given application. As a result, achieving smooth playback requires some combination of overprovisioning of the network and buffering of the media. That buffering introduces latency, which is particularly intolerable in real-time applications. You might think this problem has gone away entirely with today's very fast networks, but you can still see Twitch streamers struggling with just how bad the internet is at real-time media.

An alternative approach comes from the telephone industry, which has always had real-time media as its primary concern. The family of digital network technologies developed in the telephone industry, SONET, ISDN, what have you, provide provisioned bandwidth via virtual circuit switching. If you are going to make a telephone call at 64Kbps, the network assigns an end-to-end, deterministic 64Kbps connection. Because this bandwidth allocation is so consistent and reliable, very little or no buffering is required, allowing for much lower latency.

There are ways to address this problem in packet-switched networks, but they're far from perfect. The IP-based voice networks used by modern cellular carriers make extensive use of quality of service protocols but still fail to deliver the latency of the traditional TDM telephone network. Even with QoS, VoIP struggles to reach the reliability of ISDN. And for practical reasons, consumers are rarely able to take advantage of QoS for ubiquitous over-the-top media applications like streaming video.

What if things were different? What if, instead of networks, we had IsoNetworks? IsoEthernet proposed a new type of hybrid network that was capable of both nondeterministic packet switching and deterministic (or, in telephone industry parlance, isochronous) virtual circuit switching. They took 10BASE-T and ISDN and ziptied them together, and then they put Iso in front of the name of everything.

Here's how it works: IsoEthernet takes two pairs of category 3 cabling and runs 16.144 Mbps TDM frames over them at full duplex. This modest 60% increase in overall speed allows for a 10Mbps channel (called a P-channel by IsoEthernet) to be used to carry Ethernet frames, and the remaining 6.144Mbps to be used for 96 64-Kbps B-channels according to the traditional ISDN T2 scheme.
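The channel budget works out exactly; here's a quick check using the figures above:

    # IsoEthernet channel budget, per direction.
    total = 16.144e6                   # TDM frame rate on the two pairs
    p_channel = 10e6                   # standard Ethernet (the P-channel)
    isochronous = total - p_channel
    print(isochronous / 1e6)           # 6.144 Mbps left for circuit switching
    print(int(isochronous / 64e3))     # 96 ISDN B-channels
    print(round(total / 10e6 - 1, 2))  # 0.61: the "modest 60% increase"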

An IsoEthernet host (sadly not called an IsoHost, at least not in any documents I've seen) can use both channels simultaneously to communicate with an IsoHub. An IsoHub functions as a standard Ethernet hub for the P-channel, but directs the B-channels to a TDM switching system like a PABX. The mention of a PABX, of course, illustrates the most likely application: telephone calls over the computer.

I know that doesn't sound like that much of a win: most people just had a computer on their desk, and a phone on their desk, and despite decades of effort by the Unified Communications industry, few have felt a particular need to marry the two devices. But the 1990s saw the birth of telepresence: video conferencing. We're doing Zoom, now!

Videoconferencing over IP over 10Mbps Ethernet with multiple hosts in a collision domain was a very, very ugly thing. Media streaming very quickly caused almost worst-case collision behavior, dropping the real capacity of the medium well below 10Mbps and making even low resolution video infeasible. Telephone protocols were far better suited to videoconferencing, and so naturally, most early videoconferencing equipment operated over ISDN. I had a Tandberg videoconferencing system, for example, that dated to the mid '00s. It still provided four jacks on the back suitable for 4x T1 connections or 4 ISDN PRIs (basically just a software difference), for a total of around 6Mbps of provisioned bandwidth for silky smooth real-time video.

These were widely used in academia and large corporations. If you ever worked somewhere with a Tandberg or Cisco (Cisco bought Tandberg) curved-monitor-wall system, it was most likely running over ISDN using H.320 video and T.120 application sharing ("application sharing" referred to things like virtual whiteboards). Early computer-based videoconferencing systems like Microsoft NetMeeting were designed to use existing computer networks. They used the same protocols, but over IP, with a resulting loss in reliability and increase in latency [2].

With IsoEthernet, there was no need for this compromise. You could use IP for your non-realtime computer applications, but your softphone and videoconferencing client could use ISDN. What a beautiful vision! As you can imagine, it went nowhere. Despite IEEE acceptance as 802.9 and promotion efforts by developer National Semiconductor, IsoEthernet never got even as far as 100BASE-T4 or 100BaseVG. I can't tell you for sure that it ever had a single customer outside of evaluation environments.

[1] A similar 100Mbps-over-category 3 standard, called 100BASE-T2, also belongs to this series. I am omitting it from this article because it was standardized in 1998 after industry consolidation on 100BASE-TX, so it wasn't really part of the original competition.

[2] The more prominent WebEx has a stranger history which will probably fill a whole article here one day---but it did also use H.320.
