_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford

>>> 2020-09-14 a brief history of internet connections

There is an ongoing discussion today about the accessibility and quality of consumer internet connections, and how this contributes to economic success---the "digital divide" or the "broadband gap." In terms of quality of service for price, the United States lags behind a number of other countries, which is surprising given that much of the technology behind internet delivery (and of course the internet itself) was invented in the United States. It's not particularly surprising from the perspective of basically any aspect of the US telecommunications or cable television industry, but that's a topic for a different (more leftist) medium.

I've been thinking a lot about consumer internet service lately. At my home, in one of the US's top 50 cities, I have a nominally gigabit (in practice 500-800 Mbps) service which I pay around $120 for. This is really pretty good for the US. However, I had a pretty rocky journey getting there, including a period where I used a sketchy one-person MVNO to subscribe to AT&T LTE, with antennas on the roof and a used Cradlepoint. I eventually gave up on this and begrudgingly went back to Comcast, because said one-person MVNO appeared to manage all of their billing by hand and repeatedly forgot to extend my expiration date when I paid invoices, leading to random disconnects until they saw an email from me and turned it back on. Well, that's all beside the point as well, other than to say that an MVNO run out of a FedEx Office store beats the incumbent telco around here on speed and probably also customer service, despite the monthly prodding... and how did we get here?

I'm not going to talk about the regulatory story, or at least not yet. I want to talk a bit about the technology that got us to this point.

In the early days, telephones were all there was. Well, telegraph lines as well, sure, but later telegraph systems were essentially an application run over telephone lines as well. It is fairly common knowledge that ARPANET, for example, ran over telephone lines (if nothing else because there wasn't really anything else to run it over). This leads a lot of people to the mistaken impression that the earliest computer networks used acoustic modems that signaled each other with sound over a telephone line, like we may have used for home internet access in the '90s, but that's not actually the case. Early computer networks made use of leased lines.

The telephone network is, at least in most cases, a circuit switched network. This means that when you pick up your telephone and dial a number, the network creates a continuous circuit between your telephone and the telephone you called, and leaves that circuit in place until it is instructed to tear it down (by the phones being hung back up). Circuit switching was unsuitable for certain applications, though. For example, consider the case of a "squawk box" communications circuit used between a television studio and the network's national control room. Since television programming was already often being distributed to various cable and broadcast outlets using the telephone network (AT&T offered carriage of TV signals to broadcast and cable head-end sites as a standard service), it was obvious to use the telephone service, but these squawk box systems are not dial telephones; they're more like an intercom. For this and other specialized applications, AT&T offered a "leased line", which was a permanently set up direct connection between two locations. There was no need to dial and the "call" never hung up.

From a very early stage, leased lines were often used for digital applications, especially telegraphy, which was a fairly well established technology (using e.g. Baudot) by the time the telegraph companies started looking to pay AT&T for connectivity instead of, or in addition to, owning their own outside plant. Because of the sensitivity of these telegraph systems, such leased lines were often marked for special treatment in various telephone facilities, including the use of as few splices as possible and additional conditioning equipment (e.g. loading coils), all in order to maintain a higher quality connection than normal voice service. As you can imagine, the service was fairly expensive, but it was the obvious way to implement early computer network connections, which were viewed as essentially an enhancement of the telegram[1].

Because leased lines required no dialing and offered a greater quality of service than voice lines, the construction of early telephone modems was actually much simpler. The available bandwidth was somewhat greater than that of voice lines, there was no connection setup to worry about, and because leased lines were set up once and then well maintained, there was no need to auto-discover or probe what type of signaling the line would support, as later telephone modem standards like V.34 involved. The ARPANET Interface Message Processors, or IMPs, were basically machines which sent and received telegrams much the same way as Western Union, but with a computer attached instead of a tape printer (of course this is not entirely true, the IMPs were fairly thick devices and implemented much of the protocol aspect of communications as well).

Leased lines remained the standard in computer data transmission for decades. During the 1970s, AT&T began the conversion of much of the telephone system from analog to digital, with voice calls encoded and decoded at the respective exchange offices. Full conversion of the telephone network to digital would take many years (there were still manual exchanges in use by the Bell System nearly all the way through the '70s, and electromechanical switches were in use in North America until at least 2002), but the introduction of a digital telephone backbone allowed for the direct provision of digital services. Early digital telephony was not especially well standardized outside of the internal workings of AT&T, which often provided the equipment[2], but during this early period long-range networking started to become an important concept in computing. DECnet and IBM SNA were two standards used primarily to allow terminals to connect to mainframes over leased lines, and were extremely common. The high cost of mainframe computers meant that businesses and organizations jumped on the opportunity to own just one but still have terminals located throughout their different offices.

The concept of computer data over the telephone system really took off, though, with the late-'80s introduction of ISDN. It is hard for me to fully describe how fascinated I am by ISDN. In 21st century hindsight, ISDN was a remarkable achievement in being both amazingly forward-looking and hopelessly obsolete from the start. So what even is ISDN? This question is kind of difficult to answer, which reflects both of those qualities!

ISDN stands for Integrated Services Digital Network, and the basic idea of ISDN was to unify the entire telephone system into a digital network capable of carrying either voice or data, including multiple channels of each multiplexed over lines of various capacities (ranging from twisted-pair copper lines to individual homes to high-speed fiber-optic trunk lines). ISDN supported all kinds of elaborate use-cases, including enterprise telepresence (essentially video conferencing) comparable to the systems sold today. ISDN supported logical circuit-switching and packet-switching multiplexed together, with circuit-switched connections receiving guaranteed capacity allocations. ISDN was designed to allow you to simply plug your computer into your telephone and have a digital connection, no modem required.

ISDN was hopelessly complex, expensive to implement, and only marginally competitive from a bandwidth perspective at the time of its release.

Let's take a step, for a moment, into a magical world where ISDN was deployed as expected. This magical world actually existed and continues to exist in some places, mostly in Europe, where ISDN was generally more successful in the consumer environment. Oddly, though, I spent several years working at a government research laboratory (sure, you can figure out which one, but I won't just give it to you) with just such an environment, although it was very slowly being replaced, so for some time my desk phone was an AT&T-branded ISDN phone.

In consumer applications (or where I once worked, in each office), a BRI or Basic Rate Interface connection is provided by the telephone provider. A BRI delivers 128 kbps of useful capacity (two 64 kbps "B" bearer channels, plus a 16 kbps "D" signaling channel and framing as overhead) over a normal twisted-pair telephone line. The BRI arrives in the form of the "U" interface, which connects to the Network Termination 1 (NT1), which in practice today is a small box that you leave on the floor under your desk and kick repeatedly. The NT1 converts the U signaling to S/T signaling, which is then connected by cable to the telephone instrument, which has a terminal adapter built in. Your computer is connected by serial to the terminal adapter.
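
To put numbers on that, here's the channelization arithmetic as a quick Python sketch (these are the standard 2B+D figures; the additional framing that brings the U interface line rate up to 160 kbps is left out):

    # ISDN BRI channelization: 2B+D over a single copper pair.
    # Standard figures; "useful" capacity depends on whether you count
    # the D channel, which carries signaling rather than user data.
    B_CHANNEL_KBPS = 64   # bearer channel: one voice call or 64 kbps of data
    D_CHANNEL_KBPS = 16   # signaling channel (call setup, teardown, etc.)

    bearer_total = 2 * B_CHANNEL_KBPS              # 128 kbps of user capacity
    payload_total = bearer_total + D_CHANNEL_KBPS  # 144 kbps before framing

    print(f"user data: {bearer_total} kbps, total payload: {payload_total} kbps")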

Because ISDN aspires to unify telephone and data into one concept, ISDN data connections can be established by users much the same as a phone call. Either serial commands or the telephone interface can be used to make a "data call" to another telephone number. If the other number answers, you get a direct serial connection between the two machines at the data channel rate of 64 kbps. Of course, data connections can also be configured into equipment to be connected continuously, as desired.
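
In practice, many terminal adapters borrowed the Hayes AT command set from analog modems, so making a data call from software looked almost exactly like dialing a modem. A hypothetical session using Python and pyserial (the port name, baud rate, and number are placeholders, and the exact command set varied by terminal adapter vendor):

    import serial  # pyserial

    # Hypothetical: place an ISDN "data call" through a Hayes-compatible
    # terminal adapter on a serial port. Port and number are made up.
    ta = serial.Serial("/dev/ttyS0", 115200, timeout=5)
    ta.write(b"ATD5551234\r")       # dial another ISDN number as a data call
    response = ta.readline()
    if b"CONNECT" in response:      # e.g. "CONNECT 64000"
        # the serial link is now a transparent 64 kbps pipe to the far end
        ta.write(b"hello from the B channel\r\n")
    ta.close()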

I often refer to ISDN and other telecom-originated protocols as "very telephone," and when I say that I am referring to the fact that there is a rather different philosophy of design in the telephone industry than in the internet industry. Because the telephone industry needs to handle a great number of fixed-bandwidth (i.e. 64 kbps) telephone calls, usage is relatively predictable. Because the telephone network historically operated on analog trunks with a fixed number of lines, the telephone industry has always been a proponent of "traffic engineering," the practice of making very intentional decisions about allocation of bandwidth and routing of traffic.

So fundamentally, telephone technology is usually focused on moving exactly reserved bandwidth allocations through an intentional route, while internet technology is more focused on best-effort delivery of packets using whatever route is available. The problem, of course, is that the best-effort internet carries vastly less overhead (it wastes no capacity on idle reservations) and so is generally able to offer more bandwidth at less cost (and admittedly lower reliability) than telephone technologies. This is perhaps the simplest explanation of why telephone-derived technologies such as ISDN and SONET are no longer considered cost-effective channels for internet service: you tend to end up paying for all the bandwidth you aren't using.
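
To make the economics concrete, here is a toy comparison (the 30% activity figure and the 1.5x headroom factor are illustrative assumptions, not measurements): a circuit-switched trunk reserves a full channel per conversation whether or not bits are flowing, while a packet link can be provisioned against average demand.

    # Toy comparison: reserved circuits vs. statistical multiplexing.
    # The 30% average-activity figure is an illustrative assumption.
    CHANNEL_KBPS = 64
    SUBSCRIBERS = 24
    AVG_ACTIVITY = 0.30  # fraction of time each subscriber actually sends

    # Circuit-switched: every subscriber holds a reserved channel, always.
    circuit_capacity = SUBSCRIBERS * CHANNEL_KBPS    # 1536 kbps provisioned
    circuit_used = circuit_capacity * AVG_ACTIVITY   # ~461 kbps carrying bits

    # Packet-switched: provision for average demand plus some headroom,
    # accepting queueing or loss at peaks instead of guaranteeing capacity.
    packet_capacity = SUBSCRIBERS * CHANNEL_KBPS * AVG_ACTIVITY * 1.5

    print(f"circuit trunk: {circuit_capacity} kbps provisioned, "
          f"{circuit_used:.0f} kbps average use")
    print(f"packet link: {packet_capacity:.0f} kbps provisioned for same users")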

To expand a bit, the structure of the T1 connection (which we'll meet again shortly as the ISDN Primary Rate Interface) reflects these design decisions. A T1 connection runs at an exact line rate of 1.544 Mbps. That rate is divided into exactly 24 channels of exactly 64 kbps. The math doesn't quite add up (24 x 64 kbps is only 1.536 Mbps) because the remaining 8 kbps is reserved for framing overhead; the 64 kbps channel rate is actual payload. On the actual T1 connection, the channels essentially take turns, with each channel in order getting a timeslot in every frame. This means that each channel behaves very reliably, but of course unused channels are still occupying bandwidth. The actual signaling technology is referred to as "T-carrier," according to the Bell System convention of identifying each of their transit protocols/media (called carriers) with a letter.
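
The arithmetic falls out of the frame structure, and it's tidy enough to check in a few lines of Python (these are the standard T-carrier figures):

    # Standard T1 framing arithmetic: 24 timeslots of 8 bits each, plus one
    # framing bit, sent 8000 times per second (one frame per PCM sample).
    CHANNELS = 24
    BITS_PER_SLOT = 8
    FRAMING_BITS = 1
    FRAMES_PER_SECOND = 8000

    frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS  # 193 bits per frame
    line_rate = frame_bits * FRAMES_PER_SECOND            # 1,544,000 bps
    channel_rate = BITS_PER_SLOT * FRAMES_PER_SECOND      # 64,000 bps payload
    overhead = FRAMING_BITS * FRAMES_PER_SECOND           # 8,000 bps framing

    print(f"line rate: {line_rate} bps, per channel: {channel_rate} bps, "
          f"framing overhead: {overhead} bps")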

What a wondrous world! Data can be moved over the telephone network in a virtually native format, with a very reliable reserved bandwidth allocation. And all of this uses semantics that telephone users are familiar with. It's delightful to imagine an alternate timeline in which this technology was a huge success and the imagined convergence of telephone and data had happened, truly realizing the power that came from the innovation of viewing telephone calls as merely a particular case of the general principle of reserved-bandwidth bit streams.

This is not to say that ISDN was entirely unsuccessful. For businesses, ISDN became a fairly common data connection, through the form of the ISDN Primary Rate Interface or PRI. The PRI basically describes a higher-bandwidth ISDN connection which is intended for use as a trunk instead of for connection to a single client site. The typical PRI in the US is a T1 connection, which moves data at 1.544 Mbps, or carries 23 telephone calls (the "23B+D" configuration; the details vary slightly elsewhere in the world). Of course today 1.5 Mbps is quite a disappointment, but in the early '90s it was a pretty fast connection. That said, T1 connections in practice were mostly used as telephone trunks for PBX systems, and only sometimes to carry data. It took some time into the '90s for it to become clear that an internet connection was something that businesses needed.
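
For completeness, the same channel arithmetic across the ISDN interface family (standard figures; the E1 numbers are the European equivalent of T1):

    # ISDN interface channelization, as (bearer channels, D-channel kbps).
    interfaces = {
        "BRI (2B+D)":     (2,  16),  # consumer/desktop service
        "PRI/T1 (23B+D)": (23, 64),  # typical US trunk
        "PRI/E1 (30B+D)": (30, 64),  # typical European trunk
    }
    for name, (b_channels, d_kbps) in interfaces.items():
        payload = b_channels * 64 + d_kbps
        print(f"{name}: {b_channels * 64} kbps bearer + "
              f"{d_kbps} kbps signaling = {payload} kbps payload")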

In addition to being somewhat over-complicated for consumers and always rather expensive, ISDN service was basically obsolete by the time it gained significant currency. ADSL had largely overtaken ISDN for internet service by the late '90s, offering higher (although less reliable) speeds at a lower price. Businesses continued to prefer T1 for a while longer than expected because of its real, but more importantly perceived, advantages in reliability and stability. I'm sure there are still businesses using an ISDN PRI for their internet service today, but it is now a decidedly legacy situation. In more critical applications it has been replaced by fiber; in less critical situations, typically by DOCSIS.

In fact, let's talk about that fiber technology. There are multiple fiber optic technologies in use in the telecom industry, and basically all of them have some capability to carry data, but the one with which I am most familiar (and also a very common arrangement for business service) is the SONET loop. I've been going on for a long time already, so we'll talk about that next.

[1] One could argue, probably correctly, that the "packet" as a concept is directly derived from a telegram, being a discrete unit of information that contains header information such as to and from, control information used in the telegraph network, and a payload. There is a distinct resemblance between a Western Union telegram card and a modern network packet. Western Union even supported what we might call "source routing" with some telegrams giving forwarding instructions, and "quality of service" with some telegrams marked for special priority.

[2] The 1968 Carterfone decision allowed the use of third-party equipment directly on telephone lines, so there was no regulatory requirement that telephone data interfaces be provided by the telco. However, the Bell System is where the majority of technological development in digital telephony was being done, and considering how unhappy they were with Carterfone in the first place, they were not quite ready to start licensing or opening standards to third-party manufacturers. That said, there was still a wide variety of interesting third-party equipment; it's just that the entire computer data industry was pretty small at that point.