While I have more radio topics to talk about, I think it'd be good to take a
break from the airwaves and get back to basics with computer topics. I've
mentioned before that one of the things I really enjoy is pre-IP network
protocols, from the era when the design of computer networks was still a
competitive thing with a variety of different ideas. One of the most notable
of the pre-IP protocols, as I've mentioned before, is Xerox Network Systems,
or XNS.
It is an oversimplification, but not entirely wrong, to say that XNS was
created by Bob Metcalfe, the creator of Ethernet, so that he had something to
use Ethernet for. In fact, XNS is an evolution of an earlier protocol (called
PUP but more adorably written Pup) which was designed by Metcalfe and David
Boggs for use with Ethernet as a demonstration. For reasons that are difficult
to understand now but tied to the context of the time, Xerox was not
particularly enthusiastic about Ethernet as a technology and Metcalfe found
himself fighting to gain traction for the technology, including by developing
higher-level protocols which took advantage of its capability.
This bit of history tells us two important things:
The widespread belief that IP and Ethernet are somehow designed
for each other is quite incorrect---in fact, if Ethernet "naturally" goes with
another protocol and vice versa, that stack is Ethernet and XNS.
As has been seen many times in computer history, XNS's lack of popularity
with its corporate sponsors was, ironically, a major factor in its success.
Xerox's roots in more academic research (Metcalfe and Xerox PARC) and
Xerox's lack of vigor in commercializing the technology essentially led to it
being openly published as a research paper and then Xerox not doing a whole lot
else with it (using it only for a couple of less important projects). XNS was
viewed as academic rather than commercial, and that's how it escaped.
Xerox's lack of motivation to pursue the project was not shared by the rest of
the industry. After XNS was published, a number of other software vendors, and
especially designers of Network Operating Systems, picked it up as the basis of
their work. The result is that XNS was used in a variety of different network
systems by different vendors (although not always by that name), and that it
became quite influential in the design of later protocols, since it served as
a "common denominator" among the many protocols based on it.
IP and XNS are largely contemporaries, the two having been under active
development during the same span of a few years. Each appears to incorporate
ideas from the other, in part because IP originated out of ARPANET, which was
one of the biggest network projects of the time, and the designers of XNS were
no doubt keeping an eye on it. There were also a couple of personal
relationships between designers of XNS and designers of IP, so it's likely
there were some notes exchanged. This is a powerful part of how these early
standards formed: people working in parallel and adopting similar ideas.
So let's talk about XNS. Wikipedia starts its explanation of XNS by saying that
"In comparison to the OSI model's 7 layers, XNS is a five-layer system, like
the later Internet protocol suite." The "later" here is a little odd and
depends on where exactly you set the milestones, but I like this start to the
design explanation because it emphasizes that both XNS and IP have little to do
with the OSI model.
As I like to repeat to myself under my breath on a daily basis, the widespread
use of the OSI model as a teaching device in computer networking is a mistake.
It leads students and instructors of computing alike to act as if the IP stack
is somehow defined by or even correlates to the OSI stack. This is not true.
The OSI model defines the OSI network protocols, which are an independent
network architecture that ultimately failed to gain the traction that IP did.
IP is different from the OSI stack in a number of intentional and important
ways, which makes attempts to describe the IP stack in terms of the OSI model
intrinsically foolish, and worse, confusing and misleading to students.
Anyway, given that the XNS stack has five layers (and NOT seven like OSI
adherents feel the need to tell you), what are those layers?
Layer 0: Physical (not defined by XNS, generally Ethernet)
Layer 1: Internal Transport
Layer 2: Interprocess Communications
Layer 3: Resource Control
Layer 4: Application (not defined by XNS)
Layer 1 of XNS is the Internet Datagram Protocol, or IDP. If this sounds kind
of similar to IP, it is, and beyond just the naming. There are some important
differences though, which are illuminating when we look at the eccentricities
of IP.
To start with, IDP makes use of Ethernet addressing. Sparing the details of
bits and offsets, IDP network addresses consist of the Ethernet (MAC) address
of the interface, a network number (specified by the router), and a socket
number. While the MAC address serves as a globally unique identifier, the
network number is useful for routing (so that routers need not know the
addresses of every host in every network). The socket number identifies
services within a given host, replacing the ports that we use in the IP stack.
That difference is particularly interesting to highlight: IP chooses to
identify only the host, leaving identification of specific services or sockets
to higher-level network protocols like TCP. In contrast, XNS
identifies individual sockets within IDP. As usual it's hard to say
that either method is "better" or "worse," but the decision IP made certainly
leads to some odd situations with regards to protocols like ICMP that do not
provide socket-level addressing.
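To make the comparison concrete, here's a minimal sketch (in Python, with
made-up example values) of packing and unpacking an IDP address as described
above: a 32-bit network number, the 48-bit Ethernet host address, and a
16-bit socket number, twelve bytes in all.

    import struct

    def pack_idp_address(network: int, mac: bytes, socket: int) -> bytes:
        """Pack an XNS/IDP address: 32-bit network number, 48-bit host
        (the Ethernet MAC address), 16-bit socket number."""
        assert len(mac) == 6
        return struct.pack("!I6sH", network, mac, socket)

    def unpack_idp_address(addr: bytes):
        return struct.unpack("!I6sH", addr)  # (network, host, socket)

    # A hypothetical host on network 0x42, socket 5:
    addr = pack_idp_address(0x42, bytes.fromhex("02608c010203"), 5)
    print(addr.hex())  # 0000004202608c0102030005

Note how the host portion is just the MAC address verbatim---no ARP-style
resolution step required.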
Another interesting difference is that, while IDP allows for checksums, it does
not require them. This is an allowance for the fact that Ethernet provides
checksums, making bit-errors on Ethernet networks exceedingly rare. In
contrast, IP requires a checksum (but curiously only over the header), which is
effectively wasted computation on media like Ethernet that already provide an
integrity check.
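Since I brought it up: the IP header checksum is simple enough to show in
full. It's the ones' complement sum of the header taken as 16-bit words (RFC
1071), computed with the checksum field zeroed. A minimal sketch:

    def ip_header_checksum(header: bytes) -> int:
        """RFC 1071 ones' complement checksum over the IP header only
        (the checksum field itself must be zeroed beforehand)."""
        if len(header) % 2:
            header += b"\x00"  # pad to an even length
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
        return ~total & 0xFFFF

On an Ethernet network the frame check sequence has already covered these
same bytes, which is exactly the redundancy at issue.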
To bring my grousing about IP full circle, these differences reflect two
things: First, IP was designed with no awareness of the addressing scheme that
is now virtually always used at a lower layer. Second, IP has a redundant
integrity scheme. Both are simply results of IP having not been designed for a
lower layer that provides these, while XNS was.
At the next layer, the interprocess communications layer, XNS provides us with
options that will once again look fairly familiar. Sequenced Packet Protocol
(SPP) provides reliable delivery, while Packet Exchange Protocol (PEP) provides
unreliable delivery. The design of these protocols is largely similar to TCP
and UDP, respectively, but of course with the notable difference that there is
no concept of port numbers since that differentiation is already provided by
IDP.
As more of a special case, there is the XNS error protocol, which is used to
deliver certain low-level network information in a way analogous to (but
simpler than) ICMP. The error protocol enjoys the advantage, compared to ICMP,
of being easily correlated to and delivered to specific sockets, since it has
the socket number information from IDP. This means that, for example, an XNS
implementation of "ping" on Linux would not require root (or rather raw socket)
privileges.
The resource control layer in XNS is somewhat ill-defined, but was implemented
for example by Novell as essentially a service-discovery scheme filling a
similar role to UPnP, mDNS, etc. today. Resource control was not necessary for
the operation of an XNS network, but was useful for autoconfiguration scenarios
and implemented that way by many vendors. We can thus question whether or not
resource control really counts as a "layer" since it was not, in practice,
generally used to encapsulate the next layer, but everyone who teaches with the
OSI model is guilty of far greater sins, so I will let that slide. Sometimes it
is useful to view a protocol as occupying a "lower layer" even if it does not
encapsulate traffic, if it fulfills a utility function used for connection
setup. I am basically making excuses for ARP, here.
Application protocols are largely out of scope, but it is worth noting that
Xerox did design application layer protocols over XNS, which consisted
primarily of remote procedure call. This makes sense, as RPC was a very popular
concept in networking at the time, likely because it was closely analogous to
how terminals interacted with mainframes. Nowadays, of course, RPC tends to
make everyone slightly nauseous. Instead we have REST, which is analogous to
how something, uh, er, nevermind.
XNS is now largely forgotten, as all of the systems that implemented it failed
to compete with IP's ARPANET-born takeover. That said, it does have one curious
legacy still with us today. Routing Information Protocol (RIP), commonly used
as a "lowest common denominator" interior gateway protocol, was apparently
originally designed as part of XNS and later ported to IP.
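RIP is a distance-vector protocol, and its core survives essentially
unchanged from the XNS original: each router periodically advertises its
distance to every network it knows about, and neighbors fold those
advertisements into their own tables. A sketch of that update rule, ignoring
timeouts, split horizon, and the 15-hop limit:

    def update_routes(table: dict, neighbor: str, advertised: dict) -> bool:
        """Merge a neighbor's advertised distances into our table.
        table maps destination -> (metric, next_hop); advertised maps
        destination -> the neighbor's own metric. Returns True if
        anything changed (RIP would then announce an update). A real
        implementation must also accept *worse* metrics when they come
        from a route's current next hop."""
        changed = False
        for dest, metric in advertised.items():
            candidate = metric + 1  # one more hop, through this neighbor
            if dest not in table or candidate < table[dest][0]:
                table[dest] = (candidate, neighbor)
                changed = True
        return changed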
I promised that I would say a bit about mobile data terminals, and now here we
are. This is an interesting topic to me for two reasons: first, it involves
weird old digital radio and network protocols. Second, MDTs have a weird
intersection with both paging and cellular data, such that I would present them
as being a "middle step" in the evolution from early mobile telephony (e.g.
IMTS car phones) to our modern concept of cellular networks as being data-first
(particularly VoLTE where voice is "over the top" of the data network).
To start with, what is a mobile data terminal (MDT)? An MDT is a device
installed in a vehicle and used by a field worker to interact with central
information services. Perhaps the best-known users of MDTs are police agencies,
which typically use MDTs to allow officers in their vehicles to retrieve motor
vehicle and law enforcement records, and sometimes also to write citations and
reports in an online manner (meaning that they are filed in a computer system
immediately, rather than at the end of the shift).
MDTs are not restricted to law enforcement, though. MDTs are also commonly used
by utility companies such as gas and electric, where GIS features are
particularly important to allow service technicians to view system diagrams and
maps. They are also commonly used by public transit agencies, taxis, and other
transportation companies, although these tend to be somewhat more specialized
devices with more limited capabilities---for example, a common MDT in public
transit scenarios is a device which reports position to dispatch, displays the
route schedule to the driver, and allows the driver to send a small number of
preset messages (e.g. "off schedule") to dispatch and see the response.
I'm more interested in the more "general purpose" MDTs which may, but do not
necessarily, run a desktop operating system such as Windows. Today, MDT
typically refers to a Toughbook or similar laptop computer which is equipped
with an LTE modem (sometimes external) and can be locked into a dock which is
hard mounted to the vehicle. Since most modern MDTs are just laptops, they can
typically also be removed from the vehicle and used in a portable fashion, but
that's a fairly new development.
There is also some slight terminology confusion to address before I get into
the backstory: the term "mobile data computer" or MDC is essentially synonymous
with MDT, and you may see it used instead in some cases. Handheld devices, on
the other hand, are largely a Whole Different Thing.
MDTs were, for the most part, invented by Motorola. Early MDTs had vacuum
fluorescent character displays, although they fairly quickly progressed to
CRTs. The classic Motorola MDT has a full keyboard, but is also equipped with a
number of "preset" buttons which send a given message to dispatch with a single
press. Early MDTs ran special-purpose operating systems which were presumably
very simple, and applications for them were largely custom-developed by
Motorola or an integrator.
So how did these things actually communicate? MDTs were a fairly common tool of
various municipal and utility agencies by the end of the 1970s, well before any
kind of cellular data network. Indeed, they may be the first instance of a
wide-area radio data network with more flexible capabilities than paging
systems, and in many ways they worked with infrastructure that was ahead of its
time---and also excessively expensive.
Various MDT data protocols have come and gone, but perhaps the earliest to be
significantly capable and widespread is a Motorola system called MDC-4800
(Motorola tended to prefer the term MDC), introduced in 1980. The "4800" in the
name is for 4800 bits per second, and the protocol, at a low level, is
frequency-shift keying.
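FSK itself is simple to sketch: each bit selects one of two audio tones for
one bit period. Something like the below, although the mark/space frequencies
here are placeholders, not MDC-4800's actual pair:

    import math

    def fsk_modulate(bits, rate=48000, baud=4800,
                     f_mark=2400.0, f_space=1200.0):
        """Generate FSK samples: one of two tones per bit period.
        Phase is kept continuous across bit boundaries to avoid clicks."""
        samples, phase = [], 0.0
        per_bit = rate // baud  # samples per bit period
        for bit in bits:
            freq = f_mark if bit else f_space
            for _ in range(per_bit):
                phase += 2 * math.pi * freq / rate
                samples.append(math.sin(phase))
        return samples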
Typically, a Motorola MDT would be connected to a "Vehicular Radio Modem,"
although in early MDTs the VRM was not necessarily viewed as a separate
product but rather part of the system. The VRM is essentially a VHF or UHF
two-way radio which has the discriminator output connected to a packet modem.
True to this description, many Motorola VRMs were closely based on contemporary
VHF/UHF radio models.
MDC-4800 moved 256-byte packets and the protocol had support for packet
reassembly into larger messages, although the messages were still fairly
constrained in length. In many ways it is a clear ancestor to modern cellular
data systems, being a packet-based radio data system intended for general
purpose computer applications.
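The reassembly idea is the same one that IP fragmentation and everything
since have used: chop a message into packet-sized, numbered pieces and put
them back in order at the far end. A toy version (not the actual MDC-4800
framing, which I haven't seen documented in detail):

    def fragment(message: bytes, size: int = 256):
        """Split a message into (sequence, total, data) fragments."""
        pieces = [message[i:i + size] for i in range(0, len(message), size)]
        return [(seq, len(pieces), data) for seq, data in enumerate(pieces)]

    def reassemble(fragments):
        """Rebuild the message once all fragments arrive, in any order."""
        fragments = sorted(fragments)  # sort by sequence number
        assert len(fragments) == fragments[0][1], "fragments missing"
        return b"".join(data for _, _, data in fragments)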
Where MDC-4800 gets particularly interesting, though, is in its applications.
MDC-4800 was directly used by proprietary, semi-custom systems developed for
various MDT users. Much of MDC-4800's ubiquity, though, came from a
collaboration of Motorola and IBM. At the time, being the late '70s into the
early '80s, IBM was in possession of a large fleet of service technicians who
worked out of trucks, a substantial budget, and a limitless lust for solving
problems with their computers. IBM began a partnership with Motorola to
develop a futuristic computerized dispatch and communications system for their
service technicians, which would be based on Motorola MDTs.
In the course of developing a solution for IBM, Motorola developed an
integrated network system called DataTAC. DataTAC expanded on MDC-4800 to build
a multi-subscriber data network operating in the 800MHz band, and Motorola
partnered with various other organizations (mostly telcos) to establish DataTAC
as a generally available service. In the US, the DataTAC service was known as
ARDIS. ARDIS was widely used by MDT users of all stripes including municipal
governments and businesses, but it could also support pagers and in a clear
bridge to the modern era, early BlackBerry devices actually used ARDIS for
messaging and email. ARDIS continued to operate as a commercial service into
the late '90s and was upgraded to subsequent protocols to improve its speed
and reliability.
DataTAC is often recognized as a "1G" cellular technology, for example by the
Wikipedia categories. This is a bit confusing, though, as "1G" is for the most
part synonymous with AMPS, which was an analog, voice-only system. I believe
that it is only from a modern perspective that DataTAC would be put in the same
category as AMPS---the perspective that mobile telephony and data would become
a unified service, which was not nearly so obvious in the '80s or even '90s
when these were separate technologies offered by separate vendors as separate
product lines, and generally seen as having completely separate applications.
Coming full circle to my last message, it was pagers that seemed to "bridge the
gap," as relatively sophisticated pagers can and did operate on the ARDIS
network while still feeling like a "phone-ish" item.
ARDIS was later transitioned to using a protocol called RD-LAP, which was also
developed by Motorola for MDT use. RD-LAP was similar to MDC-4800 in many ways
except being faster, and so represented an evolutionary step rather than
revolutionary. However, RD-LAP stands out for having had an impressively long
lifespan, and while largely obsolete it is still seen today in various
municipal agencies that have not found the budget to modernize. RD-LAP is
capable of 19.2 kbps, which doesn't sound like much now but was impressive
for the time.
ARDIS was not alone in being an odd data service in an era largely seen as
before mobile data. ARDIS had a contemporary in Mobitex, which was developed in
Europe but also seen in the US. Mobitex was a centralized network with very
similar capabilities to ARDIS, and was particularly popular for two-way pagers.
Mobitex was also used by BlackBerry, and the fact that BlackBerries used
Mobitex and ARDIS in various models, perhaps the first wide-area radio data
protocols to exist, is a reminder of just how revolutionary the product once
was, considering RIM's total lack of relevance today.
Mobitex also saw significant use for MDTs, although in the US it was less
popular than ARDIS for this purpose. Mobitex seems to have been particularly
popular for in-the-field credit card processing, although I would not be the
least bit surprised if ARDIS credit card terminals were also made.
ARDIS and Mobitex represent an important early stage of modern cellular
networks, but also show some significant differences from the cellular networks
of today. Both systems were available as commercial services with nationwide
networks but were also often deployed on a local scale, especially by municipal
governments and, in some areas, state governments, for public safety and
municipal utility use. This remains surprisingly common today in the case of
municipal LTE (significant spectrum reserved for government use makes it
surprisingly easy for municipalities to launch private LTE networks and many do
so), but for the most part isn't something we think about any more, at least in
the business world. A large part of the popularity of MDC-4800 and RD-LAP in
particular is the fact that they could be deployed on existing business or
municipal band VHF or UHF allocations, making them fairly easy to fit into an
existing public safety or land mobile radio infrastructure.
About those VRMs, by the way: when Motorola began to offer VRM data modems as
independent products, they sported a serial interface so that they could
essentially be used as typical data modems by any arbitrary computer. This was
the transformation from MDTs as dedicated special-purpose devices to MDTs as
general-purpose computers that happen to have a data radio capability. Motorola
themselves made a series of MDTs which were really just Windows computers in
ruggedized enclosures with a touchscreen and a VRM.
Architecturally, MDT systems strongly showed their origins in the early era of
computing and the involvement of IBM in their evolution. Most of the software
used on MDTs historically and in many cases to this day really just amounted to
a textmode terminal that exchanged messages with a mainframe application, and
often with an IBM ISPF-type user interface full of function keys and a "screen
as message" metaphor[1].
MDTs and the data networks they used are an important but largely forgotten
development in mobile networks... are there others? Of course, quite a number
of them. One that I find interesting and worth pointing out is a technology
that also took an approach of merging telephony together with land mobile
radio technology: iDEN. Also developed by Motorola, iDEN was a cellular
standard ("2G"-ish) that was directly derived from trunking radio technology.
First available in '93, iDEN could carry phone calls much like AMPS (or more
like digital AMPS) but also inherited many of the features of a trunking radio
system, meaning that a group of iDEN users could have an ongoing push-to-talk
connection to a "channel" much like a two-way radio. This was particularly
popular with small businesses, which gained the convenience of two-way radio
dispatch without the cost of the equipment---it was built into their phones.
iDEN is, of course, recognized by most under the name Nextel, the carrier
which deployed a wide-scale iDEN network in the US. Nextel heavily advertised
its PTT functionality not just to businesses but also to chatty consumers.
Nextel television commercials and the distinctive Motorola roger beep are
deeply burned into my brain from many hours of childhood cable television,
and even as a kid I was pretty amazed by the PTT capability.
Nextel was of course merged with Sprint, and while the iDEN service continued
to exist for some time it was not of great interest to Sprint and was
officially terminated in 2013. This is actually rather sad to me, because
modern cellular networks are surprisingly incapable of offering the quality of
PTT service achieved by Nextel---modern PTT services are generally IP based and
suffer from significant latency and reliability problems.
So let's try to come back to the idea that I brought up in the previous post,
that commodification of technology also tends to eliminate special-purpose
technologies which ultimately reduces the utility of technology to many
use-cases. iDEN seems to me actually an exceptionally clear case of this: iDEN
solved a specific problem in a very effective way, making something akin to
two-way radio significantly more accessible to three-person plumbing companies
and teenagers alike. Ultimately, though, iDEN was not able to compete with the
far larger market share and rapidly improving data services of the GSM and
CDMA/IS-2000 family of standards, and it seems that simple market forces
destroyed it. Because the problem that iDEN solved well is actually a rather
hard problem (reliable, real-time delivery of voice with minimal or no
connection setup required, guaranteed bandwidth via traffic engineering being
the big part IP fails to deliver), it's one that has in a large way gone from
solved to unsolved in the last ten years. We have witnessed a regression of
cellular technology, which is particularly prominent to me since I've developed
an odd fascination with "Android radios," which are really just Android phones
in a handheld radio format (with a push-to-talk button) that there aren't
really many good ways to actually use.
What about the decline of Mobitex and ARDIS, though? To my knowledge Mobitex
actually remains available in the US to this day, but I believe ARDIS fizzled
out after Motorola divested the operation. I have a hard time shedding too many
tears for these services, because since they were basically just
packet-switched radio networks, modern cellular networks can outdo them on
essentially all metrics. Mobitex and ARDIS were generally more reliable than
modern cellular data, part of the reason they survived so long, but a lot of
this realistically is just due to their low data rates and low subscriber
counts. A Mobitex network weighed down by a city's worth of Instagram users
would presumably collapse just as much as LTE does at the state fair.
It is, however, notable that many of these older technologies were quite
amenable to being stood up by a government or company as a private network.
That's something that isn't especially easy for a private company to achieve
today, even if they for some reason wanted to (I want to). Most radio data protocols
that are available on an unlicensed basis or that it's reasonably easy to
obtain licenses for are either low-speed or require directional antennas and
short ranges. This is even true in the amateur radio world, although it's
fairly clear there that progress has been held back by the FCC's archaic rules
regarding data rates. I think that this essentially comes down to the strong
competition between cellular carriers meaning that any bandwidth freed for
broadband data applications ends up going for a high value at auction, not to
Joe Schmuck who put in a license application. Perhaps we're better off anyway
as shared networks (the cellular carriers) are presumably always going to be
more economical... but given the reliability and customer relationship issues
that mobile carriers often face it's not clear to me that the business world
wouldn't have more interest in private networks if they were reasonably
achievable.
Low-power unlicensed LTE does present one opportunity, and I'll let you know
when I one day give in and buy a low-power LTE base station for my roof.
[1] By "screen as message" I refer to the interface design common among IBM
and other mainframe applications, and formalized in various standards such as
ISPF, in which the interface is screen-oriented rather than
line-oriented---meaning that the mainframe paints an entire screen on the
terminal, the user "edits" the screen by filling in form fields and etc, and
the terminal sends the entire screen back and waits for a new one. I actually
find this UI paradigm to be remarkably intuitive even in textmode (it is a
direct parallel to "documents" and "forms") and regret that, largely due to
simple technical limitations, the microcomputers that took over in the '90s
mostly discarded this design in favor of a line-oriented command shell. Web
applications are (or at least were, before SPAs) based on a very similar model
which I think shows that it has some enduring value, but it's very hard to find
any significant implementation in textmode today outside of a few Newt-based
Linux tools which suffer from Newt's UX being, frankly, an absolute mess
compared to IBM's carefully planned and researched UX conventions. Besides,
most of us still have 12 function keys, might as well actually use them.
To start with: after a brief and uncomfortable brush with fame I have
acquired a whole lot more readers. As a result, I pledge to make a sincere
effort to 1) pay a little more attention when I mash my way through aspell
check on each post, 2) post more regularly, and 3) modify my Enterprise
Content Management System so that it is slightly more intelligent than cat-ing
text files in between other text files. Because it's a good idea to only make
promises you can keep, I have already done the third and hopefully text now
flows more coherently on mobile browsers and/or exceptionally narrow monitors.
I am also going to start the process of migrating the mailing list to a service
that doesn't, amusingly, repeatedly bungle its handling of plaintext messages
(HTML is somehow easier for MailChimp), but rest assured I will move over the
subscriber list when I do that. I am still recovering from years of trauma with
Gnu Mailman as a mail server admin, so mass email sending is not something I
can approach easily.
I will not, however, make the ASCII art heading render properly at narrower
than 80 characters. Some things are simply going too far.
So after the topic of odd scanner regulation around cellular phones, I wanted
to talk a little more about cellular phones. The thing is, I have decided not
to try to explain the evolution of current cellphone standards because it is
not something I have any expertise in and frankly every time I try to read
up on it I get confused. The short version is that there are two general
lineages of cellular standards that are (mostly incorrectly) referred to as
CDMA and GSM, but as of LTE the two more or less merged, differentiating CDMA
and GSM phones by what they fall back to when LTE is not available. Then 5G
happened which somehow made the situation much more complicated, and is
probably a government conspiracy to control our minds anyway.
On a serious note, one of the legacy cold war communications systems I am very
interested in is GWEN, the Ground Wave Emergency Network. GWEN was canceled
for a variety of reasons, one of which was an upswell of public opposition
founded primarily in conspiracy theories. The result is that it's hard to do much
research on GWEN today because you keep finding blogs about how cell towers are
actually just disguised GWEN sites being used to beam radiation into our homes.
GWEN itself has a slightly interesting legacy: it was canceled before it
achieved full capability but a number of sites had already been built. The more
coastal of those sites ended up being transferred to the Coast Guard which
reused them as stations for their new Differential GPS network[1], so a large
portion of DGPS sites were just disused GWEN sites. DGPS was replaced with
NDGPS which itself has recently been decommissioned, in part because the FAA's
Wide Area Augmentation System (WAAS) is generally superior and can also be used
for maritime purposes. So the GWEN sites have now died two deaths.
So we can all hopefully agree to call that tangent a segue to the topic I
really want to discuss: less-well-known terrestrial mobile communications
systems. Basically, I want to take the family tree of pagers and cellular
phones and call out a few lesser known members of the order.
Let's talk first about pagers. Pagers have largely died out, but I had the
fortune of working in a niche industry for a bit such that I carried one
around with me. Serious '80s drug dealer vibes. The basic idea of the pager
is very simple: there is a radio transmitter somewhere, and a bunch of people
carry around belt-pack pagers that beep at them when the radio transmitter
sends a message intended for them. In its late forms, it was essentially
a dedicated text-messaging infrastructure, although early pagers delivered no
payload at all (only the fact that a message existed) and later pagers, many
into the '90s, only delivered the phone number of the caller.
By the '90s, though, not only had alphanumeric pagers come about with full
text message capability, but two-way alphanumeric pagers had been introduced
that made it possible to respond. Two-way pagers basically represent a weird
mutation on the way to modern cellphones and so I won't discuss them too
much, I'm more interested in the "pager" as distinguished by being a strictly
one-way device. This is, for example, the reason that I sometimes jokingly
refer to my phone as my pager: I detest typing on it so much that I often use
it as a one-way device, responding to messages I receive later when I'm at a
computer.
Pagers have gone through a number of different technical evolutions, but most
modern pagers run a protocol called POCSAG. One of the reasons for widespread
standardization on POCSAG is that it is not uncommon for institutions to
operate their own private paging transmitters, so standardization is more or
less required to make a sale to any of these users (which today probably
represent most of the paging market). Understanding this requires commenting
a bit on another huge way that pagers are differentiated from cellular
networks.
Modern cellular phones (and really all cellular phones if you use the term
strictly) employ a "presence" or "registration" system. Essentially, as you
walk the mean streets of suburban Des Moines your cellular phone is involved
in an ongoing dialog with base stations, and central systems at your provider
continuously keep track of which base station your cellphone is in contact
with. This way, whenever you get a call or text message or push notification,
the system knows which base station it should use to transmit directly to
your phone---it knows where you are.
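In database terms, this presence system is just a mutable mapping that the
network keeps current (in GSM it lives in the Home Location Register; the
sketch below is a generic cartoon, not any real carrier's system):

    # A cartoon location register: the network-side table that
    # registration keeps up to date. All names are illustrative.
    location_register = {}

    def register(subscriber_id, base_station):
        """Called whenever a phone checks in with a new base station."""
        location_register[subscriber_id] = base_station

    def deliver(subscriber_id, message):
        """Route a call/text/notification via the last known station."""
        station = location_register.get(subscriber_id)
        if station is None:
            raise LookupError("subscriber not registered anywhere")
        transmit(station, subscriber_id, message)

    def transmit(station, subscriber_id, message):
        print(f"{station}: send to {subscriber_id}: {message}")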
Pagers, excepting some weird late-model pager-ish things, don't have any such
concept. The pager itself has no transmitter to advertise its whereabouts
(this is a large portion of why pagers remain in use today). Instead, every
page destined for a pager must be sent via every transmitter that that pager
might be near.
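Put differently, the "network" is pure broadcast and all of the addressing
lives in the receiver: a pager is programmed with an address (in POCSAG, the
RIC) and simply ignores everything else. Schematically (real POCSAG packs the
high 18 bits of the RIC into an address codeword and uses the low 3 bits to
pick a frame within the batch, so the pager can sleep most of the time; BCH
error correction and framing are omitted entirely here):

    MY_RIC = 1234567  # this pager's address, programmed at provisioning

    def receive_loop(decoded_pages):
        """A pager, schematically: watch the shared broadcast stream
        and react only to pages bearing our own address."""
        for address, message in decoded_pages:
            if address == MY_RIC:
                beep_and_display(message)

    def beep_and_display(message):
        print("BEEP BEEP:", message)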
You can imagine that this poses inherent scalability limitations on pagers.
When you purchase pager service from a commercial provider, you generally
have to specify if you want "city," "regional," or "nationwide" service. This
really just determines whether they will transmit your pages in just your
city, throughout a state or other regional area, or nationwide. Nationwide
service is surprisingly expensive considering cellphone competition. Even
then, paging transmitters tend to only be located in more urban areas and so
coverage is poor compared to cellphones.
This limitation of pagers, though, is also an advantage. The simplicity of
the total paging system (just encode messages and send them out a
transmitter, no huge technology stack involved) encourages private paging
systems. In my area, hospitals and universities operate private paging
systems, and government facilities contract theirs out, but still to local,
small-scale operations that are effectively private. They're particularly
popular with hospitals because an already-installed paging system is fairly
cheap to maintain, it's guaranteed to work throughout your building if you
put the transmitter on the roof (not something cellphones can always offer
in large, byzantine hospitals), and as long as your staff live reasonably
nearby their pagers will work at home as well[2].
So that's "what a pager is" at a bit of a technical level. More interesting
to me are some pager-adjacent devices, such as the Motorola MINITOR. MINITORs
are so popular with volunteer fire departments that you can pretty reliably
identify volunteer firefighters by the MINITOR on their belts, although the
nineteen bumper stickers and embroidered hat tend to give it away first.
So what is a MINITOR and how does it relate to a pager... this requires
getting a little bit into radio systems and the concept of a coded squelch.
Let's say that you are, example out of nowhere, a fire department. You have
a VHF or UHF FM radio system that you use to communicate between dispatch and
units. When dispatch receives an event they want to notify the units that
should respond, but they don't want to wake up the entire department. One
common way of achieving this is some manner of coded squelch. This is not the
only application of coded squelches (they're often used just as a way to
minimize false-positive squelch opens), but it's one of the most complex and
interesting.
The idea is this: instead of a given radio just opening squelch (enabling
the speaker basically) when it receives a carrier, the radio will only open
the squelch when it receives a specific tone, series of tones, data packet,
or other positive indication that that radio is supposed to open squelch. By
programming different tone sequences into different radios, the dispatcher
can now "selective call" by transmitting only tone sequences to open squelch
for the specific units they wish to contact.
There are two major coded squelch systems used in public safety (actually there
are a ton but these are the two most widely seen on analog, non-trunking FM
systems): two-tone, also called Selcall, and Motorola MDC. Two-tone is the
format supported by MINITORs and probably the more common of the two because
it has more cross-vendor support, but it's also much more primitive than MDC.
The concept of two-tone selective calling is very simple and you can probably
guess from the name: Before a voice transmission, essentially as a preamble,
the transmitter sends two tones in sequence, each for about a second. Yes, this
takes a while, especially if calling multiple units, enough that it's not done
on key-up like MDC or many other selective calling schemes. Instead, the
dispatcher's radio console usually has a dedicated button that starts sending
tones and they have to wait until it's good and ready before they talk. It's
not uncommon to hear the dispatcher say something like "wait for tones" or
"tones coming" to warn others that things will be tied up for a bit.
So how does this all relate to paging... the MINITOR and other devices like
it are basically handheld radios with the entire transmit section removed.
Instead, they are only receivers, and they are equipped with a two-tone
decoder. So if you are, say, a volunteer firefighter, you can carry a MINITOR
which continuously monitors the dispatch frequency but only opens squelch if
it receives a two-tone sequence indicating that the dispatcher intends to
activate a given group of volunteers. This is basically a paging system, but
simply "built in" as a side feature of the two-way FM radio system.
I'll also mention MDC briefly. MDC is a more sophisticated system that uses a
short FSK data packet as the selective calling preamble. This transmits quickly
enough that the radios simply send it every time the PTT is pressed. This
allows some more advanced features, for example, every time someone in the
field transmits the dispatcher's console can tell them the ID of the radio that
just transmitted. Auxiliary information in addition to addressing can also be
sent in the MDC preambles. MDC is also very popular in public safety and if
you've spent much time with a scanner you'll probably recognize the sound of
the MDC preamble. It's actually very common to mix-and-match these systems,
for example, some fire departments use MDC but also send Selcall tones when
dispatching, often specifically to trigger MINITORs.
Selective calling systems in public safety are often also used to trigger
outdoor warning systems such as sirens, which are of course one of my favorite
things. A surprising number of outdoor sirens used in tornado-prone areas, for
college campus public safety, etc. are just equipped with a radio receiver
monitoring a dispatch frequency for a specific selective call. This can
interact in amusing ways with "mixed" selective calling. I used to work on an
Air Force base with a fairly modern Federal Signal outdoor warning system.
When it played Reveille and Retreat each day it sounded fine, but when they
tested the emergency sirens one day a week you actually heard MDC and then
Selcall tones over the speakers before the siren. My assumption is that
regularly scheduled events like Reveille were played via a Federal Signal
digital system while emergency alerts went out over some force protection
dispatch frequency, and the "siren" speakers opened squelch in reaction to some
Federal Signal-specific preamble that was sent before the preambles used for
mobile radios. As another anecdote, the US military has the charming habit of
referring to all outdoor warning systems as "Giant Voice," which was the brand
name of a long-discontinued Altec Lansing system that had been very popular
with DOD. Other siren systems are triggered using telephone leased-lines, and
of course on modern systems there are options for cellular or other more
advanced data radio protocols.
There are also a number of other selective calling systems in use. Another
example I am aware of is a proposal among amateur radio groups called "long
tone zero," which suggests that persons experiencing an emergency should tune
to a nearby repeater and transmit a DTMF zero for several seconds. The idea is
that other radio amateurs who wish to be helpful but not have their ears glued
to their radios (or more likely be woken up at night) can set up a software or
hardware detector for the zero digit and essentially use it as a
selective-calling scheme, with their radio (presumably with the volume cranked
to eleven) only opening squelch upon receiving a long-tone-zero. It's a clever
idea but to my knowledge not one that is widely enough implemented to be
particularly useful. Of course selective calling is also widely used to open
the squelch on repeaters but I find that less interesting.
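Building such a detector is a nice weekend project. DTMF "0" is the tone
pair 941 Hz + 1336 Hz, and the Goertzel algorithm is the standard cheap way
to measure energy at one frequency. A sketch of the detection core (the
threshold is arbitrary and would need tuning against real audio, and a real
detector would also require the tones to persist for several seconds):

    import math

    def goertzel_power(samples, freq, rate):
        """Goertzel algorithm: relative power at one target frequency."""
        n = len(samples)
        k = round(n * freq / rate)
        coeff = 2 * math.cos(2 * math.pi * k / n)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    def is_dtmf_zero(samples, rate=8000, threshold=1e6):
        """DTMF "0" is 941 Hz plus 1336 Hz sounding simultaneously."""
        return (goertzel_power(samples, 941, rate) > threshold and
                goertzel_power(samples, 1336, rate) > threshold)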
A similar scheme that is oddly well-known to the public ear is employed by the
Emergency Alert System and NOAA All-Hazards Radio. Those "emergency weather
radio" receivers you buy at the store from the likes of Midland monitor an
All-Hazards Radio frequency but only open squelch when they receive a preamble
indicating that there is an emergency notification. Historically this was based
on a simple dual-tone scheme (the tones that are now used as the emergency
alert ringtone on most cellphones), but nowadays a digital scheme is used that
allows the radio to know the type of alert and area it applies to. This is
actually how EAS messaging is triggered on many television and radio stations
as well. I will devote a whole post some time to the history of the Emergency
Alert System in its various outdated and modern versions, because it's really
pretty interesting---and frankly I am amazed that incidents of unauthorized
triggering of EAS are not more common, as the measures in place to prevent it
are not particularly sophisticated.
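That digital scheme is SAME, Specific Area Message Encoding, and it is
charmingly low-tech: an AFSK burst carrying a plain ASCII header naming the
originator, event type, affected FIPS area codes, and timing. Roughly like
this, with made-up field values, and going from memory on the exact layout:

    def same_header(originator="WXR", event="TOR", fips=("035001",),
                    purge="0100", issue="1231800", station="KABQ/NWS "):
        """Assemble a SAME-style header string: originator, event code,
        FIPS area codes, purge time (+HHMM), issue time (Julian day and
        HHMM), and station ID. All values here are illustrative."""
        return (f"ZCZC-{originator}-{event}-{'-'.join(fips)}"
                f"+{purge}-{issue}-{station}-")

    print(same_header())
    # ZCZC-WXR-TOR-035001+0100-1231800-KABQ/NWS -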
So that's one pager-adjacent thing. Let's talk a bit about a different pager
adjacent thing, and one that I know less about because it's more proprietary
and less frequently heard in the modern world: taxi and freight dispatch.
Several manufacturers used to build taxi dispatch systems that allowed for
individually addressed text messages to specific receivers installed in cabs.
This is basically a pager system but using a larger display, and the systems
were almost always two-way and allowed the cab driver to at least send a
response that they were on the way. The system in use and its technical details
tended to vary by area and it's hard for me to say too much in general about
them, other than that they have been wholly replaced today by cellular systems.
A system similar to taxi dispatch systems is ubiquitous in the freight
trucking industry, but is far more standardized. Qualcomm Omnitracs is an
integrated hardware and service product that places a small computer in the cab
of a truck which both reports telemetry and exchanges text messages between the
driver and dispatcher. The system has been bought and sold (I don't think
Qualcomm even owns it any more) and has been moved from technology to
technology over years, but for most of its lifespan it has relied on a
proprietary satellite network. This gives it the advantage of being more
reliable in between urban areas than cellular, although the fact that the
system is now available in a cellular variant shows that this advantage is
getting slimmer. It's also the reason that a great many semi tractors
feature a big goofy radome, usually mounted behind the roof fairing. You just
don't see that kind of antenna on vehicles very often. Like most satellite
communications networks, Omnitracs relies on the messages being small and
infrequent (very low duty cycle) to make the service affordable to operate.
What I particularly like about the Omnitracs system (which seems to be widely
referred to by truckers as Qualcomm regardless of who owns it now) is that the
long near-monopoly it enjoyed, and probably its relationship to a big
engineering operation in Qualcomm, led to some very high quality hardware
design compared to what we expect from communications devices today. The system
was always designed to be usable on the road, and featured a dash-mounted
remote control and speech synthesizer and recognition (to hear and reply to
messages) long before these became highly usable on cellphones. The system also
integrates secondary features like engine performance management and even
guided pre-trip checklists. It's an example of what can be achieved if you
really put hardware and software engineering expertise into solving a specific
problem, which has become uncommon now that the software industry has realized
it is cheaper (at the expense of user experience, productivity, etc) to solve
all problems by taping iPads to things. And that's the direction that freight
dispatch is increasingly going today, "integrated products" that consist of a
low-end Android tablet in a dashboard mount running some barely stable app that
is mostly just a WebView. And taxi dispatch barely even exists now because
Silicon Valley replaced the entire taxi with an iPhone app, which if you
think about it is kind of amazing and also depressing.
These two topics get very close to the world of mobile data terminals, and
that's what I'll talk about next. MDTs are car-mounted computers often used by
first responders and utility crews, and while nowadays "MDT" usually just means
a Panasonic Toughbook with an LTE modem (maybe for a municipal LTE network),
historically it referred to much more interesting systems that paired a
Panasonic Toughbook with a VHF/UHF data modem that relied on some eccentric
protocols and software stacks. One thing has never changed: Panasonic Toughbooks
are way overpriced, even on the government surplus market, which is why I still
don't have one to take apart.
So I'll talk a bit about MDTs and the protocols they use next, since in many
ways they're more the ancestors of our modern smartphones than actual phones.
So, is there any big conclusion we can draw from looking at these largely
"pre-cellular" (but still present today) wireless systems? I don't know. On the
one hand, in some ways these confirm one of my theses that increasing
commodification of software and hardware tends to make technology solutions
less fit for purpose rather than more. That is, technology devices today
are better only in certain ways, and worse in others: increasing
abstraction, complexity, and unification of design tends to
eliminate features which are specific to a given application (everything is an
iOS and/or Android app now, and half of those are really just websites) and
increase complexity for users (what was once a truck dispatch system is now an
Android tablet with all the ways that can go wrong).
At the same time, these effects tend to drive the cost of these devices down.
So you might say that everything from semi-truck dispatch to restaurant POS (a
favorite example of mine) is now more available but less fit for purpose.
This is one of the big themes of my philosophy, and is basically what I mean
when I say "computers are bad," so I hope to explore it more in this blog
newsletter thing. So next time, let's try to look at mobile data terminals
and dispatch systems under that framework---how is it that they have become
cheaper and more available, but at the same time have gotten worse at the
purpose they're intended for? But mostly we'll talk about some old radio data
protocols, because those are what I love.
Postscript: now that I have 100+ email subscribers and probably as many as ten
people who somehow still use RSS (I don't know, I don't really have any
analytics because I'm both lazy and ethically concerned about delivering any
Javascript whatsoever[3]), I'd love feedback. What do people find most
interesting about my rambles? What do they want to hear more about? You can
always email me at me@computer.rip, or hit me up on Matrix at
@jesse:waffle.tech. If you send me an email I like enough I'll throw it in here
sometime like an old-fashioned letter to the editor. Like The Economist, if you
do not begin it "SIR -" I will edit it so that people think you did.
[1] Differential GPS is an interesting technique where a site with a known
location (e.g. by conventional survey techniques) runs a GPS receiver and then
broadcasts the error between the GPS fix and the known good location. The
nature of GPS is that error tends to be somewhat consistent over a geographical
region, e.g. due to orbital perturbations, so other GPS users in the area can
apply the reverse of the error calculated by the DGPS site and cancel out a
good portion of the systematic error. The FAA WAAS system was designed to
enable RNAV GPS approaches, basically aircraft instrument operations by GPS.
The main innovation of WAAS over DGPS/NDGPS is that the correction messages
are actually sent back to orbit to be broadcast by satellites and so are
available throughout North America.
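The arithmetic at the heart of the idea is almost embarrassingly simple.
Schematically (real DGPS corrects individual satellite pseudoranges rather
than finished position fixes, so treat this as a cartoon):

    def dgps_correct(rover_fix, base_fix, base_truth):
        """The base station knows its surveyed position, so the
        difference between its GPS fix and the truth estimates the
        shared regional error; subtract that from the rover's fix."""
        error = (base_fix[0] - base_truth[0], base_fix[1] - base_truth[1])
        return (rover_fix[0] - error[0], rover_fix[1] - error[1])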
[2] A huge downside to this is that POCSAG lacks any kind of security scheme.
I have personally found more than one hospital campus transmitting patient
name, DOB, clinical information, and occasionally SSN over an unencrypted
POCSAG system. My understanding is that, from a legal and regulatory
perspective, this is basically an accepted practice right now. Maybe congress
will pass legislation against POCSAG decoders.
[3] I am not Richard M. Stallman, although I did once spend much more time than
I am comfortable with in his presence. I'm more like Jaron Lanier, I guess. My
ethical concern about delivering Javascript is not that it removes your control
of your personal computer or whatever, but rather that it feels like the
gateway drug to pivoting to a social network and creating a Data Science
department. When it comes to modern webdev, "DARE To Say No to SPAs and
Websockets." Oh god I'm going to design a T-shirt.
To start: yes, long time no see. Well, COVID-19 has been like that. Some days I
feel accomplished if I successfully check my email. I finally managed to clear
out a backlog of an entire handful of things that needed thoughtful responses,
though, and so here I am, screaming into the void instead of at anyone in
particular.
That said, let's talk a bit about radios. It is probably unsurprising by now
that I have a long-running interest in radio and especially digital radio
communications---but people who come to radio from all kinds of different
perspectives run into one odd problem: the curious refusal of any receiver
to tune to certain frequencies in the 800-900MHz range.
A lot of people have a general knowledge that this has to do with some kind of
legal prohibition on reception of cellular phones. That's roughly correct, but
to fully explain the matter requires going into some depth on two different
topics: FCC regulation of radio devices, and the development of cellular
phones. The first sounds more boring, so let's hit that one first.
Generally speaking, most electronic products manufactured or imported into the
United States are subject to regulation by the Federal Communications
Commission. Specifically, they generally require an "Equipment Authorization"
from the FCC prior to being marketed. For purposes of this regulatory scheme,
electronic devices can be broadly divided into two categories: intentional
radiators and unintentional radiators.
An intentional radiator is something that is specifically intended to broadcast
a radio signal, like, say, a cellular phone. Intentional radiators must be
certified to comply with the specific Part of the FCC regulations relevant to
the service for which they will be used. For example, cellular phones must be
certified against Part 27, Wireless Communications Service, among others. The
exact process varies by the part and can be involved, but it generally
involves the manufacturer paying a certified test lab to perform certain tests
and complying with various other filing requirements which include placing a
label on the device which specifies its FCC approval. Device manufacturers
must file with the FCC a description of how this label will appear before they
receive approval to market the device, which is why the designs of unreleased
devices are sometimes revealed by the rough drawings in these
filings---tech journalists will watch these to get the dimensions of new
iPhones, for example.
By the way, when I say the "FCC Regulations," if you want to follow along at
home these are promulgated as 47 CFR. So Part 27, for example, refers to 47 CFR
27. The ever lovely Cornell LII has the whole thing for your entertainment:
https://www.law.cornell.edu/cfr/text/47. There's some reading for when you need
help falling asleep.
But that's all beside the point, I'm more interested in talking about
unintentional radiators, devices which are not intended to produce RF radiation
but may still do so as a result of the operation of the electronics---this is
generally called a spurious emission, which is basically any RF emitted by
accident. These devices are certified under Part 15 of the FCC regulations[1],
and so are sometimes called "Part 15 devices." Part 15 essentially limits the
type and amplitude of spurious emissions to prevent random devices causing
harmful interference due to defects in their designs.
What would we call a radio receiver, then? It is explicitly a radio device,
but is not intended to transmit anything. As a result, radio receivers are Part
15 devices. Most of Part 15 is very general and doesn't really say anything
specific about radio devices, it just limits spurious emissions and other
design standards. However, 15.121 gets a great deal more specific in
discussing "Scanning receivers.' A scanning receiver is specifically defined
earlier in the regulation as a device capable of tuning to two or more frequency
bands in the range of 30-960Mhz. This has the fun result that nothing for the
GHz range is technically a scanner, but for practical reasons this doesn't
matter too much.
So what's in 15.121? This is:
47 CFR 15.121(a): ... scanning receivers and frequency converters designed or
marketed for use with scanning receivers, shall: (1) Be incapable of operating
(tuning), or readily being altered by the user to operate, within the frequency
bands allocated to the Cellular Radiotelephone Service in part 22 of this
chapter (cellular telephone bands). ... (2) Be designed so that the tuning,
control and filtering circuitry is inaccessible. The design must be such that
any attempts to modify the equipment to receive transmissions from the Cellular
Radiotelephone Service likely will render the receiver inoperable.
The rest of paragraph (a) gives a pretty long clarification of "readily being
altered by the user," and it's amusing to think of a bunch of FCC characters
sitting around a table trying to think up every alteration that is easy.
Jumper wires and reprogramming micro-controllers are both right out.
It gets even better:
47 CFR 15.121(b): ... scanning receivers shall reject any signals from the
Cellular Radiotelephone Service frequency bands that are 38 dB or lower based
upon a 12 dB SINAD measurement, which is considered the threshold where a
signal can be clearly discerned from any interference that may be present.
So, here's this actual weird rule about scanners. Scanners are specifically
prohibited from being able to tune to any bands allocated to the Part 22
Cellular Radiotelephone Service. This raises questions, and as you can imagine
from the way I got here, I am about to spend a long time answering them.
When the FCC says "Cellular Radiotelephone Service," they aren't talking about
cell phones in general. The CRS as I'll call it refers to a very specific
cellular service, and that is AMPS.
AMPS, the Advanced Mobile Phone System, was the most common of the "1G"
cellular services in the US. Most carriers that were around when it was offered
called it "Analog" service, and indeed, AMPS was entirely analog. And, due to
an odd detail of the regulation, large cellular carriers were required to
offer AMPS service until 2008, long after AMPS phones were no longer produced.
You may have had a candy bar phone back when you would occasionally see an "A"
for analog service, but I hope not into the late 2000s.
There are a few things that we might infer from AMPS being an analog service.
One of those things is that it probably did not employ strong encryption. In
fact, AMPS employed no scrambling or enciphering of any kind. Your phone
conversations were just flapping in the wind for anyone to hear. This posed a
major practical problem for carriers in the '90s as it was discovered that it
was not particularly difficult to intercept the call setup process from an AMPS
phone and swipe its identification numbers, allowing you to basically steal
someone else's cellular service. You can imagine that this was popular with
certain criminals with a need for untraceable but convenient communications.
There was also a problem for consumers: their phone conversations could be
fairly easily overheard. There were a number of ways to do this, using any
radio scanner that covered that band for example. One particularly well-known
option was a particular model of phone, the Oki 900, that had an unusually
open design (in terms of modifiability) that led to reverse engineered and
modified firmware being developed that made eavesdropping on other people's
calls just, well, a feature it had.
The scale of this problem was fairly large, and it was fairly well known. For
example, let's turn to my favorite source of late-night reading, newspaper
archives. A lovely piece in the 30 May 1990 issue of The News and Observer,
from Raleigh NC, takes the cheesy headline "Monitoring Megahertz" and goes
into some depth on the issue.
"I've heard men call their wives and tell them they'll be home late, then call
their girl friends," quipped one electronics store owner who had "accidentally"
eavesdropped on cellular calls using a scanner. We've all fat-fingered our
ways into someone else's affairs I'm sure, pun intended. Another person said
"when you look at the fact that there are how many thousands of people out
there who know my name, my mailing address and my salary...I put cellular
eavesdropping down as being no different from that." In the face of technology,
even in 1990, people had begun to abandon their privacy.
Cellular carriers were not so happy about this, viewing it as an embarrassment
to their operation. I have heard before that cellular carriers went so far as
to lobby for banning scanners entirely, although I am not aware of much hard
evidence of this. What they did do was convince congress to stick an extra few
paragraphs onto an otherwise only tangentially related bit of legislation
called the Telephone Disclosure and Dispute Resolution Act of 1992. This has
largely to do with abusive 1-900 numbers, which is its whole own topic in
telephone regulation that I ought to take on sometime. But it also brought
along just a bit more, an extra section that was subsequently amended several
times at the behest of cellular carriers. Let's read part of it, as amended,
and with some editing for readability.
The Commission shall prescribe and make effective regulations denying equipment
authorization for any scanning receiver that is capable of---(A) receiving
transmissions in the frequencies allocated to the domestic cellular radio
telecommunications service, (B) readily being altered by the user to receive
transmissions in such frequencies, or (C) being equipped with decoders that
convert digital cellular transmissions to analog voice audio.
Well, we've made it full circle: we've seen the regulation, and we've seen the
legislation that kicked the FCC to write the regulation. But how does this
translate today? Things get a bit weird there.
You see, the FCC seems to have (sensibly) interpreted the legislation as
applying directly to the Cellular Radiotelephone Service, even though the
legislation actually uses the term "domestic cellular radio telecommunications
service," which seems about equally likely to have been (1) intended to be
more general in its applicability or (2) the result of someone drafting the
legislation having read "Cellular Radiotelephone Service" in the FCC
regulations but then forgetting exactly how it was worded.
The Cellular Radiotelephone Service was allocated the paired bands 824-849MHz
and 869-894MHz. That's it. You see, all of the digital cellular systems we use
today are considered completely different services from Cellular
Radiotelephone (usually called Wireless Communications Service, although the
details get complex). As a result, and to this day, those two slices of the
800MHz band are verboten to scanners, and nothing else.
And about those frequencies... after the requirement for AMPS service ended,
all US carriers ceased AMPS operations. The old AMPS bands remain allocated for
cellular service, and Verizon and a couple of smaller carriers use the same
frequencies for digital cellular services, which employ encryption and cannot
be intercepted by radio scanners. The prohibition on tuning scanners to these
frequencies no longer makes any sense, especially since this ban has never been
extended to the AWS, PCS, and WCS bands that are more widely used by modern
cellular phones.
My suspicion is that the fact that this regulation was mandated by congress
makes it difficult for the FCC to remove or modify, even though it no longer
makes technical sense. Unless congress finds some time for minutiae, we are
unlikely to see a change in this rule.
In general, the whole thing is sort of bizarre. Broadly speaking, it is legal
to listen in on any radio communications in the US, but cellular phones have
repeatedly gotten a special carve-out.
Repeatedly? That's right. The whole AMPS-band-and-scanners rule is the only
specific technical regulation, but the Electronic Communications Privacy Act
of 1986 had already made it illegal to intercept or listen in on cellular
calls, and it remains illegal to the present day... but there was virtually
no enforcement then, and that hasn't really changed.
And of course the whole thing has always felt like a farce. The solution to
the poor (or rather nonexistent) security design of AMPS was never going to
be legislation, but the cellular carriers and congress were damned if they
weren't going to try. In
practice, the rule swept the entire eavesdropping problem under the rug for
some years, allowing carriers to continue operating the insecure AMPS system
for far longer than they should have (...but exactly as long as the FCC
required them to).
Because listening to the modern digital cellular modes wouldn't be
particularly interesting or useful, and this rule doesn't really deter anyone
with the motivation and ability to decode those modes anyway, there are two
lasting impacts of this rather particular rule:
1) SDRs and other receivers made today must implement this particular and
peculiar restriction in order to receive US equipment authorization, which is
probably part of the reason that a lot of SDRs... don't.
2) To comply with the specifics of the regulation about rejection, many
receivers use a notch filter around 850MHz in their frontend. This means that
reception throughout the 800-900MHz range is particularly poor, a real
irritation as various public and private agencies (especially railroads) use
land-mobile radios elsewhere in the 800-900MHz range.
Basically, more than a decade after any of this made sense, we're all still
hassling with it.
[1] Part 15 is actually a lot more general and unintentional radiators are
specifically discussed under 47 CFR 15.101, but everyone just says Part 15.
Let's talk a bit about how internet is delivered to consumers today. This will be unusually applied material for this venue, and I will be basing it on a presentation I gave a while ago, so it is not entirely original. However, it is something that impacts us in a material way that few technologists are currently very familiar with: last-mile network technology.
In the telecom industry, the term "last-mile" refers generally to the last link to the consumer premises. It may be roughly a mile long, but as we will see it can be both longer and shorter. The "last mile" is particularly important because most, or depending on how you look at it, all internet service providers employ a "hybrid" network design in which the last mile delivery technology is different from the inner portion of the network. For example, in one of the most common cases, cable internet providers employ what they call a "hybrid fiber-coaxial" network or HFC. This concept of the network being a hybrid of the two technologies is important enough that cable devices these days often label the DOCSIS-side interface the HFC interface. In this type of network, fiber optic cable is used to connect to a "node," which then uses a DOCSIS (television cable) connection to a relatively small number of homes. This reduces the number of homes in a collision domain to allow greater bandwidth, along with other advantages such as the fiber lines being generally more reliable.
This leads us to an important point of discussion: fiber to the what? There has been an ongoing trend for years of technology-centric groups wanting fiber internet service. I am unconvinced that fiber service is actually nearly as important as many people believe it to be (DOCSIS 3.1 is capable of similar or better performance compared to GPON); in reality, the focus on "fiber" tends to just be a proxy for the actual demand for much higher downstream and upstream bandwidth---the delivery technology isn't really that important. The fixation on fiber has, however, provided the ISP industry an in to create uncertainty for marketing advantage by confusingly branding things as fiber. One manifestation of this is a terminology clash I call "fiber-to-the-what." These terms are increasingly used in consumer and even commercial ISP marketing and can get confusing. Here's a rough summary:
Fiber-to-the-home (FttH): fiber optic delivered to a media converter which is inside the premises of a single consumer. Generally what people mean when they say "fiber internet," and typically delivered using GPON as the technology. In most cases GPON should be considered a last-mile delivery technology and thus distinct from "fiber optics" in the sense of a telecom inside network (e.g. 10GBASE-ER), as it has many of the same disadvantages as non-fiber last-mile technologies such as DOCSIS. However, FttH virtually always means that gigabit downstream is an option, which is basically what people really want.
Fiber-to-the-premises/building (FttP/FttB): Typically applicable to multi-family housing environments, fiber optic is delivered to a central point in the structure and another technology (usually GbE) is used for delivery to individual units. Common in newer apartment buildings. The "fiber" involved may be either GPON or a "proper" full-duplex optical transit technology, for which there are numerous options.
Fiber-to-the-curb (FttC): A rare branding in the US, although cable internet using a "node+zero" architecture is basically FttC. This refers to a situation where fiber optic transport is used to connect a curbside cabinet, and then another transport technology (potentially GbE) connects a small number of homes to the cabinet.
Fiber-to-the-node (FttN): What AT&T meant when they were speciously advertising fiber internet years ago. This is the most common situation today, where a modest number of homes (up to say a couple hundred) are connected to a "node" using some other transport. The node has an optical uplink.
You will see these terms used in discussions of internet service, and hopefully this explanation is helpful. As I have been hinting, the thing I would most like to convey is that "fiber internet" is not nearly as important as many pro-broadband parties seem to think. Similar quality of service can often be offered by other transport technologies with a lower investment. The limiting factor is generally that cable companies are terrible, not that the delivery technology they employ is terrible.
All of that said, here is a general survey of the last-mile transport technologies currently in widespread use in the United States. Overseas the situation is often different but hard to generalize as it depends on the region---for example, fiber service seems to be far more common in Asia while very-high-speed DSL variants are more common in Europe, at least from what I have seen. I'm sure there are various odd enclaves of less common technologies throughout the world.
DSL
While the term "DSL" is widely used by consumers and providers, it's a bit nonspecific. There are actually several variants of DSL with meaningfully different capabilities. What matters, though, is that DSL refers to a family of technologies which transport data over telephone lines using frequencies above the audible range (exactly which frequencies depends on the variant, but they generally start at around 25kHz). Unlike general landline telephony, the DSL "node" multiplexes over a large set of telephone lines, so DSL is "always connected" without any dialing involved (this is somewhat different from ISDN).
There are a few common elements of DSL technologies. Consumers will have a "DSL modem" which communicates over the telephone line with a "DSL access multiplexer" or DSLAM, which converts from DSL to another transport technology. This depends on the ISP, but most often the actual transport protocol used within DSL networks is ATM, and the DSLAM converts from ATM over DSL to ATM over Ethernet. The modem handles ATM signaling so that the connection between the modem and the DSLAM---the actual DSL segment---is transparent and basically a long serial line. Ethernet frames are passed over that link, but because there is no proper addressing within the network, PPPoE, or Point-to-Point Protocol over Ethernet (say that five times fast), is used to encapsulate the "payload" Ethernet frames onto the DSL network. This is actually running over ATM, so we have a situation you could call PPPoEoA. Of course PPPoA exists but is not generally used with DSL, for reasons I am not familiar with but suspect are historic. This is all a long explanation of the fact that the MTU or maximum packet size on DSL connections is usually 1492: your standard Ethernet 1500 minus the 8 bytes of PPPoE and PPP headers, and providers sometimes shave it down further to account for ATM overhead.
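As a back-of-the-envelope check, here is a minimal sketch of that arithmetic. The 6-byte PPPoE header and 2-byte PPP protocol field are standard; the note about ATM is illustrative only, since the exact overhead there depends on the provider's configuration.

    # Sketch of the PPPoE MTU arithmetic.
    ETHERNET_MTU = 1500  # standard Ethernet payload size
    PPPOE_HEADER = 6     # PPPoE header bytes
    PPP_HEADER = 2       # PPP protocol field bytes

    pppoe_mtu = ETHERNET_MTU - PPPOE_HEADER - PPP_HEADER
    print(f"PPPoE MTU: {pppoe_mtu} bytes")  # 1492, the usual modem default

    # Providers sometimes advertise lower values still, for example to
    # keep packets aligned to the 48-byte payloads of the ATM cells
    # underneath (illustrative; details vary by provider).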
It is possible to directly run IP over DSL and there are providers that do this, but it is very uncommon in the United States. To add slightly more complexity, it is common for DSL providers to use VLAN tagging to segregate customer traffic from management traffic, and so DSL modems often need to be configured with both PPPoE parameters (including authentication) and a VLAN tag.
Yes, PPPoE has an authentication component. DSL networks do not generally use "low-level" authentication based on modem identities, but instead the DSLAM accepts any PPPoE traffic from modems but at a higher level internet access is denied unless PPPoE authentication is completed successfully. This means that a DSL subscriber is identified by a username and password. Most DSL providers have an autoconfiguration system in place that allows their rental modems to obtain these parameters automatically, but customers that own their own modems will often need to call support to get a username and password.
DSL providers are generally telephone companies and subject to local loop unbundling regulatory requirements, meaning that it is possible to purchase DSL internet service from someone other than your telephone provider, but if you do so you must still pay your telephone provider a monthly fee for the use of their outside plant. In practice this is rarely competitive.
This all describes the general DSL situation, but there are two fairly different DSL variants in use in the US:
[ SIDEBAR ]
An important note for those who have not picked up on it: for historical reasons, network speeds are given in bits per second rather than bytes. This has a tenuous connection to things like symbol rate and baud rate which can become quickly confusing, so bit rate tends to serve as a good common denominator across technologies. It can be annoying, though, since most other things are quoted in bytes, and so you will often need to divide network rates by eight when doing back of the envelope calculations.
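A quick worked example, using a typical advertised rate (the specific number here is just for illustration):

    # Converting an advertised network rate (bits per second) into the
    # bytes per second a download dialog will show.
    advertised_mbps = 25  # a typical ADSL2+ "on paper" downstream rate
    print(f"{advertised_mbps} Mbps is about {advertised_mbps / 8:.1f} MB/s")
    # -> 25 Mbps is about 3.1 MB/s, before protocol overhead takes its cut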
[ END SIDEBAR ]
ADSL
ADSL, or Asymmetric Digital Subscriber Line, is the most common DSL service. The most recent version, ADSL2+, is capable (on paper) of full-duplex operation at 25Mbps down and 3.3Mbps up. It is possible, although not especially common, to bond two lines to double those capabilities. These speeds are rarely obtained in practice. The range of ADSL is generally limited to a few miles and achievable speeds drop off quickly with range. It is very common to see the ADSL speeds on offer drop as you get further from the telephone exchange, which in small towns may be the location of the only DSLAM. However, it is possible for providers to use various technologies to place DSLAMs "in the field" in curbside cabinets, thus reducing the range to the customer. A more robust technology such as ISDN may be used for upstream transit. There are various names for the various types of devices involved, but the simplest is a "loop extender," which is basically just an ADSL repeater.
Typical ADSL speed offerings range from 5 to 15Mbps downstream depending on range from the DSLAM. Upstream is uniformly poor; less than one Mbps of upstream is common even when the downstream speed is on the high end. The downstream/upstream asymmetry is designed into the standard frequency allocations. ADSL has a reputation for high latencies, which has more to do with the typical network architectures of DSL providers than with the transport technology, although ADSL does have some inherent overhead.
VDSL
VDSL, Very High Speed Digital Subscriber Line, is now becoming common in urban environments in the US. VDSL, and its latest standard VDSL2, is capable of much higher bandwidths using the same telephone lines as ADSL. Up to 52Mbps downstream and 16Mbps upstream is possible on paper, and pair-bonding to double these rates is common. Use of curbside DSLAMs is also ubiquitous. As a result, common VDSL speed offerings are as high as 80Mbps downstream. The useful range of VDSL is actually shorter than ADSL, and beyond a range of one mile or so ADSL becomes a better option.
VDSL is a relatively new technology. Unfortunately, DSL providers have not generally made it clear which technology they use, although you can infer it from the bandwidths advertised. CenturyLink, for example, is deploying VDSL in many major cities, and when they do so they begin to advertise 80Mbps service, often at a lifetime rate for extra competitive edge.
DOCSIS
The next important technology is DOCSIS. DOCSIS and DSL are probably the top two technologies in use, and I suspect DOCSIS is now the leader, although DSL has a decided edge in smaller towns. DOCSIS stands for Data Over Cable Service Interface Specification, and to explain it simply, it functions by using the bandwidth allocated to television channels on a cable television system to transmit data. DOCSIS is very popular because it relies on infrastructure which generally already exists (although some upgrades to outside plant are required to deploy DOCSIS, such as new distribution amplifiers), and it can offer very high speeds.
DOCSIS consumers use a DOCSIS modem which communicates with a Cable Modem Termination System or CMTS. DOCSIS natively moves IP and authentication is handled within the management component of the DOCSIS protocol based on the identity (serial number) of the modem. Like DSL, modems rented from the ISP generally autoconfigure, while people who own their own modem will need to contact their ISP and provide the modem's serial number for provisioning. Some DOCSIS providers place unrecognized modems onto a captive portal network, similar to many free WiFi access points, where the user can log into their ISP account or complete some other challenge to have their modem automatically provisioned based on the source of their traffic.
The latest standard, DOCSIS 4, is capable of 10Gbps downstream and 6Gbps upstream. In practice, the limiting factor is generally the uplink at the node. DOCSIS also functions over fairly long ranges, with tens of miles generally being practical. However, as consumer bandwidth demands increase DOCSIS providers are generally hitting the limits of the upstream connection used by nodes, and to address the problem and improve reliability they are deploying more nodes. Many major DOCSIS ISPs are moving to a "node+zero" architecture, where the "plus zero" refers to the number of distribution amplifiers. The goal is for all consumers to be directly connected by a relatively short cable run to a node which serves a relatively small number of users. The node uses multi-gigabit fiber for uplink. This forms the "hybrid fiber-coaxial network" and is practically capable of providing 1Gbps or even 2Gbps symmetric service.
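The arithmetic behind node splitting is simple enough to sketch. To be clear, every figure below is hypothetical, chosen only to show the shape of the problem, not to describe any particular ISP's plant:

    # Hypothetical node-splitting arithmetic.
    node_uplink_gbps = 10.0  # fiber uplink capacity at the node
    subscribers = 200        # homes served by the node
    busy_fraction = 0.3      # fraction actually drawing traffic at peak

    active = subscribers * busy_fraction
    per_sub_mbps = node_uplink_gbps * 1000 / active
    print(f"~{per_sub_mbps:.0f} Mbps per active subscriber at peak")  # ~167

    # Halving the subscriber count per node doubles that figure, which
    # is the entire argument for "node+zero" in one line of arithmetic.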
Unfortunately, I am not currently aware of a DOCSIS provider which actually offers symmetric gigabit service. Technical challenges related to legacy equipment make it difficult to allocate additional channels to upstream, keeping upstream limited to as little as 20Mbps in some cases. Worse, the "legacy equipment" involved here is set-top boxes in customer homes, which are very difficult to replace at scale even setting aside the cost.
DOCSIS provides relatively rich management capabilities as a core part of the protocol, which is why, for example, DOCSIS providers can usually remotely command the customer modem to reboot as part of troubleshooting even if it isn't ISP-owned. These protocols also allow the ISP to push firmware to the modem, and most ISPs refuse service to modems which are not running an ISP-approved firmware version. This is not entirely selfish as the nature of DOCSIS is that a malfunctioning modem could disrupt service across many users.
Further, DOCSIS ISPs often make use of a higher level management protocol called TR-069 which is based on HTTP interactions between the modem and an ISP-operated management server. TR-069 provides the ISP with much greater ability to configure and manage the modem and enables features like changing WiFi network options through the ISP's mobile app. Appreciable security concerns have been identified related to TR-069 but have been overblown in many reports. Unlike DOCSIS's integral management capabilities (which are comparatively very limited), TR-069 must be explicitly configured on the modem, there is no magical discovery of the management server. As a result, if you own your modem, TR-069 is generally not a factor.
I would assert that, from a purely technical analysis, DOCSIS is generally the best choice in urban areas. While it does have limitations compared to GPON, it is significantly less expensive to deploy (assuming existing cable television infrastructure) and can provide symmetric gigabit. Unfortunately, a set of problems including, not insignificantly, the immense stinginess of cable providers means that more typical DOCSIS offerings are up to gigabit downstream and 50Mbps upstream. For DOCSIS to reach its potential it is likely that the cable industry will first need to be burnt to the ground.
WISPs
An up-and-coming last-mile technology is the wireless ISP, or WISP. Although there are other options, WISP virtually always implies the use of point-to-point WiFi in the 5GHz band for last-mile delivery. Proprietary extensions or modifications of the WiFi standards are often used to improve performance and manageability, such as overlaid time-division multiplexing to allow closer positioning of antennas without interference.
While WISPs are proliferating due to the very low startup costs (less than $10k with some elbow grease), they face significant technical limitations. In practice WISPs are generally not able to offer better than 40Mbps, although there are some exceptions. Weather is usually not a significant challenge, but trees are, and some areas may not be amenable to WISP service at all. Many WISPs are recent startups with few or no employees familiar with commercial network operations, and so reliability and security are highly variable.
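For a sense of the physics involved, free-space path loss at 5GHz over typical WISP distances is easy to compute with the standard formula. Note that this assumes a perfectly clear path, which is exactly what trees take away:

    import math

    def fspl_db(distance_km, freq_mhz):
        # Standard free-space path loss formula, in dB. Real WISP links
        # lose more to trees and Fresnel zone obstructions.
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    for km in (1, 5, 10):
        print(f"{km:>2} km at 5.8 GHz: {fspl_db(km, 5800):.1f} dB")
    # Every doubling of distance costs another 6 dB, which the link
    # budget usually absorbs by falling back to slower modulation.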
Less commonly, some WISPs use non-WiFi technologies. There is limited use of unlicensed low-power LTE for consumer internet service, and then a few proprietary technologies that see scattered use. There may be some potential in 11GHz and other point-to-point microwave bands for WISP use although devices to take advantage of these are fairly new to the market.
Overall, WISPs are exciting due to the flexibility and low startup costs, particularly in more sparsely populated areas, but are generally incapable of meaningfully competing with VDSL or DOCSIS providers in areas where these exist.
GPON
Fiber-to-the-home generally implies the use of a Passive Optical Network or PON, most often in the Gigabit variant, GPON. PONs use time-division multiplexing to allow multiple stations (generally one "main" and multiple "consumer") to signal bidirectionally on a single fiber optic cable. They are called "passive" because each consumer is connected to a "trunk" line using a passive optical splitter, essentially just a carefully fused junction of glass that divides the light among the branches. A GPON consumer has an Optical Network Terminal or ONT which communicates with an Optical Line Terminal or OLT at the service node. PON networks generally use IP natively, so the ONT and OLT are essentially just media converters.
PON networks are half-duplex at a low level, but time slots are usually allocated using a demand-based algorithm, and in practice performance is very good for each consumer. Combining PON with wavelength division multiplexing can improve the situation further. The range of GPON goes up to 20km with up to 64 end users on each fiber; some variants allow more of each. Symmetric gigabit service is often offered.
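The sharing arithmetic is worth a quick sketch. The line rate below is the standard GPON downstream figure; the split ratio and busy-user count are hypothetical:

    # Why demand-based slot allocation matters on a PON.
    downstream_mbps = 2488  # standard GPON downstream line rate
    split = 64              # end users sharing the fiber

    print(f"Naive static share: {downstream_mbps / split:.0f} Mbps per user")
    # ~39 Mbps each if every time slot were pinned to one user

    busy_users = 3          # more realistic at any given instant
    print(f"Demand-based share: {downstream_mbps / busy_users:.0f} Mbps each")
    # Idle users' slots get reassigned, so the busy few see far more.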
GPON can offer very good service and is inexpensive compared to the fiber technologies used inside of ISP networks, but there is rarely existing infrastructure that can be used and so deploying GPON is a very expensive process. Nonetheless, for the rare ISP which has the capital to compete with the cable company and isn't, well, the cable company, GPON is generally the delivery technology of choice as it offers speeds competitive with DOCSIS without any of the overhead of legacy cable equipment.
Costs for GPON equipment have recently become very low, but the cost of the equipment is pretty insubstantial compared to the cost of trenching or pole attachment.
Satellite
In more rural areas many people use satellite providers. In this case the consumer has a Very Small Aperture Terminal or VSAT. In modern satellite networks the VSAT is bidirectional, so both uplink and downlink move via satellite (compared to older systems in which uplink was by telephone and downlink by satellite). Satellite service typically offers up to 40Mbps or so of bandwidth, but because current satellite internet technologies use geostationary satellites (which are very far away) latency is considerable, e.g. 250ms base and often quite a bit more. Of course there is promising progress in this area involving, distastefully, Elon Musk, but it is unlikely that satellite service will ever be competitive with DOCSIS or GPON in areas where they are available.
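That base latency figure is straightforward geometry, assuming the best case of a satellite directly overhead (real slant ranges are longer):

    # Propagation delay to a geostationary satellite, best case.
    GEO_ALTITUDE_KM = 35_786  # geostationary orbit altitude
    C_KM_PER_S = 299_792      # speed of light

    leg_ms = GEO_ALTITUDE_KM / C_KM_PER_S * 1000
    print(f"One leg (ground to satellite): {leg_ms:.0f} ms")     # ~119 ms
    print(f"One way (up and back down):    {2 * leg_ms:.0f} ms") # ~239 ms
    print(f"Round trip for a ping:         {4 * leg_ms:.0f} ms") # ~477 ms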
And that's the world of the internet, today! Next, let's dive into history again and talk about the cellular telephone and how it got to be the way it is. This is a very complex area where I have pretty limited knowledge of all developments since the '90s, so we will be together trying to tell our 3GPP apart from our GPRS.