_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2021-07-07 dial 1 800 flowers dot com

A note: Apologies for the long time without content, I have been on a road trip across the southwest and have suffered from some combination of no internet and no motivation. Rest assured, I am back now.

A second note: apologies that computer.rip was down for a half day or so, there was a power interruption at the datacenter and as a Professional DevOps Engineer I am naturally incredibly haphazard about how I run my personal projects. There was a problem with the fstab, and the webserver didn't NFS-mount the Enterprise Content Management System directory of text files when it rebooted.

You've gathered by now that I'm interested in telephone numbers, and we have to date discussed the basic structure of phone numbers in the NANP, premium rate ("900 numbers"), and some special purpose exchange and NPAs (555, 700, etc). As promised, it's time to come back around to talk about the best known special-purpose NPAs: toll-free numbers.

Toll-free numbers are commonly referred to as 1-800 numbers, although this is a bit anachronistic as toll-free telephone numbers in NANP now span 800, 888, 877, 866, 855, 844, 833, and they'll get to 822 before you know it. Originally, though, they were all in the 800 NPA, and it's said that there is still a degree of prestige conferred upon actual 800 numbers. There's not a lot of actual reason for this: while 800 numbers are in relatively short supply, there are still many fly-by-night operations that hold them. In the end, toll-free numbers today serve almost purely as prestige devices, because the majority of consumers use cellular phones with unlimited long-distance calling, and so the number called barely even matters.

Let's teleport ourselves back in time, though, to the wild past of the early '60s. Direct dialing was becoming the norm, even for long distance calls. The majority of telephone owners, though, paid for calls in two basic tiers: calls to the local calling area are effectively free (included in the normal monthly rate for the line), while calls to outside of the local calling area were charged per minute at a stupidly high rate.

This whole issue of "local calling area" is a surprisingly complex one, and perhaps the simplest answer to "what is a local calling area" is "whatever your phone company tells you when you ask, and maybe specified in the front of the phone book." The local calling area in cities sometimes coincided with the NPA (e.g. all calls within the same area code were local), but this was not at all guaranteed and there were many, many exceptions.

The local calling area is better defined in terms of rate centers. A rate center is a geographical area that serves as the smallest organizational unit for telephone tolling purposes. A call to another person within the same rate center will be a local call. A call to another person in a different rate center could be either local or long-distance (toll), depending on the carrier's definition of the local calling area for your rate center. This typically depended on geography. Further complicating things, two users within the same local calling area did not necessarily have the same local calling area themselves.

Let's work an example: You live in Hillsboro, Oregon, so you are in the Beaverton, OR rate center (RC). Beaverton RC has local calling to the Portland, OR rate center. I live in Oregon City, OR, which is in the Clackamas, OR rate center. Clackamas RC has local calling to Portland. We can both call our friend in Portland and it will be a local call. Our friend in Portland can similarly call both of us, as the Portland RC has both Beaverton and Clackamas in its local calling area.

However... Beaverton does not have Clackamas in its local calling area, and neither does Clackamas have Beaverton. To call each other directly would be a long-distance call [1]. This makes some intuitive sense as the distance between the suburbs and the city is smaller than the distance between two suburbs on different sides, and of course residents of the suburbs call residents of the city frequently. However, it has some odd results.
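The relationships in this example reduce to a simple lookup. Here is a minimal sketch of local calling areas as a table, using the rate centers from the example above; the data structure is of course my invention, not how any carrier's tariff database actually looked.

```python
# Hypothetical sketch: each rate center maps to the set of rate centers
# it can call locally (itself included). Note that two rate centers can
# both be local to Portland without being local to each other.
LOCAL_CALLING_AREA = {
    "Beaverton": {"Beaverton", "Portland"},
    "Clackamas": {"Clackamas", "Portland"},
    "Portland":  {"Portland", "Beaverton", "Clackamas"},
}

def is_local_call(caller_rc, callee_rc):
    """A call is local if the callee's rate center is in the caller's
    local calling area; otherwise it is a long-distance (toll) call."""
    return callee_rc in LOCAL_CALLING_AREA[caller_rc]
```

So `is_local_call("Beaverton", "Portland")` is true in both directions, while `is_local_call("Beaverton", "Clackamas")` is false, capturing the suburb-to-suburb toll situation.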

A phone number in the Portland RC is a better phone number than one in Beaverton or Clackamas, because it has a better local calling area: all of the suburbs, rather than just the city and the suburbs to one side.

This is a common situation. Rate centers which are major cities or in general more populous areas are more desirable, because they are local calls for more prospective customers. The problem is that back in the '60s you didn't really get to shop around for a rate center, it was just determined based on wherever your point of service was. This placed businesses based in suburbs at an inherent disadvantage: for people on the other side of town, they would be a long distance call.

The first major method of improving this situation was simply moving one's point of service into the city. One common method was the use of an answering bureau. A business in Beaverton could hire an answering bureau in Portland and list it as their contact number. It would be a local call for all prospective customers, and the business could return calls to customers at their expense. This came with the obvious downside that customers would always have to leave a message when they called, which was irritating---although answering bureaus were very common at the time, especially since prior to mobile phones many small businesses that worked "in the field" (tradespeople for example) would not have anyone in the office to answer calls most of the time.

A later and more complex solution was the use of foreign exchange (FX) service. Under the FX arrangement, a business in Beaverton would pay the telephone company to essentially run a miles-long jumper from their local loop in a Beaverton exchange to an exchange in Portland. This effectively "moved" their phone service to the Portland office and the Portland rate center. Early FX services were literally this simple, with the telco using a spare pair on a long distance line to splice the customer's line to a line at the other exchange. This service was expensive and has fallen out of use, although the terms FXS and FXO (which originated as descriptions for the station and office ends of an FX line) have remained stubbornly common in the world of VoIP-analog bridges despite being archaic and confusing [2].

You can see that both of these approaches are unsatisfactory, and there seems to be an obvious solution: businesses should be able to pay more to just expand the local calling area of their phone, without needing awkward hacks like an FXS.

In fact, there had basically been a solution just like this earlier. So-called "Zenith" numbers were special telephone numbers that did not correspond to a normal physical exchange [3]. Instead, when an operator was asked for a Zenith number they understood it to be a special instruction to look up the actual number and connect the call, but if the call was long-distance they would bill it to the callee instead of the caller. This was toll-free dialing just like we have today, but it required manual effort by the operator who, at the time, would fill out billing tickets for calls by hand. The trouble was that this didn't work at all with direct dialing, the only way to call a Zenith number was to dial zero for the operator and read the number. Customers found this annoying and the telephone companies found it expensive, so there was mutual motivation to find an automated solution.

Although surprisingly janky, a sort of solution was developed quickly for outbound calls: WATS, or Wide Area Telephone Service. WATS was introduced in the early '60s as a simple scheme where a business could pay a flat monthly rate to add additional rate centers to their local calling area, for the purpose of outbound calling only. This could save a lot of money for businesses with clients or offices in other towns. It seemed obvious that the problem of calling areas and Zenith numbers could best be approached by taking WATS and setting it to suck instead of blow. And that's exactly what they did.

In 1967, AT&T introduced inward WATS or InWATS. Much like outbound WATS, InWATS allowed a customer to pay a (large) monthly fee to have their number constitute a local call for customers in other rate centers, even nationwide. It was important that consumers understood that these calls would not incur a toll, and for technical reasons it was desirable to be able to route them differently. For this reason, InWATS numbers were assigned to a new NPA: 800.

While InWATS was similar to our modern toll-free system, it had substantial limitations. First, the rates for InWATS numbers were still based on geographical distance to callers, and InWATS customers could choose (in terms of "bands" or "zones", much like in some transit systems) what distance to pay for. This amusingly maintained the situation where it was worthwhile to strategically place telephone numbers, as an InWATS number in the middle of the country could receive calls from nearly the entire country at a lower rate than an InWATS number located on one of the coasts.

More significantly, though, the technical reality of the phone switching system meant that InWATS was implemented by effectively overlaying the geographical NANP routing system on top of the 800 NPA. For most telephone calls, NPAs identify the physical region of the country to which the call should be routed. For calls to the 800 NPA, the NXX (exchange code) identified the physical area of the country, standing in for the NPA since the NPA was already used to indicate InWATS.

The idea that 800 numbers are "non-geographical" is largely a modern one (and they are not technically "non-geographical" numbers in the sense of 700 and 500). With InWATS, toll-free telephone numbers were still just as geographical as before, just using a second-level "sub-numbering" scheme.
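That sub-numbering scheme is easy to picture in code. This is a purely illustrative sketch: the NXX-to-region assignments below are invented, since I am not reproducing the actual InWATS exchange code allocations.

```python
# Sketch of InWATS-era routing. For an ordinary number, the NPA (area
# code) is the geographic routing key. For an 800 number, the NPA only
# says "this is InWATS," and the NXX stands in as the geographic key,
# one level down. The NXX assignments here are invented for illustration.
INWATS_NXX_REGION = {
    "621": "Chicago area",
    "421": "New York area",
}

def route_key(npa, nxx):
    if npa == "800":
        # toll-free: geography is encoded one level down, in the NXX
        return ("INWATS", INWATS_NXX_REGION.get(nxx, "unknown"))
    # ordinary call: the NPA itself identifies the destination region
    return ("GEOGRAPHIC", npa)
```

The point is simply that InWATS routing reused the same hierarchical, geographic machinery as everything else; only the level at which geography was encoded moved.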

Even more maddeningly, much like WATS before it InWATS handled intrastate and interstate calls completely differently (this was quite simply easier from a perspective of toll regulation). So InWATS numbers subscribed for interstate use actually did not work from within the same state as the subscriber, creating an incentive to put InWATS services in states with small populations in order to minimize the number of people who needed to use a special local number [4]. Although I do not have direct evidence, I will speculate that the confluence of these factors is a major reason that several major national enterprises have located their customer service centers in Albuquerque.

InWATS was replaced in the '80s by a new AT&T service which took advantage of digital switching to eliminate many of the oddities of InWATS service. The major innovation of "Advanced 800," rolled out in 1982, was the use of a "mapping database" that allowed 800 numbers to effectively be "redirected" to any local number. Because tolling was handled digitally using much more flexible configuration, calls to these 800 numbers could be toll-free for all callers but still redirect to any local number. This completely divorced 800 numbers from geography, but for the most part is surprisingly uninteresting because it was really only a technical evolution on the previous state.

A more fundamental change in the 800 number situation happened later in the '80s, as the breakup of the bell system and related events substantially eroded AT&T's monopoly on telephone service. Competitive long distance carriers like MCI had to be allowed to enter the toll-free service market, which meant that a system had to be developed to allocate toll-free numbers between carriers and allow mapping of toll-free numbers to corresponding local (or actual routing) numbers across carrier boundaries.

Two things happened at once: the simple technical reality of needing to manage toll-free numbers across carriers required a more sophisticated approach, and competitive pressures encouraged AT&T to invest in more features for their toll-free service offering. These changes added up to flexible routing of toll-free calls based on various criteria. Further, while 800 numbers were initially distributed between inter-exchange carriers (IXCs, like AT&T, MCI, Sprint, etc) based on number allocation ranges, the inherent "stickiness" of toll-free numbers posed a challenge. Toll-free numbers are often widely published and used by repeat customers, so businesses do not want to change them. This was an obstacle for any competitive carrier trying to win their business away, and it created a desire for number portability much like would later be achieved for local numbers.

This issue broke for toll-free numbers basically the same way it did for local numbers. The FCC issued an order in 1993 stating that it must be possible to "port" toll-free numbers between inter-exchange carriers. Unlike local numbers, though, there was no inherent or obvious method of allocating toll-free numbers (the former geographical and carrier mappings were not widely known to users). This encouraged a completely "open" approach to toll-free number allocation, with all users pulling out of a shared pool.

If this sounds a touch like the situation with DNS, you will be unsurprised by what happened next. A new class of entity was created which would be responsible for allocating toll-free numbers to customers out of the shared namespace, much like DNS registrars. These were called Responsible Organizations, widely shortened to RespOrgs.

The post-1993 system works basically like this: a business or other entity wanting a toll-free number first requests one from a RespOrg. The RespOrg charges them a fee and "assigns" the telephone number to them by means of reserving it in a shared database called SMS/800 (the SMS here is Service Management System, unrelated to the other SMS) [5]. The RespOrg updates SMS/800 to indicate which inter-exchange carrier the toll-free number should be connected to. Whenever a customer calls the toll-free number, their carrier consults SMS/800 to determine where to connect the call. The inter-exchange carrier is responsible for routing it from that point on.
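The post-1993 flow amounts to two lookups: one in the shared SMS/800 database to find the inter-exchange carrier, and one within that carrier to find where the call actually terminates. Here's a toy model of that flow; all the names and numbers are invented, and the real SMS/800 dip happens over SS7 signaling rather than anything resembling this.

```python
# Toy model of post-1993 toll-free routing. A RespOrg reserves a number
# in the shared SMS/800 database and records which inter-exchange
# carrier (IXC) receives calls to it; the IXC then maps the toll-free
# number to an actual routing number. All entries are invented.
SMS_800 = {
    "8005551234": {"resporg": "ExampleComm", "ixc": "IXC-A"},
}
IXC_ROUTING = {
    "IXC-A": {"8005551234": "5035550100"},
}

def route_toll_free(number):
    """Resolve a toll-free number: the originating carrier dips
    SMS/800 to find the responsible IXC, which routes the call on
    to the subscriber's real number."""
    entry = SMS_800.get(number)
    if entry is None:
        return None  # number not assigned to any subscriber
    ixc = entry["ixc"]
    return IXC_ROUTING[ixc][number]
```

The key property this models is that the toll-free number itself carries no routing information anymore; everything lives in the shared database, which is what makes porting between carriers a database update rather than a rewiring job.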

In practice, this looks much simpler for many users, as it's common (particularly for smaller customers) for the RespOrg to be the same company as the inter-exchange carrier. Alternatively, the RespOrg might be the same company as, or a partner of, a VoIP or other telephone service provider. Many people might just use a cheap online service to buy a toll-free number that points at their local (mobile or office, perhaps) number. They don't need to know that behind the scenes this involves a RespOrg, an inter-exchange carrier, and routing within the inter-exchange carrier and service provider to terminate the call.

The situation of DNS registrars has been subject to some degree of abuse or at least suspicious behavior, and the same is true of RespOrgs. It is relatively easy to become a RespOrg, and so there's a pretty long list of them. Many RespOrgs are providers of various types of phone services (carriers, VoIP, virtual PBX, etc.) who have opted to become a RespOrg to optimize their ability to assign toll-free numbers for their customers. Others, though, are a bit harder to explain.

Perhaps the most infamous RespOrg is a small company called PrimeTel. War-dialers and other telephone enthusiasts have long noted that, if one dials a selection of random toll-free numbers, you are likely to run into a surprising number of identical recordings. Often these are phone sex line solicitations, but sometimes they're other types of content that is uninteresting except for the fact that it appears over and over again on large lists of telephone numbers. These phone numbers all belong to PrimeTel.

Many words have been devoted to the topic of PrimeTel, most notably an episode of the podcast Reply All. I feel much of the mystique of the issue is undeserved, though, as I believe that one fact makes PrimeTel's behavior completely intuitive and understandable: 47 CFR § 52.107 forbids the hoarding of toll-free numbers.

That is, toll-free numbers are a semi-limited resource with inherent value due to scarcity, particularly those in the 800 NPA as it is viewed as the most prestigious (unsurprisingly, PrimeTel numbers are more common in 800 than in other NPAs). This strongly suggests that it should be possible to make money by speculatively registering toll-free numbers in order to resell them, as is common for domain names. However, the FCC explicitly prohibits this behavior, largely by stating that toll-free numbers cannot be held by a RespOrg if there is not an actual customer for which the number is held.

So PrimeTel does something that is pretty obvious: in order to speculatively hold toll-free numbers, it acts as the customer for all of those numbers.

Since it's hard to come up with a "use" for millions of phone numbers, PrimeTel settles for simple applications like sex lines and other conversation lines. It helps that PrimeTel's owners seem to have a historic relationship with these kinds of operations, so it is a known business to them. Oddly, many of the PrimeTel "services" don't seem to actually work, but that's unsurprising in light of the fact that PrimeTel is only interested in the numbers themselves, not in making any profit from the services they connect to. From this perspective, it's often better if the services don't work, because it reduces PrimeTel's expenses in the form of time callers spend on the line.

The case of PrimeTel is often discussed as an egregious example of speculating on (often called warehousing) toll-free numbers, although they are not the only RespOrg accused of doing so. The surprising thing is that the FCC has never taken action against PrimeTel, but, well, the FCC has a reputation for never taking action on things.

Ultimately the impact is probably not that large. It's easy to obtain toll-free numbers in the "less popular" toll-free NPAs such as 844. I have observed that some telecom vendors have zero availability in 800, but that seems to come down to a limitation of the RespOrg relationships they have as the VoIP trunk vendor I use (which is itself a RespOrg) consistently shows tens of 800 numbers available. I tend to like 888s, though. 800 wouldn't get you anything on a slot machine.

In a future post, I will dig a little more into the issue of number portability as it's a major driver of some of the complexity in the phone system. Another topic adjacent to this that bears further discussion is the competitive inter-exchange carriers, which are a major part of the broader story of telephone and technology history.

[1] I had originally tried to construct this example in New Mexico, but this state is so sparsely populated that there are actually very few situations of this type. The Albuquerque RC spans nearly the entire central region of the state, and essentially all calls between RCs are long-distance calls in NM. NM still illustrates oddities of the distance tolling scheme, though, as there are rate centers that clearly reflect history rather than the present. Los Alamos and White Rock are different rate centers despite White Rock being effectively an annexed neighborhood of Los Alamos. They each have each other in their local calling areas.

[2] A related concept to an FXS line was the DISA, or Direct Inward System Access. A DISA was a system, typically a feature of a key system or PBX, that allowed someone calling into a phone system to be connected to an outside line on that same phone system. This made it so that an employee of a company in Portland, at home in Beaverton, could call the Portland office and then access an outside line to make a call... from the Portland rate center. A number of businesses installed these because they could save money on calling between offices (by "bouncing" calls through a city office to avoid long-distance tolls), but as you can imagine they were highly subject to abuse. I used to run a DISA on a telephone number in the Socorro rate center so that I could use "courtesy" local-only phones on the college campus to make long distance calls (at my expense still, but that expense was minuscule and it was useful when my phone was dead).

[3] Why Zenith? The answer is fairly simple. The letter Z was sufficiently rare as the start of a word that it was not included on most telephone dial labels. So, in the time when direct-dialing of calls was done by using the first letters of the exchange name, a customer seeing a "ZEnith" number would quickly realize that "ZE" was not something they could dial, which would direct them to call the operator. By the same token, of course, there are not many words to use as exchange names that satisfy this requirement, so Zenith became pretty standard.

[4] This situation somewhat persists today in an odd way. Toll-free numbers cannot be the recipients of collect calls, but there is no international toll-free scheme. Take a look at the back of your credit card: most major banks will list a toll-free number for use within the US, but a local number for international use, because they will accept collect calls on that number. International toll-free calling remains an unsolved problem, except that the internet is increasingly eliminating the need.

[5] SMS/800 is actually operated by a company called Somos, under contract for the FCC. Somos is also currently the NANP Administrator (NANPA), meaning it is responsible for managing the allocation of NPAs and other elements of administering NANP. There's a whole little world of the "telephone-industrial complex." For example, the role of NANPA formerly belonged to a company called Neustar, formerly a division of Lockheed Martin, which still manages cross-carrier systems such as the STIR/SHAKEN certification authority. Neustar has hired executives away from SAIC/Leidos which has had critical roles in both telephone and internet administration at various points. The whole world of grift on the DoD is tightly interconnected and extends well to grift on other federal agencies.


>>> 2021-06-19 The Visi On Vision

First, after lengthy research and development I have finally followed through on my original vision of making Computers Are Bad available via Gopher. Check it out at gopher://waffle.tech/computer.rip.

Let's talk a bit more about GUIs. I would like to begin by noting that I am intentionally keeping a somewhat narrow focus for this series of posts. While there were many interesting GUI projects across a range of early microcomputer platforms, I am focusing almost exclusively on those GUIs offered for CP/M and DOS. I am keeping this focus for two reasons: First, these are the microcomputer platforms I am personally most interested in. Second, I think the landscape of early CP/M and DOS GUIs is an important part of the history of Windows, because these are the GUIs with which Windows directly competed. A real portion of the failure of Windows 1 and 2 can be attributed to Microsoft's lackluster effort compared to independent software vendors---something quite surprising from the modern perspective of very close coupling between the OS and the GUI [1].

Let's talk, then, about my personal favorite GUI system, and one of the most significant examples of stretching the boundary between operating system and application by implementing basic system features on top of an OS that lacks them... but first, we need to take a step back to perhaps the vintage software I mention most often.

VisiCalc is, for most intents and purposes, the first spreadsheet. There were "spreadsheet-like" applications available well before VisiCalc, but they were generally non-interactive, using something like a compiled language for formulas and then updating data files offline. VisiCalc was the first on the market to display tabular data and allow the definition of formulas within cells, which were then automatically evaluated as the data they depended on changed. It was the first time that you could change one number in a spreadsheet and then watch all the others change in response.

This is, of course, generally regarded as the most powerful feature of a computer spreadsheet... because it allows for the use of a spreadsheet not just as a means of recording and calculation but as a means of simulation. You can punch in different numbers just to see what happens. For the most part, VisiCalc was the first time that computers allowed a user to "play with numbers" in a quick and easy way, and nearly overnight it became a standard practice in many fields of business and engineering.
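That "change one number and watch the others change" behavior is, at heart, just dependency-driven recomputation. Here is a deliberately minimal sketch of the idea; VisiCalc's actual recalculation engine was far more sophisticated than this, so treat it only as an illustration of why a formula cell tracks its inputs.

```python
# Minimal sketch of the spreadsheet idea: a cell holds either a plain
# value or a formula (here, a callable that reads other cells), and
# reading a formula cell recomputes it from its current inputs.
class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        self.cells[name] = value  # a number, or a callable formula

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 100)
s.set("A2", 250)
s.set("A3", lambda sh: sh.get("A1") + sh.get("A2"))  # =A1+A2
print(s.get("A3"))  # 350
s.set("A1", 500)    # change one number...
print(s.get("A3"))  # ...and the total follows: 750
```

Real spreadsheets invert this: rather than recomputing on read, they track which cells depend on which and propagate changes forward, which is what made VisiCalc's interactive recalculation feel instantaneous.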

Released in 1979, VisiCalc was one of the greatest innovations in the history of the computer. VisiCalc is widely discussed as being the "killer app" for PCs, responsible for the introduction of microcomputers to the business world which had formerly eschewed them. I would go one further, by saying that VisiCalc was a killer app for the GUI as a concept. VisiCalc was one of the first programs to truly display the power of direct manipulation and object-oriented interface design, and it wasn't even graphical. It ran in text mode.

We have already, then, identified VisiCalc's creator Dan Bricklin and its publisher VisiCorp [2] as pioneers of the GUI. It is no surprise that this investment in the GUI goes beyond just the spreadsheet... and yet it would surprise many to hear that VisiCorp was also the creator of one of the first complete GUIs for DOS, one that was in many ways superior to GUIs developed well after.

By 1983, VisiCorp had expanded from spreadsheets to the broader world of what we would now refer to as productivity software. Alongside VisiCalc were VisiTrend/VisiPlot for regression and plotting [3], word processor VisiWord, spell checker VisiSpell, and proto-desktop database VisiFile. The problem was this: each of these software packages was fully independent, with any interoperation (such as spell checking a document or plotting data) requiring saving a file, launching a new program, and opening it there.

Of course this was a hassle on a non-multitasking operating system, although multitasking within the scope of a user was sufficiently uncommon at the time that it was not necessarily an extreme limitation. Nonetheless, the tides were turning in the direction of integrated software suites that allowed simultaneous interoperation of programs. In order to do this effectively, a new paradigm for computer interface would be required.

In fact this idea of interoperation of productivity software is an important through-line in GUI software, with most productivity suite developers struggling with the same problem. It tended to lead to highly object-oriented, document-based, componentized software. Major examples of these efforts are the Apple Lisa (and the descendant OpenDoc framework) and Microsoft's OLE, as employed in Office. On the whole, none of these have been very successful, and this remains an unsolved problem in modern software. There is still a great deal of saving the output of one program to open in another. I will probably have a whole message on just this topic in the future.

In any case, VisiCorp realized that seamless interoperation of Visi applications would require the ability to run multiple Visi applications easily, preferably simultaneously. This required a GUI, and fortunately for VisiCorp, the GUI market was just beginning to truly take off.

In order to build a WIMP GUI there are certain fundamental complexities you must address. First, GUI environments are more or less synonymous with multitasking, and so there must be some type of process scheduling arrangement, which had been quite absent from DOS. Second, both multitasking and interprocess communication (which is nearly a requirement for a multitasking GUI) all but require virtual memory. Multitasking and virtual memory management are today considered core features of operating systems, but at this point in time they were unavailable on many operating systems and so anyone aiming for a windowed environment was responsible for implementing these themselves.

Released in late 1983, VisiCorp's Visi On GUI environment featured both of these. Multitasking was not at all new, and as far as I can tell Visi On's multitasking was cooperative (it is very possible I am wrong on this point; it is hard to find a straight answer to this question), so the multitasking capability was not especially cutting edge. What was quite impressive is Visi On's implementation of virtual memory, complete with page swapping, which made it practical to have multiple applications running even if they were heavy applications like VisiCorp productivity tools.
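For readers unfamiliar with the term: cooperative multitasking means each task runs until it voluntarily yields control, with a scheduler simply rotating among them. Here is a sketch of that scheme using Python generators as stand-in tasks; Visi On's actual implementation is not public, so this illustrates only the general technique, not their code.

```python
# Sketch of cooperative multitasking: tasks run until they voluntarily
# yield, and a simple scheduler round-robins among them. A task that
# never yields would hang the whole system, which is the scheme's
# classic weakness.
def scheduler(tasks):
    ready = list(tasks)
    while ready:
        task = ready.pop(0)
        try:
            next(task)          # run the task until its next yield
            ready.append(task)  # it yielded: back of the queue
        except StopIteration:
            pass                # task finished: drop it

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield  # cooperatively give up the CPU

log = []
scheduler([worker("calc", 2, log), worker("word", 2, log)])
print(log)  # the two workers' steps are interleaved
```

The contrast with preemptive multitasking (where the OS forcibly interrupts tasks on a timer) is exactly why cooperative designs were feasible on hardware and operating systems that offered no such interrupts.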

Beyond its implementation of multitasking and virtual memory, Visi On was a graphics mode application (i.e. raster display) and supported a mouse. The mouse was used to operate a fundamentally WIMP UI with windows in frames, drop-down menus at the top of windows, and a cursor... fundamentally similar to both pioneering GUIs such as the Alto and the environments that we use today. Visi On allowed multiple windows to overlap, which sounds simple but was not to be taken for granted at the time.

Perhaps the most intriguing feature of Visi On is that it was intended to make software portable. Visi On applications, written in a language called Visi C, targeted a virtual machine called the Visi Machine. The Visi Machine could in theory be ported to other architectures and operating systems, making Visi On development a safer bet for software vendors and adoption of Visi On software a safer bet for users. This feature was itself quite innovative, reminiscent of what Java aimed for much later.
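The portability proposition is the same one every bytecode VM makes: applications target a small virtual instruction set, and only the interpreter needs porting to each host. Here is a toy stack machine in that spirit; the instruction set is invented for illustration, as the real Visi Machine's design is not something I can reproduce.

```python
# Toy stack machine illustrating the Visi Machine idea: the program is
# host-independent "bytecode," and porting means reimplementing only
# this interpreter on the new OS/architecture. Invented instruction set.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4 expressed as portable "bytecode"
prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
```

`run(prog)` evaluates the program on any host the interpreter runs on, which is the whole pitch: the Visi C toolchain and Visi Machine promised the same decoupling that Java's "write once, run anywhere" marketed a decade later.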

For the many things that Visi On was, there were several things that it was not. For one, Visi On did not embrace the raster display as much as even other contemporary GUIs. There was virtually no use of icons in Visi On. Although it ran in graphics mode it was, visually, very similar to VisiCorp's legacy of text-mode software with GUI-like features.

One of the most significant limitations of Visi On is reflective of the basic problem with GUI environments running on existing operating systems. Visi On was not capable of running DOS software.

This sounds sort of bizarre considering that Visi On itself was a DOS application. Technically, it makes sense, though. DOS was a non-multitasking operating system with direct memory addressing and no hardware abstraction. As a result, all DOS programs were essentially free to assume that they had complete control of the system. DOS applications would freely write to memory anywhere they pleased, and never yielded control back to the system [4]. In short, they were terrible neighbors.

While some GUI systems found ways to coexist with at least some DOS applications (notably, Windows), Visi On did not even make the attempt. Visi On was only capable of running applications specifically built for it, and all other applications required that the user exit Visi On back to plain old DOS. If you wonder why you have never heard of such a revolutionary software package as Visi On, this is one major reason: Visi On's incompatibility with the existing stable of DOS applications made it unappealing to most users, who did not want to live a life of only VisiCorp products.

The other big problem with Visi On was the price. Visi On was expensive to begin with, retailing at $495. It had particularly high system requirements in addition. Notably, the use of virtual memory and swapping required something to swap to... Visi On required a hard drive, which was not yet common on PCs. All in all, a system capable of running Visi On would be a huge expense compared to typical PCs and even other GUI systems that emerged not long after.

Visi On had a number of other intriguing limitations to boot. Because it was released for DOS 2, which used FAT12, it could only be run on a FAT12 system even as DOS 3 made the jump to FAT16... among the many things Visi On had to implement to enable multitasking was direct interaction with storage. Visi On also required a Mouse Systems mouse, which was the standard at release but was soon after obsoleted (for most purposes) by the Microsoft mouse standard, so even obtaining a mouse that worked with Visi On could be a hassle.
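To see why software written against FAT12 could not simply be pointed at a FAT16 volume, consider how the two formats store their allocation tables. This sketch (my own illustration, not VisiCorp code; the function name and sample data are invented) decodes FAT12's packed 12-bit entries, a layout with no analog in FAT16's flat array of 16-bit words:

```python
def fat12_entry(fat: bytes, cluster: int) -> int:
    """Read the 12-bit FAT entry for a cluster from a raw FAT12 table.

    FAT12 packs two 12-bit entries into every three bytes, so code
    written against this layout cannot read a FAT16 table, where each
    entry is a simple little-endian 16-bit word.
    """
    offset = cluster * 3 // 2
    pair = fat[offset] | (fat[offset + 1] << 8)  # two bytes as one word
    if cluster % 2 == 0:
        return pair & 0x0FFF  # even cluster: low 12 bits of the word
    return pair >> 4          # odd cluster: high 12 bits of the word

# Two entries, 0x123 and 0x456, packed into three bytes
fat = bytes([0x23, 0x61, 0x45])
```

A filesystem driver baked around this bit-twiddling has to be rewritten, not just recompiled, for FAT16, which goes some way toward explaining why Visi On stayed stuck on DOS 2-era disks.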

In the end, Visi On's problems were at least as great as its innovations... cost of a working system most of all. Visi On was the first proper GUI environment to market for the IBM PC, but many others followed very quickly after, including Microsoft's own Windows (which was, debatably, directly inspired by Visi On). More significantly at the time, the Macintosh was released shortly after Visi On. The Macintosh was a lemon in many ways, but did gain appreciable market share by fixing the price issues with the Lisa (admittedly partially through reduced functionality and a less ambitious interface).

The combination of Visi On's high price, limitations, and new competition was too much for VisiCorp to bear. Perhaps VisiCorp could have built on its early release to remain a technical leader in the space, but there were substantial internal issues within VisiCorp that prevented Visi On from receiving care and attention after its release. It became obsolete very quickly, and this coincided with VisiCalc encountering the same trouble: ironically, Lotus 1-2-3 was far more successful in taking advantage of the raster display (by being available for common hardware configurations, unlike Visi On), which led to VisiCalc itself becoming obsolete.

Shortly after release, in 1984, VisiCorp sold Visi On to CDC. CDC didn't really have much interest in the software, and neither enhanced it nor marketed it. Visi On died an ignominious death, not even a year after its release... and that was the end of the first GUI for the IBM PC. Of course, there would be many more.

[1] Of course you may be aware that non-NT Windows releases (up to Millennium Edition) similarly consisted basically of Windows running as an application on DOS, although the coupling became tighter and tighter with each release. This is widely viewed as one of the real downfalls of these operating systems because they necessarily inherited parts of DOS's non-multitasking nature, including an if-in-doubt-bail-out approach to error handling in the "kernel." Imagine how much worse that was in these very early GUIs!

[2] The Corporate Entity Behind VisiCalc went through various names through its history, including some acquisitions and partnerships. I am always referring to the whole organization behind VisiCalc as VisiCorp for simplicity and because it's the best name out of all of them.

[3] This view of regression and plotting as coupled features separate from the actual spreadsheet is still seen today in spreadsheets such as Excel, where regression and projection are most clearly exposed through the plotting tool. This could be said to be the main differentiation between spreadsheets and statistical tooling such as Minitab: spreadsheets do not view operations on vectors as a core feature. Nonetheless, Excel's inability to produce a simple histogram without a plugin for decades was rather surprising.

[4] There were DOS applications that produced a vestige of multitasking, called TSRs for Terminate and Stay Resident. These were not multitasking in any meaningful way, though, as the TSR had to set an interrupt handler and hope the running application did not change it. The TSR could only gain control via an interrupt. When the interrupt occurred, the TSR became the sole running task. Of course, these limitations made the "multitasking-like" TSRs that existed all the more interesting.


>>> 2021-06-12 ieee 1394 for grilling

To begin with, a reader emailed me an objection to my claim that Smalltalk has never been used for anything. They worked at an investment bank you have heard of where Smalltalk was used for trading and back office systems, apparently at some scale. This stirred a memory in me---in general the financial industry was (and to some extent is) surprisingly interested in "cutting edge" computer science, and I think a lot of the technologies that came out of first-wave artificial intelligence work really did find use in trading especially. I'd be curious to hear more about this from anyone who worked in those environments, as I know little about finance industry technology (despite my interest in their weird phones). Also, I am avoiding naming this reader out of respect for their privacy and because I neglected to ask them if it's okay to do so before going to publish this. So if you email me interesting facts, maybe do me a favor and mention whether or not you mind if I publish them. I'm bad at asking.

And now for something completely different.

Years ago, at a now-shuttered Smith's grocery store in my old home of Socorro, New Mexico, I did a dramatic double-take at a clearance rack full of Firewire. This Firewire was basically a steel cable used like a skewer but, well, floppy. The name got a chuckle out of me and this incident somehow still pops into my mind every time I think about one of my "favorite" interconnects: IEEE 1394.

IEEE 1394 was developed as a fast serial bus suitable for use with both storage devices and multimedia devices. It was heavily promoted by Apple (its original creator) and present on most Apple products from around 2000 to the switch to Thunderbolt, although its popularity had decidedly waned by the time Thunderbolt repeated its mistakes. FireWire was never as successful as USB for general-purpose use. There are various reasons for this, but perhaps the biggest is that FireWire was just plain weird.

What's it called?

IEEE 1394 was developed by several groups in collaboration, but it was conceived and championed by Apple. Apple refers to it by the name FireWire, and so do most humans, but Apple held a trademark on that name. Although Apple made arrangements to license the trademark to a trade association for use on other implementations in 2002, most PC manufacturers continued to use the term IEEE 1394 long after that. I am not clear on whether this was simple aversion to using a name strongly associated with a competitor or if these implementations were somehow not blessed by the 1394 Trade Association.

In any case, you will probably find the terms FireWire and IEEE 1394 used with roughly equal frequency. For further confusion, Sony uses the term i.LINK to refer to IEEE 1394 on their older products including cameras and laptops. Wikipedia says that TI also refers to it as Lynx, but I haven't seen that name personally and cursory internet research doesn't turn up a whole lot either.

The lack of a single, consistent brand identity for FireWire might be seen as its first major mistake. My recollection from FireWire's heyday is that there were indeed people who did not realize that FireWire devices could be used with non-Apple computers, even though "IEEE 1394" interfaces were ubiquitous on PCs at the time. I think this must have negatively impacted sales of FireWire peripherals, because by the time I was dealing with this stuff the only storage peripherals being sold with FireWire were being marketed exclusively to Apple users by historically Apple-associated brands like Macally and LaCie.

What does it look like?

Further contributing to compatibility anxiety was the variety of physical connectors in use. The major FireWire connectors in use were (most commonly) called Alpha, Sony, and Beta. The difference between Alpha and Beta was one of speed, as Alpha was designed for FireWire 400 (400Mbps) and Beta for FireWire 800 (800Mbps). Even this change, though, required the use of so-called "Bilingual" cables with Alpha on one end and Beta on the other.

The Sony standard, which worked only with FireWire 400, was smaller and so popular on mobile or otherwise low-profile devices. A number of laptops also used this smaller connector for reasons I'm not completely clear on (the Alpha connector is not significantly larger than USB).

The result was that practical use of FireWire frequently required adapters or asymmetric cables, even more so than USB (where the device connector was inconsistent) since both ends had a degree of inconsistency involved. The hassle was minor but surely didn't help.

Just to make things more fun, FireWire could be transported over twisted pair (UTP) and efforts were made towards FireWire over single mode fiber. I'm not aware of any significant use of these, but the idea of running FireWire over UTP will become significant later on.

Is it cooler than USB?

Unlike USB and other contemporary peripheral interconnects, FireWire had complex support for management and configuration of the bus. Unlike USB, in which all traffic is mediated by the host computer, FireWire supported arbitrary groups of up to 63 devices in a tree. While there is a "root node" with some centralized responsibility in the operation of the bus, any device can send data directly to any other device without a copy operation at the root node.

This meant that FireWire was almost more a network protocol than a mere peripheral interconnect. In fact, it was possible to transport Ethernet frames over FireWire and thus use it as an IP network technology, although this wasn't especially common. Further supporting network usage, FireWire supported basic traffic engineering in the form of dedicated bandwidth for certain data streams. This was referred to as isochronous mode, and its ability to guarantee a portion of the bus to real-time applications is reflective of one of FireWire's major strengths (suitability for multimedia) and reminds me of just how uncommon this is in common computer systems, which makes me sad.
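The admission-control flavor of isochronous mode can be sketched with a toy allocator. The 125 microsecond cycle and the roughly 80% cap on isochronous reservations are real IEEE 1394 figures; the class, method names, and everything else here are my own invention for illustration:

```python
class IsochronousBus:
    """Toy model of FireWire-style bandwidth reservation (illustrative only).

    Real IEEE 1394 divides time into 125 microsecond cycles and caps
    isochronous reservations at roughly 80% of each cycle; the API here
    is invented for the sketch.
    """
    CYCLE_US = 125.0
    ISO_LIMIT = 0.80  # fraction of each cycle reservable for isochronous use

    def __init__(self):
        self.reserved_us = 0.0

    def reserve(self, us: float) -> bool:
        """Reserve 'us' microseconds of every cycle; refuse rather than oversubscribe."""
        if self.reserved_us + us > self.CYCLE_US * self.ISO_LIMIT:
            return False  # admission control: the stream is rejected up front
        self.reserved_us += us
        return True

    def async_slack_us(self) -> float:
        """Time left per cycle for best-effort (asynchronous) transfers."""
        return self.CYCLE_US - self.reserved_us
```

The key property is that a stream that cannot be guaranteed its slice is refused outright, rather than degrading everyone after the fact... the opposite of opportunistic, best-effort scheduling.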

Despite the common perception in the computing industry that opportunistic traffic management is better^wmore fun^w^weasier to implement, FireWire's allocated bandwidth capability turned out to be one of its most important features, as it fit a particular but important niche: camcorders.

The handheld camcorders of the early 2000s mostly used DV (digital video), which recorded a digital stream onto a magnetic tape (inexpensive random-access storage was not sufficiently durable or compact at the time). In order to transfer a video to a computer, the tape was played back and the contents of the tape sent directly to the computer, which recorded it. USB proved unequal to the task.

It's not quite as simple as USB being too slow; USB 2.0 could meet the data rate requirements. The problem is that USB (until USB 3.0) was polling-based, and so reliable transfer of digital video from a tape relied on the computer polling sufficiently frequently. If it didn't---say, because the user was running another program during the transfer---the video would be corrupted. It turns out that, for moving digital media at original quality, allocated bandwidth matters.
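The failure mode is easy to model: the tape produces data at a fixed rate whether or not the host is ready, so a device buffer can only absorb so much host inattention. In this toy simulation (all rates, buffer sizes, and names are invented for illustration), regular polling keeps up, but one long stall loses data:

```python
def transfer(poll_gaps_ms, rate=1000, buf_cap=4000):
    """Toy model of a polled tape transfer.

    A tape delivers 'rate' bytes every millisecond into a fixed-size
    device buffer; the host only drains the buffer when it polls, with
    'poll_gaps_ms' giving the milliseconds between successive polls.
    Returns the number of bytes lost to buffer overruns.
    """
    buf = lost = 0
    for gap in poll_gaps_ms:
        for _ in range(gap):            # the tape keeps producing between polls
            buf += rate
            if buf > buf_cap:
                lost += buf - buf_cap   # overrun: this video data is gone
                buf = buf_cap
        buf = 0                         # the host finally polls and drains
    return lost

# Steady 2 ms polling loses nothing; a single 10 ms stall (busy host)
# overruns the buffer and corrupts the recording.
```

A device with a reserved slice of every bus cycle never sees the long gap in the first place, which is why the DV use case landed so firmly on FireWire.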

Note that FireWire is effectively acting as a packetized video transport in this scenario, just with some extra support for a control channel. This is very similar to later video technologies such as HDMI.

Did the interesting features become a security problem?

The more complicated something is, the more likely it is that someone will use it to steal your credit card information. FireWire is no exception. Part of FireWire's performance advantage was its support for DMA, in which a FireWire device can read or write information directly from a computer's memory. This was a useful performance optimization, especially for high-speed data transfer, because it avoided the need for extra copies out of a buffer.

The problem is that memory is full of all kinds of things that probably shouldn't be shared with every peripheral. FireWire was introduced before DMA was widely seen as a security concern, and well before IOMMUs provided security protections on DMA. On many real FireWire implementations, access to physical memory was completely unrestricted. Every FireWire device was (potentially) a memory collection device.

What happened to FireWire?

Consumer adoption was always poor outside of certain niche areas such as the DV video transfer use case. I suspect that a good portion of the issue was the higher cost of FireWire controllers (due to their higher complexity), which discouraged FireWire in low-cost peripherals and cemented USB as a more, eh, universal solution. Consumer perceptions of FireWire as being more complex than USB and somewhat Apple specific were likely an additional factor.

That said, the final nail in FireWire's coffin was probably a dispute between Apple and other vendors related to licensing costs. FireWire is protected by a substantial patent portfolio, and in 1999 Apple announced a $1-per-port licensing fee for use of the technology. Although the fee was later reduced, it was a fiasco that took much of the wind out of FireWire's sails, particularly since some major partners on FireWire technology (including Intel) saw it as a betrayal of previous agreements and ended their active promotion of FireWire.

In summation, FireWire seems to have fallen victim to excessive complexity, costly implementation, and licensing issues. Sound familiar? That's right, there's more commonality between FireWire and Thunderbolt than just the name.

While Apple stopped supporting FireWire some years ago, it continues to see a few applications. IEEE 1394 was extended into embedded and industrial buses and is used in the aerospace industry. It also continues to have some use in industrial automation and robotics, where it's used as a combined transport for video and control with machine vision cameras. That said, development of the FireWire technology has basically stopped, and it's likely these uses will fade away in the coming years.

Last of all, I have to mention that US cable boxes used to be required to provide FireWire ports. The reason relates to the conflict of cable providers and cable regulators in the United States, which will be its own post one day.


>>> 2021-06-07 building the first guis

The modern GUI, as we understand it, can be attributed almost entirely to the work of Douglas Engelbart.


In fact, it is rather surprising to me that so much can be attributed to one person. I have said before that the technology industry moved so quickly that nearly every significant innovation can be attributed to multiple, parallel efforts. That is probably true of the GUI as well, but any parallel efforts have been largely forgotten in comparison to Engelbart's pioneering work.

In 1968, Engelbart gave a conference demonstration of a project he had built while at SRI. Generally based on Vannevar Bush's [1] conceptual design for the "memex," Engelbart's effort put together nearly all of the major aspects of a modern GUI system. There was a mouse, there were windows, buttons, hyperlinks, menus, everything you could want. The GUI, to a remarkable degree, was just invented all at once.

Of course Engelbart was not prescient. He made a number of missteps, many of which would be repeated by the XPARC work on the Alto, which was closely based on Engelbart's demonstration. Most amusingly, Engelbart found it unlikely that computer users would want to use a mouse with one hand when the keyboard requires both. As a solution he proposed (and used) a one-handed, chord-based keyboard. Despite the best efforts of many dweebs, one-handed text entry has never caught on [2].

More profoundly, though, Engelbart failed to anticipate the complete lack of interest in actually implementing the concepts he demonstrated. Despite the amazing impact of his demonstration on the audience, the technology was complex and difficult to build, and bore little resemblance to the text-mode, command-oriented environment which was the respected norm in business computing.

Engelbart invented the modern GUI in 1968. It would not be available on the market until 1981.

From our comfortable position today it is hard to imagine how this could be. GUIs seem to be the obvious progression in computer interfaces. Yet, during Engelbart's work his vision was regarded as largely academic, not practical. GUIs as a concept were closely tied to cybernetics and artificial intelligence, fields which attracted a great deal of graduate students but very few actual users. The GUI was cool, it was interesting, but it was not practical.

This situation is perhaps most exemplified by Smalltalk. Smalltalk was developed at XPARC (that's the Xerox Palo Alto Research Center) as a teaching language, and was best known for being an early object-oriented language and for its frequent implementation in highly GUI-centric virtual machines. Major implementations like Squeak couple Smalltalk with graphical development and debugging environments which are surprisingly cutting edge, and yet completely unused.

You see, Smalltalk, despite its innovations, has basically always been constrained to academia. Most CS students are exposed to Smalltalk at some point (probably in a programming language theory course), but no one actually uses it for anything. The situation was largely the same for all graphical environments through the course of the '70s and to a good degree into the '80s.

Many new technologies fall into this trap to some degree, being the subject of a great deal of excited research but never bridging the gap into wide-scale implementation. For example, basically the entire field of computer usability.


What unstuck the GUI and pushed it into the world of industry? Basically Steve Jobs, although he too suffered a few false starts. The Lisa was technically advanced but a commercial failure, the Macintosh was a commercial success but relatively primitive. Nonetheless, the Macintosh was essentially the next major step from Engelbart's demo, and it established many of the norms for GUIs for years to come.

The relative success of the Macintosh compared to the costly but significantly superior Lisa is a rather unfortunate situation. For the most part, the Lisa was the more innovative and capable machine. The Macintosh was essentially a compromise, stripping out the most interesting features of the Lisa to achieve a low price and more gentle learning process. To be quite honest, the Macintosh sucked, which is why we far more often talk about its various successors.

I will probably devote an entire post to this, because I want to do the topic justice and did not intend to take it on here. But the Lisa was a document interface, while the Macintosh was a program interface.

This is actually the same paradigm we discussed in a previous post, of functional vs object-oriented user interfaces. Graphical operating systems that we use today are nearly entirely functional, with the operating system's role fundamentally being the launching and management of programs. It might be hard to picture anything else. But most early GUI research actually did envision something else, a fully object-oriented interface that is nearly entirely structured around documents and data. The Lisa was document-oriented, and Microsoft made various efforts towards a document-oriented Windows experience. But document-oriented interfaces were ultimately unsuccessful, and none survive today [3].

Despite the disappointing compromise of the Macintosh, it set the trend for most GUI systems to follow. The Macintosh interface was WIMP (Windows, Icons, Menu, Pointer), it had drag-and-drop file management (although it opened a new window for every folder the user descended into, an especially irritating element of early GUI operating systems that was fortunately cast off by the new millennium), and it used icons on a desktop as the primary entry point, with menus at the top for access to commands.


In the eyes of most, the next major step from the Macintosh was Microsoft Windows. Windows was introduced in its first version only a year after the Macintosh and a few years after the Lisa. Early releases of Windows, and to a degree all releases of Windows outside of NT, were simply applications which ran on top of DOS. This was a logical decision at the time, to build GUIs on top of a better established foundation, but it also imposed significant limitations.

In part as a result, the early versions of Windows were primitive and simply not that interesting. They were correspondingly unsuccessful, which is why you virtually never hear any mention of Windows 1.0 or 2.0.

The reason for the poor performance of Windows 1 and 2 is actually a surprisingly interesting one. It wasn't that Windows was inferior to the Macintosh; that was a factor to a degree, but the Apple world was already highly differentiated from the PC world, and the PC world had a formidable hold on business computing that ought to have conferred a big advantage on PC software.

It was more that early releases of Windows failed because they were inferior to other DOS GUIs.

The '80s PC world

Before we can get into the history of PC GUIs, we ought to devote some discussion to the context in which they were developed. Although IBM and others developed multiple operating systems for various generations of their personal computers, and thus for their many clones, by the '80s there was a high degree of consolidation on CP/M (for non-IBM small computers) and DOS (for IBM small computers and their clones). CP/M bears mentioning more so than other non-IBM operating systems of the time because, as a result of happenstance, CP/M was highly influential on the design of DOS which was intended to have a high degree of similarity to ease transition from one to the other.

We could almost say that DOS was a new version of CP/M, but the process was politically and technically rocky and various features of CP/M fell off the truck on the way to DOS. In the same way, some features of CP/M were carried into DOS even though they probably shouldn't have been. A number of DOS's oddities can be attributed to its origin as Microsoft Imitation CP/M Product.

So of the early non-Apple GUIs, most (but not all!) were intended to run on top of CP/M or DOS.

The thing is, CP/M and DOS were both primitive operating systems by modern standards. CP/M and DOS were not multi-tasking. They did not employ virtual memory, but instead addressed all memory directly. As a natural result of these two prior facts, they provided no isolation between running programs, and so the primitive "multitasking-like" behavior that could be implemented was very prone to problems.

If we were presented with this situation today, we might declare that development of a GUI environment on top of these operating systems is simply impossible. And yet...

[1] If the name Vannevar Bush is familiar to you, there could be any number of reasons as he had a prominent career. Perhaps most notably, as director of the OSRD, he was a major figure in the early development of nuclear weapons.

[2] The obvious solution to this problem, of integrating the mouse into the keyboard, was popular on '90s laptops but is largely forgotten today. A small group of trackstick devotees have managed to keep them on "business" laptops, a great benefit to myself. I cannot imagine life without a trackstick mouse, the only civilized way to move the cursor with both hands on the home row.

[3] In fact, Apple launched several different independent GUI operating systems in a span of a few years in the early '80s, the Macintosh being the only one that survived. One day I will write about these.


>>> 2021-06-02 a history of powerpoint

A brief interlude from the topic of GUIs to talk about perhaps one of the most infamous of all GUI programs, Microsoft PowerPoint.

PowerPoint is ubiquitous, and often criticized, in most industries, but I have never seen more complete use and abuse of PowerPoint than in the military. I was repeatedly astounded by how military programs invested more effort in preparing elaborately illustrated slides than in actually, well, putting content in them. And that, in a nutshell, is the common criticism of PowerPoint: that it allows people to avoid actual effective communication by investing their effort in slides.

Nonetheless, the basic idea of using visual aids in presentations is obviously a good one. The problem seems to be one of degrees. When I competed in expository speech back in high school my "slides" were printed on a plotter and mounted on foam core. More so than the actual rules of the event, this imposed an economy in my use of visual aids. Perhaps the problem with PowerPoint is simply that it makes slides too easy. When all you need to do is click "new slide" and fill in some bullet points, there's nothing to stop the type of presenter who has more slides than ideas.

Of course that doesn't stop the military from hiring graphic designers to prepare their flowcharts, but still, I think the basic concept stands...

As my foam core example suggests, the basic idea of presenting to slides is much older than PowerPoint. I've quipped before that Corporate Culture is what people call their PowerPoint presentations. Most of the large, old organizations I've worked for, private and government, had some sort of "in-group" term for a presentation. For example, at GE, one presents a "deck." Many of these terms are anachronistic, frozen references to whichever presentation technology the organization first adopted.

Visual aids for presentations could be said to have gone through a few generations: large format printed materials, transparent slides, and digital projection. Essentially all methods other than projection have died out today, but for a time these all coexisted.

Printed materials can obviously be prepared by hand, e.g. by a sign painter, and this was the first common method of presenting to slides. Automation started from this point, with the use of plotters. As I have perhaps mentioned before the term "plotter" is a bit overloaded and today is often used to refer to large-format raster printers, but historically "plotter" referred to a device that moved a tool along vectors, and it's still used for this purpose as well.

Some of the first devices to create print materials from a computer were pen plotters, which worked by moving a pen around over the paper. HP and Roland were both major manufacturers of these devices (Roland is still in the traditional plotter business today, but for vinyl cutting). And it turns out that presentations were a popular application. The lettering produced by these devices was basic and often worse than what a sign painter could offer (but required less skill). What really sold pen plotters was the ability to produce precise graphs and charts directly from data in packages like VisiCalc.

The particularly popular HP plotters, the 75 series, had a built-in demo program that sold this capability by ponderously outlining a pie chart along with a jagged but steeply rising line labeled "Sales." Business!

These sorts of visual aids remained relatively costly to produce, though, until projection became available... large-format plotters, board to make things rigid, etc. are not cheap. Once you buy a single projector for a conference room, projection becomes a fairly cheap technology, even accounting for the cost of producing the slides.

The basic concept of projection slide technology is to produce graphics using a computer and then print them onto a transparent material which serves as film for a projector. There are a lot of variations on how to achieve this. Likely the oldest method is to produce a document using a device like a plotter (or manual illustration, or a combination) and then photographically expose it on film using a device that could be described as an enlarger set to suck rather than blow. Or a camera on a weird mount, your choice.

In fact this remained a very common process for duplication for a very long time, as once a document was exposed on film photochemical methods can be used to produce printing plates or screens or all kinds of things. There is a terminological legacy of this method at least in the sciences, where many journals and conferences refer to the final to-be-printed draft of a paper as the "camera-ready" version. In the past, you would actually mail this copy to them and they (or more likely their printing house) would photograph it using a document camera and use the film to create the plates for the printed journal or proceedings.

If you've seen older technical books or journals, you may have seen charts and math notation that were hand-written onto the paper after it was typewritten (with blank spaces left for the figures and formulas). That's the magic of "reprographics," a term which historically referred mostly to this paper to film to paper process but nowadays gets used for all kinds of commercial printing. This is closely related to the term "pasting up" for final document layout, since a final step before reprographic printing was usually to combine text blocks, figures, etc produced by various means into a single layout. Using paste.

For presentations, there are a few options. The film directly off the document camera may be developed and then mounted in a paper or plastic slide to be placed in a projector. If you are familiar with film photography, that might seem a little off to you because developed film is in negative... in fact, for around a hundred years "reversal films" have been available that develop to positive color, and they were typically used to photograph for slides in order to avoid the need for an extra development process. Kodachrome is a prominent example. Reversal films are also sometimes used for typical photography and cinematography but tended to be more complex to develop and thus more expensive, so most of us kept our terrible 35mm photography on negatives.

This approach had the downside that the slide would be very small (e.g. from a 35mm camera), which required specialized projection equipment (a slide projector). The overhead projector was much more flexible because the "film frame," called the platen, was large enough for a person to hand-write on. It served as a whiteboard as well as a projector. So more conference rooms featured overhead projectors than slide projectors, and there was a desire to be able to project prepared presentations on these devices.

This concept, of putting prepared (usually computer-generated) material on a transparent sheet to be placed on an overhead projector, is usually referred to as a "viewgraph." Viewgraphs were especially popular in engineering and defense fields, and there are people in the military who refer to their PowerPoint presentations as viewgraphs to this day. There are multiple ways to produce viewgraphs but the simplest and later on most common was the use of plastic sheets that accepted fused toner much like paper, so viewgraphs could either be printed on a laser printer or made by photocopying a paper version. When I worked for my undergraduate computer center around a decade ago we still had one laser printer that was kept stocked with transparency sheets, but people only ever printed to it by accident.

In fact, these "direct-print" transparencies were a major technical advancement. Before the special materials were developed to make them possible, overhead transparencies were also produced by photochemical means and use of a document camera and enlarger. But most large institutions had an in-house shop that could produce these with a quick turnaround, and they were still popular even before easy laser printing.

Not all projection slides were produced by photographing or copying a paper document, and in fact this method was somewhat limited and tended not to work well for color. By the '70s photosetting had become practical for the production of printing plates directly from computers, and it was also used to produce slides and transparencies. At the simplest, a photosetter is a computer display with optics that focus the emitted light onto film. In practice, many photosetters were much more complicated as they used shifting of the optics to expose small sections of film at a time, allowing for photosetting at much higher resolution than the actual display (often a CRT).

Donald Knuth originally developed TeX as a method of controlling a photosetter to produce print plates for books, and some of TeX's rougher edges date back to its origin of being closely coupled to this screen-to-film process. The photosetting process was also used to produce slides directly from digital content, and into the early '00s it was possible to send a PowerPoint presentation off to a company that would photoset it onto Kodak slides. Somewhere I have a bin of janitorial product sales presentations on slides that seem to be this recent.

The overhead projector as a device was popular and flexible, and so it was also leveraged for some of the first digital projection technology. In fact, the history of electronic projection is long and interesting, but I am constraining myself to devices often seen in corporate conference rooms, so we will leave out amazing creations like the Eidophor. The first direct computer projection method to become readily available to America's middle management was a device sometimes called a spatial light modulator (SLM).

By the 1980s these were starting to pop up. They were basically transparent LCD displays of about the right size to be placed directly onto the platen of an overhead projector. With a composite video or VGA interface they could be used as direct computer displays, although the color rendering and refresh rate tended to be abysmal. I remember seeing one used in elementary school, along with the 8mm projectors that many school districts held on to for decades.

All of these odd methods of presentation basically disappeared when the "digital projector" or "data projector" became available. Much like our modern projectors, these devices were direct computer displays that offered relatively good image quality and didn't require any of the advanced preparation that previous methods had. Digital projectors had their own evolution, though.

The first widely popular digital projectors were CRT projectors, which used a set of three unusually bright CRT tubes and optics. CRT projectors offered surprisingly good image quality (late-model CRT projectors are pretty comparable to modern 3LCD projectors), but were large, expensive, and not very bright. The tubes were often liquid cooled and required regular replacement at a substantial cost. As a result, they weren't common outside of large meeting rooms and theaters.

The large size, low brightness, and often high noise level of CRT projectors made them a bit more like film projectors than modern digital projectors in terms of installation and handling. They weren't just screwed into the ceiling; rooms were designed specifically for them. They could weigh several hundred pounds and required good maintenance access. All of this added up to mean that they were usually installed in a projection booth or in a rear-projection arrangement. Rear projection was especially popular in institutional contexts because it allowed a presenter to point at the screen without casting a shadow.

Take a close look at any major corporate auditorium or college lecture hall built in the '70s or '80s and there will almost certainly be an awkward storage room directly behind the platform. Originally, this was actually the projection booth, and a translucent rear-projection screen was mounted in the wall in between. Well-equipped auditoriums would often have both rear projection and front projection capability, as rear projection required mirroring the image. Anything that came in on film would often be front-projected, often onto a larger screen, because it was simpler and easier. Few things came in on film that someone would be pointing at, anyway.

You may be detecting that I enjoy the archaeological study of 1980s office buildings. We all need hobbies. Sometimes I think I should have been an electrician just so I could explain to clients why their motor-variac architectural lighting controller is mounted in the place it is, but then they'd certainly have found an excuse to make me stop talking to them by that point.

The next major digital projection technology on the scene was DLP, in which an array of tiny MEMS mirrors flips in and out of position to turn pixels on and off. The thing is, DLP technology is basically the end of history here... DLP projectors are still commonly used today. LCD projectors, especially those with one LCD per color, tend to produce better quality. Laser projectors, which use a laser diode as a light source, offer even better brightness and lifespan than the short arc lamps used by DLP and LCD projectors. But all of these are basically just incremental improvements on DLP projection, which made digital projectors small enough and affordable enough to become a major presence in conference rooms and classrooms.

The trick, of course, is that as television technology has improved these projectors are losing their audience. Because I am a huge dweeb I use a projector in my living room, but it is clear to me at this point that the next upgrade will be to a television. Televisions offer better color rendering and brightness than comparably priced projection setups, and are reaching into the same size bracket. An 85" OLED television, while fantastically expensive, is in the same price range as a similarly spec'd projector and 100" screen (assuming an ALR, or ambient-light-rejecting, screen here for more comparable brightness/color). And, of course, the installation is easier. But let me tell you, once you've installed an outlet and video plate in the dead center of your living room ceiling you feel a strong compulsion to use it for something. Ceiling TV?

So that's basically the story of how we get to today. Producing a "deck" for a meeting presentation used to be a fairly substantial effort that involved the use of specialized software and sending out to at least an internal print shop, if not an outside vendor, for the preparation of the actual slides. At that point in time, slides had to be "worth it," although I'm sure that didn't stop the production of all kinds of useless slides meant to impress people with stars on their shoulders.

Today, though, preparing visual aids for a presentation is so simple that it has become the default. Hiding off to the side of your slides is seen as less effort than standing where people will actually look at you. And god knows that in the era of COVID the "share screen" button is basically a trick to make it so people don't just see your webcam video when you're talking. That would be terrible.

There are many little details and variations in this story that I would love to talk about but I fear it will turn into a complete ramble. For example, overhead-based projection could be remarkably sophisticated at times. You may remember the scene at the beginning of "The Hunt for Red October" (the film) in which Alec Baldwin gives an intelligence briefing while unseen military aides change out the transparencies on multiple overhead projectors behind rear-projection screens. This was a real thing that was done in important enough contexts.

Slide projectors were sometimes used in surprisingly sophisticated setups. I worked with a college lecture hall that was originally equipped with one rear projection screen for a CRT projector and two front projection screens, both with a corresponding slide projector. All three projectors could be controlled from the lectern. I suspect this setup was rarely used to its full potential, and it had of course been removed, the pedestals for the front slide projectors remaining as historic artifacts much like the "No Smoking" painted on the front wall.

Various methods existed for synchronizing film and slide projectors with recorded audio. A particularly well-known example is the "film strip," sometimes used in schools as a cheaper substitute for an actual motion picture. Later film strips consisted of a cassette tape paired with a strip of still frames; the projector advanced to the next frame when it detected a cue tone in the audio from the cassette.
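The cue-tone trick is simple enough to sketch in a few lines. This is a hedged illustration, not a description of any particular projector: the 1 kHz cue frequency, the block size, and the energy threshold are all assumptions I've chosen for the example. The Goertzel algorithm (a cheap single-bin DFT, the same technique used for DTMF detection) is one plausible way to watch an audio stream for a tone like this.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Measure the energy at target_hz in a block of audio samples
    using the Goertzel algorithm (a single-frequency DFT)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest DFT bin to the target
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the DFT bin
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def should_advance(block, sample_rate, cue_hz=1000.0, threshold=1000.0):
    """Decide whether this audio block contains the slide-advance cue tone.
    cue_hz and threshold are illustrative values, not a real standard."""
    return goertzel_power(block, sample_rate, cue_hz) > threshold
```

A filmstrip projector would run something like `should_advance` continuously over the tape audio (in analog hardware, a tuned filter and comparator rather than code) and pulse the film-advance mechanism on each detection.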

Okay, see, I'm just rambling.
