http://invisible-island.net/personal/
Copyright © 2015-2022,2023 by Thomas E. Dickey
Around the time I was finishing up as a graduate student, I reflected that I could measure my experience in things that I'd done twenty of:
For each category except the last, I could count 20 instances.
I counted only 16 operating systems.
With a nod to the coincidence of 16/20 versus the IBM 1620, my idea was that 20 of something was enough to demonstrate experience.
At the time, I did not write down these lists. For operating systems, I recall these possibilities:
Along the way, there were several small systems which I wrote, beginning in 1974. I may have counted those, though it seems that would have increased the count past 16.
You may have noticed the absence of "Unix". While this initial period was before "Windows", it did not predate "Unix". Actually, while I had some experience reading its manuals (and source code) starting in 1977, I did not consider myself a user.
In my initial exposure to Unix, I was told that it was provided to the university for a nominal charge, although licensing costs for commercial use were expensive. But (although money changed hands), it was not a product, because AT&T had been forbidden since 1956 to enter that or any other business area which was not "tariffed" (regulated). To my way of thinking, they were poaching on other companies because of the uneven offering (the costs for research versus commercial use), and I pointed out that the effect was to undermine products such as RT-11. (CyberTelcom gives a summary of the legal aspects.)
Besides this point of disagreement, I was not a fan of the C language. PL/M was more readable. If I had remained where I was, I might have combined the two issues, making a PL/M compiler for the PDP-11.
So you see, I passed my goal of 20 operating systems in the mid-1980s.
There was still no Unix, though that was in the background: an earlier settlement with the Department of Justice had led to AT&T's divesting its International Western Electric to ITT in the mid-1920s, as well as agreeing not to compete in this area so that their geographic areas of business did not overlap—much. There is a sanitized version of that on AT&T's website. Compare with the account on Harvard Business School's website.
The breakup of AT&T beginning in 1984 had no immediate effect on me, other than the opportunity to choose a different company for long-distance calls. As you may recall, AT&T management treated it as a victory, having a billion dollars in cash and an opportunity to show the computer industry how to run its business.
After I left ITT in 1985, I spent almost two years on a project which involved developing and porting networking software (including a driver which I ported for each machine). The end product used a custom network card via a Multibus interface on a workstation based on SVr2 (which was released in March 1984, according to Peter Salus):
There was at least one more, which I have forgotten (of our ten machines, two were NCR Towers, leaving one unaccounted for).
At the same time, though we were developing a network card, we had no connection to either arpanet or milnet, and no Unix source for reference. We had only point-to-point local networking.
I moved on, late in 1987, to the Software Productivity Consortium (SPC). This was a few months after they began staffing up, and funding (from 14 member companies) was good. There also were many computers:
Everyone had a workstation. Some were Apollos (I had a DN3000), while others had Sun workstations. Initially the latter were Sun3's; those were replaced with Sun4's over the next year.
While at the SPC, I used Apollo sr9.5 to sr10.4, which included the introduction of Apollo's dual-resident 4.3BSD and SVr3 kernels (along with the native Aegis). I count this only as one operating system.
There also were several servers for specialized use:
There also were a few Macintosh Powerbooks (I created an application using Hypercard in 1992).
I developed applications which ran on each of those systems, for various projects. Additionally, I continued developing ded, wrote diffstat, and started contributing to cproto, tin, and vile:
My interest in cproto was for making it generate lint libraries for the X Window System, in support of a project which used Ada bindings for X (which were manually constructed, and unreliable).
SPC had gotten onto Usenet. I found problems with tin which the administrator (not a developer) was not inclined to fix.
A good text-editor to my mind would be vi with multiple windows.
Not all of that was in C. I used Interleaf Lisp for one project, and Scheme (lisp-like) for another.
While developing the latter, I was working with Hal Pierson (who had worked at Bell Labs). We did not have the same viewpoint about the breakup of the Bell System (I viewed it as more of a good thing than otherwise).
On Usenet, I encountered other Bell Labs people who had not gotten that message. I sent mail to Mark Horton pointing out that his “book review” was libel, and advising him not to do that in a public place. Horton's reply was:
It's our network
Not any more. That has not been true for a long time.
That took me into early 1994; I left SPC.
After leaving SPC, things got more complicated. Until that point, anytime that I was working in a project of more than a half-dozen people, there was a separate organization which managed the hardware and general-purpose software. This changed for two reasons:
Before I joined the project, it had been in development for several years, using different platforms. In my encounter with its resident hacker versus configuration management, he and another developer had been consolidating fixes from three platforms including Ultrix and ISC Unix (as part of the project's refocusing on POSIX solutions).
Issues with configuration management aside, builds produced several of their own issues:
Each aspect had its own problems to solve, by using platform-specific makefile definitions included into generic makefiles.
The group working on the application interface appreciated my help in resolving problems making their ports work consistently.
Here is a list of operating systems which I used in paid work (1994–2000):
Outside of my paid work, I used most of the previous list, plus many more.
Initially (in March 1994), I started by setting up a new PC with
I chose Slackware partly due to a misunderstanding. Guy Cox had been involved with Linux since the previous fall, and I heard him mention “SLS” — but when I went to download Linux, there was no “SLS” but I saw Slackware, and decided I had misheard him.
While Windows and MS-DOS were purchased, I installed Slackware by downloading the roughly 50 floppy disks needed (I read on Linux.com that it was 51). I also had another 10–15 floppy disks with other useful things (including my source archives for ded, but also other programs such as spiff, which I had encountered at the summer 1988 Usenix conference).
A few months later, I redid this, adding OS/2 Warp 3 (released October 1994). At the same time, I updated Slackware to the current release 1.1.2, using CDROMs for both.
See the Linux “Linux-DOS-Win95-OS2” mini-HOWTO for a procedure.
This was a multiboot configuration (using Partition Magic); the installation order was important. If I installed MS-DOS after other systems, it would helpfully reformat every partition on the hard disk. An upgrade to MS-DOS 6.22 fixed that problem, but it was more than once a pitfall.
However, I find no mention of this in late 2015, e.g., on Microsoft's site.
It was not the first PC which I had set up; in 1990 I had set up MS-DOS 4.01 (and later — perhaps 1992 or 1993 — written some demonstration programs in QBasic). But this was my development machine.
The development machine—and a dial-in connection to ClarkNet—worked well enough at first. ClarkNet ran Solaris 2.4, and I had a shell account which gave me a way to do test-builds (within a disk quota). I stayed with ClarkNet until early 2000 before Verio's disregard for shell users prompted a move to RadixNet. (In this case, Keith Lynch's comments about RadixNet were helpful). RadixNet discontinued service in 2013.
About a year into this process, I encountered Eric Raymond while looking for someone with whom to discuss my fixes for ncurses. We started adding those in April 1995. During the initial discussion, in March, Raymond mentioned the Linux Counter. To humor him, I registered (number 14208), though there was little point in doing this after I had been using Linux for a year: for a while, low registration numbers were deemed to have some prestige value.
Before installing other systems, there was an extended period where I worked on my programs, exchanged email with other developers, and on occasion worked with packagers and their bug reporting systems.
Looking at my email, it seems that until I got heavily involved with ncurses in 1996, I mostly relied on bug reports from others to deal with FreeBSD, etc. My email from 1997 refers to my building tin using FreeBSD 2.1.5 (which had been released in mid-1996).
There were cases where I used others' systems, e.g., my use of DEC's test-drive system to work on flist during 1995.
Reviewing an email discussion with a NetBSD developer in 1998, I noted that I was able to install FreeBSD in 1996 because it came on a CDROM, but that it was not the case for NetBSD or OpenBSD. He commented that
There is a NetBSD CD now, actually, although the arrangements for buying it can be tricky. :)
Until a few years had passed, they expected people to ftp tar-files from their servers and (somehow) install those. No ISOs, no network booting. Finally, late in 2000, my email to Clark Morgan indicated that I was in the middle of installing a new development machine with these systems:
I've had little to say because much of my time is going to configure a new machine ... But since it's my machine, I've been setting it up as the successor to my current development machine (several versions of Linux, FreeBSD, NetBSD, OpenBSD, QNX, BeOS, WinME, Win2K), which takes a lot of time to set up.
That said, here are systems and versions which I used during the period 1994–2000 on my own (or guest) machines:
I also installed Plan9, but it did not work well enough to use.
Likewise, I considered installing Solaris 8 (and still have the media), but decided not to, because it required two primary partitions (one for booting, one for running—not mentioned in Max Berger's Linux+Solaris HOWTO). The BSDs each also required a primary partition, and since they were more suited to a multiboot environment on a PC, I installed them.
Unlike all of the other systems which I installed, the Openserver and Solaris systems had no network capability due to poor device support. To get Solaris 7 to install, I had to reduce the clock to 200MHz. Others encountered similar issues, e.g., this page by Dan Kegel.
After 2000, there was less to explore. I consider this an era of consolidation.
One of the reasons that I started working with Slackware in 1994 was that I wanted to become more knowledgeable about networking. As a developer, I would not have that, because networking was done by administrators. The two do not mix, often.
That led into a lot of places. But in one area (aside from learning how to configure sendmail), I decided it would be too much work. That was running mailing lists. For those who do not know, doing that really demands keeping servers running all the time, which is expensive in both time and money.
So I use mailing lists run by others. Initially (during the 1990s), those were set up by others. Paul Fox set up a mailing list for vile on his own machine. Zeyd Ben Halim set up a mailing list for ncurses on netcom.com. Bob Izenberg set up a mailing list for lynx on sig.net. After several years, I find myself running those mailing lists.
As discussed on the vile-users mailing list and other places, I found that ClarkNet was no longer a viable host:
From: "T.E.Dickey" <dickey@clark.net>
Subject: Re: ftp-login at www.clark.net
To: lindig@eecs.harvard.edu (Christian Lindig <lindig@eecs.harvard.edu>)
Date: Thu, 4 May 100 16:17:32 -0400 (EDT)
Cc: vile-users@foxharp.boston.ma.us (Paul Fox's list)

> > I have noticed that the official vile FTP site
> > ftp://www.clark.net/pub/dickey/vile/
> > no longer accepts anonymous ftp logins. Have I missed something?

my ISP has decided to improve my shell account by removing some of the
services that I'm paying for: anon-ftp is the most visible, but most of
the shell tools will be crippled or missing. (I'm actually still hanging
out on the old machine while I'm working to setup an account with a new
vendor - once they nuke this box there won't be much usable on the new
machine other than the shell prompt).

> -- Christian
>
> --
> Christian Lindig            Harvard University - EECS
> lindig@eecs.harvard.edu     33 Oxford St, MD 242, Cambridge MA 02138
> phone: (617) 496-7157       http://www.eecs.harvard.edu/~lindig/

--
Thomas E. Dickey
dickey@clark.net
http://www.clark.net/pub/dickey
I moved my website from ClarkNet to Heller Information Services in June 2000. Later, in May 2001, I registered my domain invisible-island.net so that I could have an MX record, needed for sending mail from my local network.
Some packagers, finding that invisible-island.net resolved to dickey.his.com, continued using the latter, but the “official” way to address my website has been through my domain.
The vile-users mailing list, of course, was set up by Paul Fox (in October 1996). He continued operating that until January 2006, when he set up the current (as of November 2022) mailing list on nongnu.org. Paul Fox and I are the moderators of this list.
The lynx-dev mailing list also on nongnu.org was set up in September 2003 by Bob Izenberg to replace an older list on sig.net (which was apparently set up sometime in 1996, according to Grobe's page). After he set up this list, I imported the older mailing list's archives from Russell McOrmond's archive on flora.ca. Bob Izenberg moderated this for a few years, but was less responsive. I became the list moderator in October 2006.
The bug-ncurses mailing list has been in the same place, more or less, since February 13, 1998. That was set up by Florian La Roche to replace the ncurses mailing list provided for me by Keith Bostic on bsdi.com in April 1997. However, when switching to mailman in September 2000, the bug-ncurses mailing list archives were lost. I became the list moderator in 1999.
Up until around 2000, I could anticipate being directly involved with ports of my programs. There were some exceptions:
In May 1998, someone at DataFocus contacted me about updating the NutCracker port of ncurses (1.9.9g) to 4.0.
I declined (not enough time), but offered to advise. End of discussion.
A few others have been involved in porting programs to z/OS, e.g., see Lynx (Paul Gilmartin) and VMS XTerm (David Mathog).
I've provided some advice, collected the resulting source. But I've had no hands-on experience with those ports.
NutCracker was (one of several) “layers” or “environments” or (my preferred term) “client systems”: an add-on to an operating system which helps it to imitate another type of system.
DOS Extenders (such as I used in the early 1990s with vile for MS-DOS) are arguably relevant. Later, I used EMX on OS/2. That area evolved into things like DJGPP and Cygwin:
Paul Fox used DJGPP in the early 1990s for vile (commenting to me in 1995 that I should do that too).
He preferred the DOS extender (cwsdpmi) written by Charles Sandmann, whom I encountered later when porting back to VMS.
I did get involved, using DJGPP a few times for building vile (e.g., when making releases), but copying files to/from the MS-DOS partition was a nuisance.
Although some Lynx developers found it a suitable platform, making networking work for that in Windows NT was daunting.
However, I found DJGPP useful for testing cross-compiling, going as far as constructing a cross-compiler (with gcc 3.3) and using that from 2003 to 2010 to investigate how to port ncurses to Windows.
This was the subject of discussion for several years. In 1998, I discussed Cygwin beta19 for its suitability with vile and ncurses, concluding that it was not ready.
Doug Kaufmann made changes to Lynx to support Cygwin, though continuing to be involved with its DJGPP configuration.
Finally, around 2000, Cygwin took off, with (relatively) few bug reports.
In the early 2000s, there were a few other Windows add-ons of interest:
Rather than gcc, UWIN used the Microsoft compiler. I was interested because the initial version included a later version of the AT&T curses library and terminal database.
I was able to set up UWIN in 2002-2003 with Windows 2000. However, a Windows upgrade broke UWIN, and no subsequent version installed successfully. Later versions of UWIN (see source code) included ncurses' terminal database, making it less interesting.
This was more successful than UWIN, becoming Services for Unix, and becoming available in Windows 7 Ultimate. I have a virtual machine for that, which I used to test/fix a terminal description for ncurses (see FAQ).
After a pause of several years, Microsoft introduced the Windows Subsystem for Linux. This was initially something like Cygwin, able to work with the host's filesystem, but later changed to a virtual machine which lacks that ability. I have (Windows) virtual machines with both of those flavors.
My interest in DJGPP came to an end when Juergen Pfeifer proposed using MinGW for developing a port of ncurses to Windows. That led into MinGW-64 and MSys2. As of November 2022, MinGW appears to be dormant (except as a convenient cross-compiling environment on some Linux hosts and FreeBSD), but its successors are still viable.
I prefer developing on the native systems rather than client systems. Some of those I can manage on my own computers. For others (cost is an object), I have used systems managed by others.
Besides hosting the Lynx webpage and PRCS archives (1999-2015) at lynx.isc.org, the ISC machine was used for development:
The OSF1 machine was only a loaner, and ISC has its costs to account for. I had my own FreeBSD machines, over which I had more control.
Those were Unix machines. I still had an interest in VMS, e.g., working on the port of tin to that platform. John Malmberg suggested using eisner.org in October 2000 to work on the socketshr library which tin used. I found that helpful, though the 5Mb limit on diskspace made it difficult to use for larger programs such as vile.
As mentioned in the discussion of flist, I found the HP test-drive machines useful from 2003 to 2008. Here is a script which I used to organize my connections to those machines:
#!/bin/sh
# $Id: hp-testdrive,v 1.19 2006/12/13 23:58:41 tom Exp $
export DIALOGOPTS=--single-quoted
USR="tedickey"
PWD="OBSOLETE"
OUT=/tmp/$$
HP="testdrive.hp.com"
trap "rm -f $OUT" 0 1 2 5 15
dialog --title "HP Test-Drive" \
--radiolist "Choose a machine ($USR/$PWD)" 0 0 0 \
td192.$HP "*HP HP-UX 11i 11.11 rp2470 2 PA-RISC 8700, 750 MHz" off \
td191.$HP "*HP HP-UX 11i v2 rp3410 2 PA-RISC 8900, 800 MHz" off \
td164.$HP "HP HP-UX 11i v2 Integrity rx1620 2 Itanium II, 1.6GHz" off \
td176.$HP "HP HP-UX 11i v2 Integrity rx1620 2 Itanium II, 1.4GHz" off \
td183.$HP "HP OpenVMS 8.3 Integrity rx1620 2 Itanium II, 1.6 GHz" off \
td184.$HP "HP OpenVMS 8.3 Integrity rx1620 2 Itanium II, 1.6 GHz" off \
td140.$HP "Debian GNU Linux 3.1r2 ProLiant DL360 G2 2 Pentium III, 1.4 GHz" off \
td156.$HP "Debian GNU/Linux 3.1r2 Integrity rx2600 2 Itanium II, 900MHz" off \
td153.$HP "Mandriva Corp Server 4.0 2 Dual Core Xeon 3.6GHz" off \
td163.$HP "Red Hat Ent Linux AS 4.0 ProLiant BL20p 2 Xeon, 3.0GHz" off \
td177.$HP "Red Hat Ent Linux AS 4.0 Integrity rx1620 2 Itanium II, 1.6 GHz" off \
td178.$HP "Red Hat Ent Linux AS 4.0 16 Itanium II 1.5GH" off \
td189.$HP "Red Hat Ent Linux AS 4.0 ProLiant DL145 2 AMD Opteron 2.2GHz" off \
td185.$HP "Red Hat Ent Linux AS 4.0 ProLiant DL560 4 Xeon,3.0GHz" off \
td158.$HP "Red Hat Ent Linux AS 5.0 2 Xeon 1.4GHz" off \
td159.$HP "Red Hat Ent Linux AS 5.0 2 AMD Opteron 2.4GHz" off \
td160.$HP "Red Hat Ent Linux AS 5.0 2 Xeon 1.4GHz" off \
td162.$HP "*SuSE Enterprise Server 10 ProLiant BL20p 2 Xeon, 3.0GHz" on \
td186.$HP "SuSE Enterprise Server 10 ProLiant DL140 2 Xeon, 3.2GHz" off \
td190.$HP "SuSE Enterprise Server 10 ProLiant DL145 2 AMD Opteron 2.2GHz" off \
td179.$HP "SuSE Enterprise Server 10 ProLiant DL585 4 AMD Opteron 2.2GHz" off \
td187.$HP "SuSE Enterprise Server 10 Integrity rx1620 2 Itanium II, 1.6 GHz" off \
td150.$HP "FreeBSD 5.4 Integrity rx1620 2 Itanium II, 1.6 GHz" off \
td152.$HP "*FreeBSD 5.4 ProLiant DL360 G2 2 Pentium III, 1.4 GHz" off \
td183.$HP "VMS 8.3/Oracle Rdb T7.2 Integrity rx1620 HP OpenVMS 8.2" off \
td146.$HP "XC on CP6000 with SFS Integrity rx2600 Red Hat Ent Linux AS 3.0" off \
$* 2>$OUT
retval=$?
choice=`cat $OUT`
case $retval in
0)
    echo "'$choice' chosen."
    echo connect $USR $PWD or tPCeRFl
    exec telnet -l $USR $choice
    ;;
*)
    echo "Cancel pressed."
    exit 1
    ;;
esac
As you might expect, I ignored the Linux and FreeBSD machines, using only the OpenVMS and HP-UX machines. I used the former to provide updates for some of my programs to the 8th (and last) OpenVMS Freeware CD:
At the time it went away, I was working on Lynx, improving the port to VMS.
Although the DEC VMS machines are no longer available, there are still a few vendor-Unix machines that I am able to use for test-builds. Since October 2000, Albert Chin has allowed me to use his machines at The Written Word. Some are no longer there, but over the years I have used these systems:
He also mentioned some Linux systems once or twice, but I have my own.
Even managing my own computers, some operating systems were not viable. Over the years, I used different programs (sometimes combining them) for multibooting a PC to support these systems:
During the early 2000s, I managed to get 15 operating systems installed on a PC, and 10 each on two other machines (total 35). This was all using MBR partitioning. That required some planning, because some would only install in a primary partition (MBR allows only four, and only three alongside an extended partition), while some (such as QNX) had constraints on the disk size.
Around 2005-2006, I read that someone had installed about a hundred systems on one machine. Probably that was talking about GPT, but I did not encounter it directly until 2010.
Even allowing for accommodating the hardware, some systems are harder to keep working:
At the beginning of 2010, I got directly involved with virtual machines.
In my $dayjob, there was limited involvement from 2004-2006. After that, virtual machines gradually became an alternate way to do development. Starting around 2010, this became routine, influencing the way I did development outside my $dayjob.
Along the way (2006-2007), I investigated Microsoft's Virtual PC and VMware Server, setting up both on my own machines, but finding both lacking, unable to support the operating systems that I was interested in. The former was unable to run OS/2, OpenBSD or QNX, and the latter was unable to run X in OpenBSD. OpenBSD (see above) was one of the systems which required a primary partition. Putting it into a virtual machine would solve that problem.
Virtual PC's strength was for Windows, but I already had different versions of that running (Windows 2000, XP, 7). OS/2 support was an afterthought on the part of Microsoft's developers.
But in 2010, I found myself using virtual machines most of the time. On my own machines, what happened was that one of my users for vile reported a bug for a 64-bit machine. I owned only 32-bit hardware, but resolved to see if I could get a 64-bit machine (October 2009).
I was surprised to find that I could get a 64-bit laptop for a reasonable cost. It came bundled with Windows 7 Home (which I replaced with Professional), and then set out to repartition the disk.
That was my first exposure to GPT. I used a tool (probably System Commander) to shrink the Windows partition to a third of the disk, leaving space for several machines in the extended partition. Then I installed a Linux system (probably Debian), allocating a 56Gb partition. On booting into it, that showed only 8Gb. Going a little further, fdisk complained about the configuration, and I reset it (to MBR). As a result, Windows did not boot. Nor could fdisk change the disk back to GPT. Nor could I reinstall Windows on that disk.
I set that aside, for about a week, to see how to get Windows back on the machine. That came by an unexpected route. During the “snowmageddon” in early February, I had some spare time to read. One of the things I read was a discussion of software (such as VMware) for running virtual machines. The book described Citrix XenServer, and commented that it had a free license for personal use. The next day, I downloaded the software and requested a license. I received that on February 12.
XenServer worked with certain machines which had appropriate kernel support. That included Windows, Linux, and a few others such as FreeBSD.
I was able to install ten machines during March/April 2010:
Windows got the largest partition (120Gb on a 300Gb drive), and the others split up the remainder equally. Citrix provided a Windows GUI for installing virtual machines, but for simple startup and shutdown, ssh into the Xen server worked well enough.
Up until that point, I maintained my local network by editing /etc/hosts as needed, but found that becoming too much work. Since mid-2012, I have used bind9, which (since I keep the DNS files in RCS) lets me keep track of when I set up a new machine, or remove an old one.
Later, by early 2014, I had updated (reusing the existing partitions, installing new operating systems rather than upgrading) a few:
Incidentally, there was no limit on the number of virtual machines that I could install with that free license. Much later (in 2022), I see that it now has a limit: two.
Aside from renewing the license once a year, I used XenServer without incident for more than four years.
However, my Windows 7 virtual machine failed to boot in June 2014 due to a disk error. Rather than attempt to reinstall Windows 7 with XenServer, I repurposed that laptop (for CentOS 7) and got a newer laptop to run Windows 8.
I installed Windows 8 because I had read that it provided Microsoft's Hyper-V:
It turned out that Hyper-V was provided in Windows 8.1, so I upgraded to be able to run it.
I also wanted to use VirtualBox as a second method on that laptop. But I found that I could not use both on the same host. (Much later, in 2022, this has changed slightly with Windows 11).
I found that Hyper-V was not ready for use. A Windows VM ran well enough, but a CentOS VM did not.
The mouse did not work. That was a known problem (not fixed at that point in time).
While XenServer was still viable, the laptop's Windows 8.1 operating system was intact. There was no point in discarding that. So I installed VirtualBox 4.3.12 and forgot about Hyper-V for the time being.
Windows laptops were not the only hardware that I used.
Initially (in the 1990s), I bought desktop systems, and later (around 2010), acquired a few Apple Mac mini's. The desktop machines (which were large and noisy) had several operating systems installed using some flavor of multiboot. The Mac mini machines are small and quiet. I have used those for virtual machines, since January 2011 (initially with Parallels, and later adding VMware Fusion).
In August 2012, I decommissioned my older desktop machines, repurposing the newer ones as file servers. At that point, I had about as many virtual machines as multiboot:
That is, there were 46 operating systems installed via multiboot. On the machines hosting virtual machines, I had another 40 (30 on Parallels, 10 on XenServer).
To supplant the desktop machines, I got another Mac mini. I installed VMware Fusion 4.1.2 on this machine, to diversify. Later, in March 2015, I installed VirtualBox 4.3.26, which is still free for use on both MacOS and Windows as of December 2022. The cost of Parallels and VMware has gone up somewhat.
Installing virtual machines is simpler than setting up multiboot systems. It helps a lot that the hardware costs have gone down drastically over the time that I have been developing:
My college's IBM 1620 model 2 was bought (used) for $17,000 in 1970.
A few years later, a new DEC MINC cost about the same amount.
While at ITT ATC (in 1983 or 1984), development of System 12 slowed down after a release. But the various departments at the ATC shared in the costs for the data center. I was told that the accounting department proposed getting rid of the mainframe and just using PCs. That was not practical — then.
By 1990, PCs had come down in price and began to have enough memory and disk to be viable for development.
Networking was still slow, or costly. In 1996, it took 5 hours to download the 30Mb of XFree86 source code on a 56Kb modem.
In 2001, I was able to get a cable connection for the Internet (an upgrade for television consumers) which was ten times faster. Its speed was not uniform (and would pause unexpectedly for a long time). But that 30Mb would take only 15 minutes.
The cable connection sufficed for several years before being replaced by fiber optics (from the telephone company, who also would like me to watch television). My fiber optic connection is reliable (except in power outages), and does not pause. It is 50 times faster than the cable connection. That 30Mb would take less than a minute (sometimes only a few seconds).
Doing this doubled my telephone bill. Ten years previously, the difference would have been five to ten times as much.
With a high-speed consumer network connection, I can download a CD- or DVD-ISO in a few minutes. Installing an operating system takes longer than the download.
Because these machines all are networked, and have addresses assigned in my bind9 configuration, I can get a rough idea of how many installs I have done, using rcshist on the DNS file, piping that through diffstat,
 DNS-forward | 875 ++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 481 insertions(+), 394 deletions(-)
As you can see, there has been a lot of churn, because those addresses come from a small pool.
Not all systems install readily using any of Parallels, VirtualBox or VMware (much less Hyper-V). These have not been satisfactory:
Hurd is rather fragile. A Debian developer packaged it. The result could not handle networking.
Plan9 installed properly only on one of my 1990s machines.
Schillix was able to boot, but its command-line utilities dumped core. Often.
On the other hand, with virtual machines I was able to experiment and (after several tries) got Minix to work for me. The reissue (more or less) of BeOS as Haiku works well in a virtual machine. So there is some progress.
I read the initial BSTJ papers about Unix in 1978 (the R&D center's library had a complete set of BSTJ), but found those to be unsatisfying because their authors made little attempt to relate their work to other systems with which they were familiar. It was left for others to provide research, comparisons and analysis. Ritchie's CACM paper is one of the better ones:
The UNIX Time-Sharing System (D. M. Ritchie and K. Thompson, CACM, 1974).
During the 1980s, there were several books published (Prentice-Hall and Wiley come to mind initially, later O'Reilly), largely dealing with how the Unix system was designed, how to program for it. I found Bach's book
The Design of the Unix Operating System (Maurice Bach, Prentice-Hall, 1986).
to be helpful in showing how to compute the number of blocks used by a file (based on the inodes), for ded. Later, that information was provided by the stat function in st_blocks. My change-log indicates that was still not prevalent in 1994. So I allowed for computing it (see source).
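The difference between the two approaches can be seen from the shell: st_blocks counts allocated 512-byte units, which need not match what you would compute from the file's size. This is a minimal sketch, assuming GNU coreutils stat; the file name demo.tmp is arbitrary, not anything from ded's source.

```shell
# Create a small file, then ask stat(1) for what stat(2) reports
# in st_blocks (allocated 512-byte units) versus st_size (bytes).
dd if=/dev/zero of=demo.tmp bs=4096 count=1 2>/dev/null
size=$(stat -c '%s' demo.tmp)     # %s: file size in bytes
blocks=$(stat -c '%b' demo.tmp)   # %b: allocated 512-byte blocks
echo "size=$size blocks=$blocks"
rm -f demo.tmp
```

On a typical filesystem a 4096-byte file occupies eight 512-byte blocks, but sparse files occupy fewer and indirect blocks can add more, which is why reading st_blocks (rather than doing arithmetic on st_size) gives a directory editor the right answer.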
But for surveys of various operating systems, there was not much. Now, those tend to be retrospectives of Unix systems, e.g.,
A History of UNIX before Berkeley: UNIX Evolution: 1975-1984 (Ian F. Darwin, Geoffrey Collyer, 1984).
Twenty Years of Berkeley Unix — From AT&T-Owned to Freely Redistributable (Marshall Kirk McKusick, Open Sources, O'Reilly, 1999).
While all of that was going on, I was developing programs on a stand-alone PDP-11 running RT-11. It was interesting to read about Unix, which was apparently well-funded, since its developers could afford to make their system rely upon terminals that would print lowercase characters (perhaps the Model 37 mentioned in Teletype Machines (Columbia University Computing History)). When I modified the RT-11 assembler to allow for mixed-case in comments, I did that to take advantage of the GT-40 display on the system where I could work at night. The DECwriter I and II terminals that I had used for printing did not handle that (but that changed in 1977 with a DECprinter I LA180).
Later (in the late 1990s), as was the case for Bach and inodes, I found Richard Stevens' books (e.g., Addison-Wesley) useful in a few cases (such as the discussion of pseudo-terminals in seeing how to revise the -S option for xterm). But for a while, Prentice-Hall was the dominant publisher for Unix-related topics.
Not all of their books were suitable. For some (such as Rochkind's Advanced Unix Programming), the presentation was too shallow to delve into.
In between (in depth versus the opposite), I encountered a few others in the Prentice-Hall series. For example, I bought a copy of Portable C and Unix System Programming in 1987 at a mall bookstore. Several years later, I noticed that Eric Raymond stated “J.E.Lapin” was really just him (i.e., no co-authors), but found that dubious because it was not written in his usual (bombastic) style. Rather, I concluded that there may have been at least one other author (in retrospect, perhaps Jon Tulk, who was a member of the X3J11 committee) who was no longer around to dispute this, or else a good technical editor to tone it down. In later comments, Raymond repeated that assertion. The review and Roland Buresund's page agree with my assessment.
Raymond is of course the accepted author for other works for which I have found no use. For example, an anonymous answer and quote from Why did Unix become open source/free? (in contrast to my experience dealing with Raymond):
From The Art of Unix Programming (emphasis added):
After the [1974] paper, research labs and universities all over the world clamored for the chance to try out Unix themselves. Under a 1958 consent decree in settlement of an antitrust case, AT&T (the parent organization of Bell Labs) had been forbidden from entering the computer business. Unix could not, therefore, be turned into a product; indeed, under the terms of the consent decree, Bell Labs was required to license its nontelephone technology to anyone who asked. Ken Thompson quietly began answering requests by shipping out tapes and disk packs — each, according to legend, with a note signed “love, ken”.
There is much more relevant information in that chapter; its title is "Origins and History of Unix, 1969-1995". Highly recommended reading (along with the rest of the book!) :)
I recall that Eric Raymond stated in one of his webpages that he had used a hundred different operating systems. Perhaps. He may have counted.
I passed my goal of 20 operating systems long ago, but I stopped counting long ago.