Network diagrams are a popular way of visualizing social and corporate relationships. Network theory has been used to model telecommunications performance, and especially the Internet. Communications networks increase in value as the number of connections increases, and Metcalfe’s Law attempts to quantify that increase in value.
Optimizing Metcalfe’s Law
For a network with n members, Metcalfe’s Law posits that the total value of that network is proportional to
n * (n-1). Metcalfe’s Law as applied to the Internet, and even to the telephone network, is only valid if all connections have equal value. That assumption is incorrect: some Internet connections are hardly used and contribute little value. Of course, there are reasons to connect everyone that are not based on monetary value! Rural electrification is an example.
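As a sketch of the arithmetic behind that formula (the function name is mine, not from any standard library):

```python
def metcalfe_connections(n: int) -> int:
    """Number of distinct pairwise connections among n members:
    n * (n - 1) / 2, counting each pair once."""
    return n * (n - 1) // 2

# Doubling the membership roughly quadruples the connection count,
# which is why Metcalfe's Law treats value as proportional to n * (n - 1).
print(metcalfe_connections(10))   # 45
print(metcalfe_connections(20))   # 190
```

The proportionality constant (whether each pair is counted once or twice) doesn't matter for the law's claim, which is about the quadratic growth rate.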
Andrew Odlyzko’s article about Metcalfe’s Law (IEEE Spectrum, 2006) was written with a keen awareness of the 2000 dotcom bubble. Odlyzko demonstrated how Metcalfe’s Law’s applicability could be limited by the equal value assumption, among others. I read it, and wondered: What is the Internet’s optimal number of nodes and connections? When did the value of a larger Internet network start diminishing?
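The IEEE Spectrum article proposed n·log(n) as a more realistic valuation than n·(n−1). A quick numerical comparison (my own illustration, not taken from the article) shows how far apart the two models drift:

```python
import math

def metcalfe_value(n: int) -> int:
    """Metcalfe's Law: value proportional to n * (n - 1)."""
    return n * (n - 1)

def nlogn_value(n: int) -> float:
    """The n * log(n) alternative argued for in the IEEE Spectrum article."""
    return n * math.log(n)

# The ratio between the two grows without bound, so under the n*log(n)
# model each additional node adds far less value than Metcalfe predicts.
for n in (100, 10_000, 1_000_000):
    print(n, metcalfe_value(n) / nlogn_value(n))
```

Under the n·log(n) model there is no point at which a larger network loses value outright; it simply gains value much more slowly than the quadratic model suggests.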
At some point, ISPs (Internet Service Providers) stopped charging users for access, as the business of delivering content became more valuable than providing greater network connectivity. AOL charged for service until 2002 or so.
I thought it would be helpful to begin with a timeline of Internet growth, by number of sites connected and corresponding events, as a starting point for determining incremental value. I searched for a streamlined history, but the best that I could find is provided by The Computer History Museum, and it isn’t quite linear. It also has a lot of technical detail that isn’t relevant for verifying Metcalfe’s Law. I decided to construct a timeline of dates and nodes, from which connectivity can be determined. I am writing this partly for myself, for reference purposes. (I don’t know how to value connectivity, not yet.)
One node was a computer at UCLA in Los Angeles, run by researchers known as the Network Working Group. Their job was to develop a host-to-host protocol, which eventually became known as the collection of programs comprising NCP, or Network Control Protocol. (Each site connected to the network through an IMP, or Interface Message Processor.)
The other node on the two-node Internet was a computer at the Stanford Research Institute (SRI) in Menlo Park.
On October 29, 1969, the first Internet transmission occurred, i.e. the first ever login to a remote host via the ARPANET. It was documented in a handwritten log.
In 1970, two more nodes were added to the ARPANET: the University of California, Santa Barbara, and the University of Utah. Both sites worked on interactive graphics, including the problem of screen refresh over the net.
The ARPANET began 1971 with 14 nodes in operation.
UCLA’s Network Working Group completed the Telnet protocol and made progress on the file transfer protocol (FTP) standard. By the end of the year, the ARPANET had 19 nodes.
Two years later, in 1973, the ARPANET had grown to 30 connected institutions. Users included corporations and consulting firms (e.g. Xerox PARC and MITRE) as well as government sites like NASA’s Ames Research Laboratories, the National Bureau of Standards, and U.S. Air Force research facilities. Packet switching had proven to be a viable technology.
Other network connectivity programs were developed, such as packet radio (PRNET) and a satellite network (SATNET) enabling sites in Norway and the UK to connect. ARPANET, PRNET, and SATNET all had different interfaces, packet sizes, conventions, and transmission rates, so linking them together was a problem. In response, Robert Kahn and Vint Cerf designed a net-to-net connection protocol, and in September 1973 they presented their first paper on the new Transmission Control Protocol (TCP).
At Xerox PARC, Bob Metcalfe (remember Metcalfe’s Law?) was working on a wire-based system for Local Area Networks (LANs). It became Ethernet.
The ARPANET geographical map now had 61 nodes. Daily traffic was about 3 million packets.
The Domain Name System (DNS) was developed and recommended for the user@host addressing system. The number of computers connected via these hosts was much larger. Growth accelerated with the commercialization of Metcalfe’s Ethernet.
DNS was introduced across the Internet, with the domains of .gov, .mil, .edu, .org, .net, and .com.
The 56 Kbps backbone between NSF centers led to the creation of regional feeder networks. With the backbone in place, these regionals began building a hub-and-spoke infrastructure.
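The economics of hub-and-spoke are easy to see, and they cut against the equal-value assumption in Metcalfe’s Law: connecting k sites through one hub needs only k links, versus k(k−1)/2 direct links for a full mesh. A small sketch (function names are mine):

```python
def full_mesh_links(k: int) -> int:
    """Direct links needed to connect every pair of k sites."""
    return k * (k - 1) // 2

def hub_and_spoke_links(k: int) -> int:
    """Links needed when each of k sites connects only to a central hub."""
    return k

# The gap widens quadratically as the number of sites grows.
for k in (5, 10, 50):
    print(k, full_mesh_links(k), hub_and_spoke_links(k))
```

The trade-off is that every spoke-to-spoke conversation now traverses the hub, so the hub’s capacity, not the number of links, becomes the constraint.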
In the beginning of 1986, there were about 2,000 hosts on the network.
USENET newsgroups such as ‘alt.sex’ and ‘alt.drugs’ were still not allowed.
The NSF, realizing the rate and commercial significance of the Internet’s growth, signed an agreement with Merit Network, IBM, and MCI, and began implementing its T1 backbone between supercomputing centers.
USENET newsgroups became available for the PC.
In early 1987, the number of hosts passed 10,000. Network management was becoming a major issue, and SNMP (Simple Network Management Protocol) was chosen for remote management of routers.
As of January 1988, there were 30,000 hosts. The upgrade of the NSF backbone to T1 was completed. The Internet started to become more international with the connection of Canada, Denmark, Finland, France, Iceland, Norway and Sweden.
Later in 1988, the Morris worm burrowed into 6,000 of the 60,000 hosts now on the network. DARPA formed the Computer Emergency Response Team (CERT) to deal with future incidents.
MCI Mail and CompuServe connected their commercial email systems to the Internet and each other for the first time. This was the start of commercial Internet services in the United States.
In Switzerland at CERN, Tim Berners-Lee proposed a hypertext system that would run across the Internet on different operating systems. This was the World Wide Web. I stopped here.
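To close the loop on the original question, the dates-and-nodes timeline above can be tabulated and the implied Metcalfe connectivity computed directly. The data points below are simply the counts collected in this post (they mix nodes and hosts, and are approximate), so this is a starting point for valuation, not the valuation itself:

```python
# (year, approximate node/host count) as collected in the timeline above
timeline = [
    (1969, 2),        # UCLA and SRI
    (1971, 19),       # ARPANET at year's end
    (1973, 30),       # connected institutions
    (1987, 10_000),   # hosts
    (1988, 60_000),   # hosts at the time of the Morris worm
]

def connections(n: int) -> int:
    """Potential pairwise connections: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Potential connectivity explodes long before anyone could argue that
# each connection carries equal value.
for year, n in timeline:
    print(year, n, connections(n))
```

Valuing each of those connections (rather than merely counting them) is the part I still don't know how to do.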
* Some images and content sourced from History of the internet from 1980 through the end of the decade via the Computer History Museum