The Internet: Past, Present, and Future

In the early 1960s, the RAND Corporation, America's foremost Cold War think-tank, faced a strange strategic problem. How could the U.S. authorities successfully communicate after a nuclear war?

Post-nuclear America would need a command-and-control network, linked from city to city, state to state, base to base. However, no matter how thoroughly that network was armoured or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would destroy any conceivable network.

How would the network itself be commanded and controlled? Any central authority, any central citadel for the network, would be an obvious and immediate target for an enemy missile; the centre of the network would be the very first place to be destroyed. RAND mulled over this grim puzzle in deep military secrecy and arrived at a daring solution, made public in 1964. In the first place, the network would 'have no central authority.' Furthermore, it would be 'designed from the beginning to operate while in tatters.'

The principles were simple. The network itself would be assumed to be unreliable at all times. It would be designed from the beginning to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis.

The particular route that the packet took would be unimportant; only final results would count. Basically, the packet would be passed from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If large portions of the network had been destroyed, it simply wouldn't matter; the packets would still stay digitally airborne, spread wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be 'inefficient' in the usual sense, but it would be extremely rugged.
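The routing principle is simple enough to sketch in a few lines of code. The Python fragment below is illustrative only, not part of RAND's design: the mesh topology and node names are invented, and a breadth-first search stands in for the adaptive, hop-by-hop forwarding the real network would perform. What it shows is the key property described above: a packet still reaches its destination after nodes are destroyed, so long as any route through the survivors remains.

```python
from collections import deque

def route_packet(links, source, dest, destroyed=frozenset()):
    """Find any hop-by-hop path from source to dest, skipping
    destroyed nodes. Returns the list of hops, or None if the
    surviving network is cut in two."""
    if source in destroyed or dest in destroyed:
        return None
    frontier = deque([[source]])  # paths still being extended
    seen = {source}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dest:
            return path  # the particular route taken is unimportant
        for neighbour in links.get(node, ()):
            if neighbour not in seen and neighbour not in destroyed:
                seen.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # no surviving route at all

# A toy mesh of equal peers: no node is "central".
links = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

print(route_packet(links, "A", "F"))                        # ['A', 'B', 'D', 'F']
print(route_packet(links, "A", "F", destroyed={"B", "D"}))  # ['A', 'C', 'E', 'F']
```

Note that nothing in the function treats any node specially: destroy the "usual" relay nodes B and D, and the packet simply finds its way through C and E instead. That indifference to any particular route is exactly what makes the scheme rugged.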

During the 1960s, this concept of a decentralized, packet-switching network was pondered by RAND, MIT and UCLA. The National Physical Laboratory in Great Britain set up the first test network on these principles in 1968. Shortly afterward, the Pentagon's Advanced Research Projects Agency decided to fund a larger, more ambitious project in the U.S. The nodes of the network were to be high-speed supercomputers, or at least what passed for supercomputers at the time. In the fall of 1969, the first such node was installed at UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET after its Pentagon sponsor. The four computers could transfer data on dedicated high-speed transmission lines. They could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities over long distances.