The Initial Internetting Concepts
The original ARPANET grew into the
Internet. The Internet was based on the idea that there would be multiple
independent networks of rather arbitrary design, beginning with the ARPANET as
the pioneering packet switching network, but soon to include packet satellite
networks, ground-based packet radio networks and other networks. The Internet
as we now know it embodies a key underlying technical idea, namely that of open
architecture networking. In this approach, the choice of any individual network
technology was not dictated by a particular network architecture but rather
could be selected freely by a provider and made to interwork with the other
networks through a meta-level "Internetworking Architecture". Up
until that time there was only one general method for federating networks. This
was the traditional circuit switching method where networks would interconnect
at the circuit level, passing individual bits on a synchronous basis along a
portion of an end-to-end circuit between a pair of end locations. Recall that
Kleinrock had shown in 1961 that packet switching was a more efficient
switching method. Along with packet switching, special purpose interconnection
arrangements between networks were another possibility. While there were other
limited ways to interconnect different networks, they required that one be used
as a component of the other, rather than acting as a peer of the other in
offering end-to-end service.
In an open-architecture network, the
individual networks may be separately designed and developed and each may have
its own unique interface which it may offer to users and/or other providers,
including other Internet providers. Each network can be designed in accordance
with the specific environment and user requirements of that network. There are
generally no constraints on the types of network that can be included or on
their geographic scope, although certain pragmatic considerations will dictate
what makes sense to offer.
The idea of open-architecture
networking was first introduced by Kahn shortly after having arrived at DARPA
in 1972. This work was originally part of the packet radio program, but
subsequently became a separate program in its own right. At the time, the
program was called "Internetting". Key to making the packet radio
system work was a reliable end-end protocol that could maintain effective
communication in the face of jamming and other radio interference, or withstand
intermittent blackouts such as those caused by being in a tunnel or blocked by the
local terrain. Kahn first contemplated developing a protocol local only to the
packet radio network, since that would avoid having to deal with the multitude
of different operating systems, and continuing to use NCP.
However, NCP did not have the
ability to address networks (and machines) further downstream than a
destination IMP on the ARPANET and thus some change to NCP would also be
required. (The assumption was that the ARPANET was not changeable in this
regard). NCP relied on ARPANET to provide end-to-end reliability. If any
packets were lost, the protocol (and presumably any applications it supported)
would come to a grinding halt. In this model NCP had no end-end host error
control, since the ARPANET was to be the only network in existence and it would
be so reliable that no error control would be required on the part of the
hosts. Thus, Kahn decided to develop a new version of the protocol which could
meet the needs of an open-architecture network environment. This protocol would
eventually be called the Transmission Control Protocol/Internet Protocol
(TCP/IP). While NCP tended to act like a device driver, the new protocol would
be more like a communications protocol.
Four ground rules were critical to
Kahn's early thinking:
- Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
- Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
- Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
- There would be no global control at the operations level.
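A minimal sketch may help make these ground rules concrete. The Python fragment below is purely illustrative (the network numbers, interface names, and packet fields are invented for this example, not taken from the original design): the gateway forwards each packet on the basis of its destination network alone, retains no per-flow state, and simply drops what it cannot route, leaving recovery to the source as the best-effort rule requires.

```python
# Illustrative sketch of a stateless "black box" gateway (hypothetical names).
# It forwards each packet independently, keeps no information about flows,
# and drops packets it cannot route; recovery is left to the end hosts.

ROUTES = {            # destination network number -> outgoing interface
    1: "arpanet",
    2: "packet_radio",
    3: "satnet",
}

def forward(packet: dict):
    """Forward one packet on a best-effort basis; return None to drop it."""
    interface = ROUTES.get(packet["dst_network"])
    if interface is None:
        return None                 # unknown network: silently drop
    return (interface, packet)      # hand the packet, unchanged, to that link

# A packet for host 42 on network 2 goes out over the packet radio interface.
print(forward({"dst_network": 2, "dst_host": 42, "payload": b"hello"}))
```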
Other key issues that needed to be
addressed were:
- Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
- Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
- Gateway functions to allow them to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
- The need for end-end checksums, reassembly of packets from fragments and detection of duplicates, if any (a small sketch of these appears after this list).
- The need for global addressing.
- Techniques for host-to-host flow control.
- Interfacing with the various operating systems.
- There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
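Several of these issues lend themselves to a short sketch. The Python fragment below is hypothetical (the toy checksum and field names are assumptions, not the mechanisms that were actually standardized): a receiver verifies an end-end checksum on each fragment, ignores duplicates, and reassembles the message from fragments by their offsets, while corrupted or missing pieces are simply left for the source to retransmit.

```python
# Hypothetical receiver-side sketch: end-end checksum, duplicate detection,
# and reassembly of a message from fragments identified by byte offset.

def checksum(data: bytes) -> int:
    """Toy end-end checksum (illustrative only, not the real Internet checksum)."""
    return sum(data) % 65536

received = {}                        # offset -> fragment payload

def accept_fragment(offset: int, payload: bytes, check: int) -> bool:
    """Keep a fragment only if its checksum matches and it is not a duplicate."""
    if checksum(payload) != check:
        return False                 # corrupted: drop it, the source retransmits
    if offset in received:
        return False                 # duplicate: ignore it
    received[offset] = payload
    return True

def reassemble() -> bytes:
    """Rebuild the message by ordering fragments on their offsets."""
    return b"".join(received[off] for off in sorted(received))

accept_fragment(0, b"hello ", checksum(b"hello "))
accept_fragment(6, b"world", checksum(b"world"))
accept_fragment(0, b"hello ", checksum(b"hello "))   # duplicate, ignored
print(reassemble())                  # b'hello world'
```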
Kahn began work on a
communications-oriented set of operating system principles while at BBN and
documented some of his early thoughts in an internal BBN memorandum entitled
"Communications Principles for Operating Systems". At this point he realized it would be necessary to
learn the implementation details of each operating system to have a chance to
embed any new protocols in an efficient way. Thus, in the spring of 1973, after
starting the internetting effort, he asked Vint Cerf (then at Stanford) to work
with him on the detailed design of the protocol. Cerf had been intimately
involved in the original NCP design and development and already had the
knowledge about interfacing to existing operating systems. So armed with Kahn's
architectural approach to the communications side and with Cerf's NCP experience,
they teamed up to spell out the details of what became TCP/IP.
The give and take was highly
productive and the first written version of the resulting approach was distributed at a special
meeting of the International Network Working Group (INWG) which had been set up
at a conference at Sussex University in September 1973. Cerf had been invited
to chair this group and used the occasion to hold a meeting of INWG members who
were heavily represented at the Sussex Conference.
Some basic approaches emerged from
this collaboration between Kahn and Cerf:
- Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
- Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.
- It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
- Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
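The original 8/24 split is easy to show directly. The short Python sketch below is illustrative only: it packs and unpacks a 32-bit address in which the first 8 bits name the network and the remaining 24 bits name the host on that network, which is exactly why no more than 256 networks could be represented.

```python
# Sketch of the original 32-bit address layout: 8 bits of network number
# followed by 24 bits of host number (so at most 256 distinct networks).

def make_address(network: int, host: int) -> int:
    """Combine an 8-bit network number and a 24-bit host number."""
    return ((network & 0xFF) << 24) | (host & 0xFFFFFF)

def split_address(addr: int):
    """Split a 32-bit address back into (network, host)."""
    return (addr >> 24) & 0xFF, addr & 0xFFFFFF

addr = make_address(10, 42)          # host 42 on network 10
print(split_address(addr))           # (10, 42)
```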
The original Cerf/Kahn paper on the
Internet described one protocol, called TCP, which provided all the transport
and forwarding services in the Internet. Kahn had intended that the TCP
protocol support a range of transport services, from the totally reliable
sequenced delivery of data (virtual circuit model) to a datagram service in
which the application made direct use of the underlying network service, which
might imply occasional lost, corrupted or reordered packets. However, the
initial effort to implement TCP resulted in a version that only allowed for
virtual circuits. This model worked fine for file transfer and remote login
applications, but some of the early work on advanced network applications, in
particular packet voice in the 1970s, made clear that in some cases packet
losses should not be corrected by TCP, but should be left to the application to
deal with. This led to a reorganization of the original TCP into two protocols,
the simple IP which provided only for addressing and forwarding of individual
packets, and the separate TCP, which was concerned with service features such
as flow control and recovery from lost packets. For those applications that did
not want the services of TCP, an alternative called the User Datagram Protocol
(UDP) was added in order to provide direct access to the basic service of IP.
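The practical difference between the two services can be illustrated with Python's standard socket module; this is a minimal sketch, and the loopback address and example host are placeholders. SOCK_DGRAM gives an application the raw, best-effort datagram service of IP via UDP, while SOCK_STREAM gives it TCP's reliable, ordered byte stream.

```python
import socket

# UDP: each send is an independent, best-effort datagram; loss, duplication
# and reordering are left to the application, as early packet voice wanted.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))                 # let the OS pick a free port
udp.sendto(b"one datagram", udp.getsockname())
data, _ = udp.recvfrom(1024)
print(data)                                # b'one datagram'
udp.close()

# TCP: a connected byte stream; retransmission, ordering and flow control are
# handled by the protocol itself rather than by the application.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.org", 80))         # would open a reliable stream
tcp.close()
```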
A major initial motivation for both
the ARPANET and the Internet was resource sharing - for example allowing users
on the packet radio networks to access the time sharing systems attached to the
ARPANET. Connecting the two together was far more economical than duplicating
these very expensive computers. However, while file transfer and remote login
(Telnet) were very important applications, electronic mail has probably had the
most significant impact of the innovations from that era. Email provided a new
model of how people could communicate with each other, and changed the nature
of collaboration, first in the building of the Internet itself (as is discussed
below) and later for much of society.
There were other applications
proposed in the early days of the Internet, including packet based voice
communication (the precursor of Internet telephony), various models of file and
disk sharing, and early "worm" programs that showed the concept of
agents (and, of course, viruses). A key concept of the Internet is that it was
not designed for just one application, but as a general infrastructure on which
new applications could be conceived, as illustrated later by the emergence of
the World Wide Web. It is the general purpose nature of the service provided by
TCP and IP that makes this possible.