The OSI reference model:
The Open Systems Interconnection Reference Model deals with connecting open systems, i.e. systems that are open for communication with other systems. The principles used to arrive at its layers are as follows:
- A layer should be created where a different level of abstraction is needed.
- Each layer should perform a well-defined function.
- The function of each layer should be defined by internationally standardized protocols.
- The layer boundaries should be chosen to reduce the information flow across the interfaces.
- The number of layers should be large enough that distinct functions are not thrown together in the same layer out of necessity, and small enough that the architecture does not become unwieldy.
It should be noted that the OSI model itself is not a network architecture, since it does not specify the exact services and protocols to be used in each layer; it only tells what each layer should do. However, ISO has also produced standards for all the layers, although these are not part of the reference model itself.
1. The Physical Layer: It is concerned with transmitting raw bits over a communication channel. The basic design objective is that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a 0 bit. Here, the design issues largely deal with mechanical, electrical and procedural interfaces, and with the physical transmission medium, which lies below the physical layer.
2. The Data-link Layer: The main function of the data-link layer is to take a raw transmission facility and transform it into a line that appears free of undetected transmission errors to the network layer. It does this by breaking the input data into data frames (typically a few hundred or a few thousand bytes), transmitting the frames sequentially and processing the acknowledgement frames sent back by the receiver.
The physical layer merely accepts and transmits a stream of bits without any concern for their meaning or structure. It is up to the data-link layer to create and recognize frame boundaries. This can be done by attaching special bit patterns to the beginning and end of the frame.
A noise burst on the line can destroy a frame completely. In such cases, the data link layer software on the source machine can retransmit the frame. A duplicate frame could be sent if the acknowledgement frame from the receiver back to the sender were lost. It is up to this layer to solve the problems caused by damaged, lost and duplicate frames.
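To make frame boundaries concrete, here is a minimal sketch of byte-oriented framing with byte stuffing. The FLAG and ESC values chosen here happen to match the ones PPP uses, but the code itself is only illustrative, not any particular protocol's implementation:

```python
FLAG = 0x7E  # marks the beginning and end of a frame
ESC = 0x7D   # escapes a FLAG or ESC byte occurring inside the payload

def frame(payload: bytes) -> bytes:
    """Wrap payload in FLAG bytes, escaping any FLAG/ESC inside it."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed += bytes([ESC, b ^ 0x20])  # escape, then flip one bit
        else:
            stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def unframe(data: bytes) -> bytes:
    """Recover the payload from a single stuffed frame."""
    assert data[0] == FLAG and data[-1] == FLAG
    payload = bytearray()
    i = 1
    while i < len(data) - 1:
        if data[i] == ESC:
            payload.append(data[i + 1] ^ 0x20)  # undo the bit flip
            i += 2
        else:
            payload.append(data[i])
            i += 1
    return bytes(payload)
```

Because payload bytes equal to FLAG are escaped, the receiver can always find the true frame boundaries, even when the data itself contains the flag pattern.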
Another issue that arises in the data-link layer is how to keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism must be employed to let the transmitter know how much buffer space the receiver has at the moment.
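The idea of regulating the sender by the receiver's buffer space can be sketched as a toy credit-based scheme (all names here are illustrative, not from any standard): the receiver advertises "credit" equal to its free buffer space, and the sender never has more unacknowledged bytes in flight than that credit.

```python
class Sender:
    def __init__(self, credit: int):
        self.credit = credit   # buffer space the receiver advertised
        self.in_flight = 0     # bytes sent but not yet acknowledged

    def send(self, nbytes: int) -> bool:
        if self.in_flight + nbytes > self.credit:
            return False       # must wait: receiver's buffer would overflow
        self.in_flight += nbytes
        return True

    def on_ack(self, nbytes: int, new_credit: int):
        # The receiver acknowledges data and re-advertises its free space.
        self.in_flight -= nbytes
        self.credit = new_credit

sender = Sender(credit=1000)
sender.send(600)               # accepted: 600 bytes in flight
blocked = sender.send(600)     # refused: would exceed the 1000-byte credit
sender.on_ack(600, 1000)       # receiver drained its buffer
resumed = sender.send(600)     # accepted again
```

Until an acknowledgement arrives carrying fresh credit, the fast sender is simply forced to pause, which is exactly the traffic regulation described above.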
3. The Network Layer: It is concerned with controlling the operation of the subnet. A key design issue is determining how packets are routed from source to destination. If too many packets are present in the subnet at the same time, they will get in each other's way, forming bottlenecks. It is the duty of the network layer to control such congestion.
Many problems arise when a packet travels from one network to another on the way to its destination. The addressing used by the second network may differ from the first; the packet may be too large for the second network; the protocols may differ, and so on. Hence the network layer has to overcome all these problems to allow heterogeneous networks to be interconnected.
4. The Transport Layer: Its basic function is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer and ensure that the pieces all arrive correctly at the other end. Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. If the transport connection requires a high throughput, the transport layer might create multiple network connections, dividing the data among them to improve throughput. Conversely, if creating or maintaining a network connection is expensive, the transport layer might multiplex several transport connections onto the same network connection to reduce cost. In all cases, the transport layer is required to make the multiplexing transparent to the session layer.
The transport layer also determines what type of service to provide to the session layer, and ultimately to the users of the network. The most popular type of transport connection is an error-free point-to-point channel that delivers messages or bytes in the order in which they were sent.
The transport layer is a true end-to-end layer: a program on the source machine carries on a conversation with a similar program on the destination machine. In the lower layers, by contrast, the protocols are between each machine and its immediate neighbors, not between the ultimate source and destination machines, which may be separated by many intermediate routers. Layers 1 to 3 are chained, while layers 4 to 7 are end-to-end.
In addition to multiplexing several message streams onto one channel, the transport layer must also establish and delete connections across the network. Thus some kind of naming mechanism is required, so that a process on one machine can specify with whom it wishes to converse. In the same way, there has to be a mechanism to regulate the flow of information, so that a fast host cannot overrun a slow one. Such a mechanism is known as flow control. [Note: it is distinct from flow control between routers]
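The end-to-end transport service described above is what the familiar socket API exposes. The sketch below (using Python's standard socket and threading modules, over the loopback interface) shows two end programs conversing through a reliable, ordered byte stream, independent of whatever lies between them:

```python
import socket
import threading

def echo_server(sock: socket.socket):
    """Accept one connection and echo its bytes back unchanged."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Set up a listening endpoint; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The "conversation": connect, send a message, read the reply in order.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, transport layer")
reply = client.recv(1024)
client.close()
```

Neither endpoint sees frames, packets or routes; each simply reads and writes a byte stream, which is the transparency the transport layer is required to provide.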
5. The Session Layer: It allows users on different machines to establish sessions between them. A session allows ordinary data transport, as the transport layer does, but also provides enhanced services useful in some applications. A session might be used to allow a user to log into a remote timesharing system or to transfer a file between two machines.
One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in both directions at the same time, or in only one direction at a time. A related service is token management. For some protocols, it is essential that both sides not attempt the same operation at the same time. To manage these activities, the session layer provides tokens that can be exchanged; only the side holding the token may perform the critical operation.
Another session service is synchronization. It avoids having to restart the whole transfer if it is aborted in the middle. The session layer provides a way to insert checkpoints into the data stream, so that after a crash, only the data transferred after the last checkpoint has to be repeated.
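The checkpoint idea reduces to simple arithmetic, sketched below. The checkpoint interval and function names are purely illustrative, not part of any real session protocol:

```python
CHECKPOINT_EVERY = 4096  # insert a checkpoint every 4096 bytes (arbitrary)

def resume_offset(bytes_received: int) -> int:
    """Offset to restart from: the last checkpoint fully passed."""
    return (bytes_received // CHECKPOINT_EVERY) * CHECKPOINT_EVERY

# If the connection died after 10000 bytes, the transfer resumes from the
# checkpoint at byte 8192; only the data past it must be resent, not the
# whole stream.
restart = resume_offset(10000)  # 8192
```

Without checkpoints the restart offset would always be 0, i.e. the entire transfer would be repeated after every crash.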
6. The Presentation Layer: It performs certain functions that are requested sufficiently often to warrant finding a general solution for them, rather than letting each user solve the problems. The lower layers merely move bits reliably from here to there, while the presentation layer is concerned with the syntax and semantics of the information transmitted.
A typical instance of a presentation service is encoding data in a standard way. Most user programs do not exchange random binary bit strings; instead they exchange things like people's names, dates, amounts of money, invoices, etc. These items are represented as character strings, integers, floating-point numbers, data structures, and so on. Since different computers have different codes for representing character strings (e.g. ASCII and Unicode), integers, and so on, the data structures to be exchanged must be defined in an abstract way, along with a standard encoding to be used "on the wire", in order to make it possible for computers with different representations to communicate. The presentation layer manages these abstract data structures and converts between the representation used inside the computer and the network standard representation.
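A minimal sketch of an "on the wire" encoding, using Python's standard struct module: both machines agree on a fixed byte layout in network (big-endian) byte order, regardless of either host's native representation. The record layout here (a 4-byte unsigned amount plus a 12-byte name field) is a made-up example:

```python
import struct

WIRE_FORMAT = "!I12s"  # '!' selects network (big-endian) byte order

def encode(amount: int, name: str) -> bytes:
    """Convert the in-memory values to the agreed 16-byte wire layout."""
    return struct.pack(WIRE_FORMAT, amount, name.encode("ascii"))

def decode(wire: bytes):
    """Convert the wire layout back to native values."""
    amount, raw_name = struct.unpack(WIRE_FORMAT, wire)
    return amount, raw_name.rstrip(b"\x00").decode("ascii")

wire = encode(1250, "invoice-17")
amount, name = decode(wire)   # (1250, 'invoice-17') on any host
```

A big-endian and a little-endian machine exchanging these 16-byte records both recover the same values, because the conversion to and from the standard representation happens at each end.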
7. The Application Layer: The application layer contains a variety of protocols that are commonly needed. Consider, for example, the hundreds of incompatible terminal types in the world, each with a different screen layout, etc. One way to handle such an environment is to define a Network Virtual Terminal that editors and other programs can be written to deal with. To handle each terminal type, a piece of software must be written to map the functions of the network virtual terminal onto the real terminal. All the virtual terminal software is in the application layer.
Another function of this layer is file transfer. Different file systems have different file-naming conventions, different ways of representing text lines, and so on. Transferring a file between two different systems requires handling these and other incompatibilities. Electronic mail, remote job entry, directory lookup, etc. also belong to the application layer.
Numerous networks are currently operating around the world. Networks differ in their history, administration, facilities offered, technical design and user communities.
The ARPANET:
In the mid-1960s, at the height of the Cold War, the Department of Defense wanted a command-and-control network that could survive a nuclear war. Traditional circuit-switched telephone networks were considered too vulnerable, since the loss of one line or switch would terminate all the conversations using it. To solve this problem, the DoD turned to its research arm, ARPA (Advanced Research Projects Agency).
Initially ARPA had no scientists or laboratories; in fact, it had nothing more than an office and a small budget. It did its work by issuing grants and contracts to universities and companies whose ideas looked promising. After discussions with various experts, ARPA decided that it should have a packet-switched network, consisting of a subnet and host computers. The subnet would consist of minicomputers called IMPs (Interface Message Processors) connected by transmission lines. For high reliability, each IMP would be connected to at least two other IMPs, so that if one IMP were destroyed, messages could be automatically routed along alternative paths. Each node of the network had an IMP and a host. A host could send messages of up to 8063 bits to its IMP, which would then break them up into packets of at most 1008 bits and forward them independently toward the destination.
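The host-to-IMP fragmentation just described is simple to sketch. The 8063- and 1008-bit figures come from the text above; the code itself is only illustrative:

```python
MAX_MESSAGE_BITS = 8063  # largest message a host may hand to its IMP
MAX_PACKET_BITS = 1008   # largest packet the IMP forwards

def fragment(message_bits: int) -> list[int]:
    """Split a message into the packet sizes the IMP would forward."""
    assert 0 < message_bits <= MAX_MESSAGE_BITS
    full, last = divmod(message_bits, MAX_PACKET_BITS)
    return [MAX_PACKET_BITS] * full + ([last] if last else [])

# A maximum-size message becomes 8 packets: seven of 1008 bits and a
# final packet of 1007 bits, each forwarded independently.
packets = fragment(8063)
```

Each packet then travels toward the destination on its own, possibly along a different path through the IMPs.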
ARPA then turned its attention to the subnet and selected BBN (a consulting firm) to build it. BBN used specially modified Honeywell DDP-316 minicomputers with 12K 16-bit words of core memory as the IMPs. The IMPs were interconnected by 56-kbps leased telephone lines. The software was split into two parts: subnet and host. The subnet software consisted of the IMP end of the host-IMP connection, the IMP-IMP protocol, and a source-IMP to destination-IMP protocol designed to improve reliability.
Similarly, software was required outside the subnet: the host end of the host-IMP connection and a host-host protocol, along with application software. Later, the IMP software was changed to allow terminals to connect directly to a special IMP called a TIP (Terminal Interface Processor), without having to go through a host. ARPA also funded research on satellite networks and mobile packet radio networks. It was found that the ARPANET protocols were not suitable for running over multiple networks, and this led to the invention of the TCP/IP model and protocols (1974). TCP/IP was specifically designed to handle communication over internetworks, since more and more networks were being hooked up to the ARPANET.

ARPA also funded the development, for Berkeley UNIX, of a convenient program interface to the network (sockets) and of many application, utility and management programs to make networking easier. At that time most universities had a second or third VAX computer and a LAN to connect them, but no networking software; when 4.2BSD arrived with TCP/IP, sockets and many network utilities, it was adopted immediately. With TCP/IP, it was easy for a LAN to connect to the ARPANET.

By 1983, the ARPANET was stable and successful, with over 200 IMPs and hundreds of hosts. Then DCA (the Defense Communications Agency) separated the military portion into a separate subnet, MILNET. During the 1980s, additional networks, especially LANs, were connected to the ARPANET. As the scale increased, finding hosts became increasingly expensive, so DNS (the Domain Name System) was created to organize machines into domains and map host names onto IP addresses. By 1990, the ARPANET had been overtaken by newer networks, and so it was shut down and dismantled; MILNET, however, continues to operate.
After TCP/IP became the only official protocol in January 1983, the number of networks, machines and users connected to the ARPANET grew rapidly. When NSFNET and the ARPANET were interconnected, the growth became exponential. In the mid-1980s, people began viewing the collection of networks as an internet, and later as the Internet. Growth continued: by 1990 the Internet had grown to 3,000 networks and 200,000 computers; in 1992 the one millionth host was attached; and by 1995 there were multiple backbones, hundreds of mid-level networks, tens of thousands of LANs, millions of hosts and tens of millions of users. The glue that holds the Internet together is the TCP/IP reference model and the TCP/IP protocol stack.
What does it actually mean to be on the Internet?
A machine is said to be on the Internet if it runs the TCP/IP protocol stack, has an IP address, and has the ability to send IP packets to all the other machines on the Internet. Traditionally, the Internet has four main applications: i) email ii) news iii) remote login iv) file transfer
i) Email: Since the early days of the ARPANET, the ability to compose, send and receive electronic mail has been popular. It is used by almost everyone on the Internet, and email programs are available on virtually every kind of computer these days.
ii) News: News groups are specialized forums in which users with a common interest can exchange messages. Thousands of news groups exist on technical and non-technical topics.
iii) Remote Login: Using programs such as telnet and rlogin, users anywhere on the Internet can log into any other machine on which they have an account.
iv) File Transfer: Using FTP programs, it is possible to copy files from one machine on the Internet to another. A vast number of articles, databases and other information sources are available this way.
World Wide Web:
By the early 1990s, the Internet was largely populated by academic, government and industrial users. One new application, the WWW, changed all that and brought millions of new, non-academic users to the Internet. The WWW made it possible for a site to set up pages of information containing text, pictures, sound and even video, with embedded links to other pages. By clicking a link, the user is transported to the page pointed to by that link.
The Internet backbones operate at megabit speeds. With each increase in network bandwidth, new applications become possible; hence the push toward gigabit networks, which provide far greater bandwidth than megabit networks. Applications of such networks lie in fields such as telemedicine and video conferencing (virtual meetings).