Packet Switching Definition
Packet switching is the dividing of messages into packets before they are sent, transmitting each packet individually, and then reassembling the packets into the original message once all of them have arrived at the intended destination. Packets are the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. Each packet, which can be of fixed or variable size depending on the protocol, consists of a header, a body (also called a payload) and a trailer. The body contains a segment of the message being transmitted. The header contains control information about the packet's data, including the sender's IP address, the intended receiver's IP address, the number of packets into which the message has been divided, the identification number of the particular packet, the protocol (on networks that carry multiple types of information, such as the Internet), the packet length (on networks that have variable-length packets) and synchronization (several bits that help the packet match up to the network). Packets are switched to various network segments by routers located at various points throughout the network. Routers are specialized computers that forward packets, through the best paths as determined by the routing algorithm being used on the network, to the destinations indicated by the destination IP addresses in the packet headers. During transport from one host to another, packets may travel across a variety of paths and arrive out of order at the desired end point.
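The division and reassembly described above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical model (the Packet fields and function names are illustrative, not any real protocol's format): it cuts a message into numbered packets and reassembles them even when they arrive out of order.

from dataclasses import dataclass
import random

# A minimal, hypothetical illustration of packet switching: a message is
# divided into numbered packets, which may arrive in any order, and is
# reassembled at the destination. Field names are illustrative only.

@dataclass
class Packet:
    source: str        # sender's IP address
    destination: str   # receiver's IP address
    seq: int           # identification number of this packet
    total: int         # number of packets the message was divided into
    payload: bytes     # segment of the message (the body)

def split_message(message: bytes, src: str, dst: str, size: int = 8) -> list[Packet]:
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(src, dst, seq, len(chunks), chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    # Packets may be delivered out of order, so sort by sequence number.
    ordered = sorted(packets, key=lambda p: p.seq)
    assert len(ordered) == ordered[0].total, "not all packets have arrived"
    return b"".join(p.payload for p in ordered)

pkts = split_message(b"Packet switching divides messages into packets.",
                     "192.0.2.1", "198.51.100.7")
random.shuffle(pkts)                  # simulate out-of-order arrival
print(reassemble(pkts).decode())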
IP Address Definition
An IP address is a unique numeric identifier for a computer or other device on a TCP/IP network. TCP/IP (transmission control protocol/Internet protocol) is the set of protocols (i.e., agreed-upon formats) that is used for the Internet as well as for most LANs (local area networks) and other computer networks. In IPv4, the current standard protocol for the Internet, each IP address consists of 32 bits. Addresses are expressed as four numbers, each between 0 and 255, separated by periods. Examples are 115.25.3.108 and 127.0.0.1; the latter is the so-called loopback address, which returns messages to the same computer that sent them and is used for testing purposes and by some applications. 32 bits allows the creation of more than four billion (exactly 4,294,967,296) unique addresses. However, in practice, the address space is sparsely populated due to routing issues. Routing, which is usually performed by a dedicated device called a router, is the process of moving packets (i.e., the most basic unit of data transmission) from source to destination. Thus there is some pressure to extend the address range through the use of IPv6, which is the next-generation Internet protocol. IPv4 addresses originally had only two parts, but a later change increased that to three: the network, the subnetwork and the host, in that order. However, the introduction of CIDR (classless inter-domain routing) now allows addresses to have any number of levels of hierarchy. Within an isolated network, IP addresses can be assigned at random as long as each one is unique. However, for computers connected to the Internet, it is necessary to use registered IP addresses in order to avoid duplicates. A static IP address is an IP address for a computer or other device that remains the same every time the device is connected to the network and does not change unless it is changed manually. A dynamic IP address is one that changes every time a device is connected to the network and which is assigned by the dynamic host
configuration protocol (DHCP). The dynamic assignment of IP addresses can eliminate the need for system administrators to assign them manually and is a way to make more efficient use of the limited number of IP addresses available to individual ISPs (Internet service providers), businesses and other organizations. Users of dial-up connections to the Internet generally receive dynamically generated IP addresses, whereas users of DSL and cable connections typically are assigned one or more static IP addresses. IP address assignments are made by registry organizations, such as ARIN (American Registry for Internet Numbers), in response to requests from ISPs and other organizations for a netblock (a range of consecutive IP addresses). If an organization has exhausted a substantial part of its allocated netblock, it can request another.
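As a small illustration of the 32-bit structure and CIDR-style prefixes described above, the following Python sketch uses the standard ipaddress module; the specific addresses and the /24 prefix are arbitrary examples chosen for the demonstration.

import ipaddress

# An IPv4 address is just a 32-bit number, conventionally written as four
# dotted decimal octets. The addresses and the /24 prefix below are
# arbitrary illustrative choices.
addr = ipaddress.IPv4Address("115.25.3.108")
print(int(addr))                          # the same address as a single 32-bit integer
print(ipaddress.IPv4Address(int(addr)))   # and back to dotted-quad form

# CIDR expresses the network/host split as a prefix length instead of
# fixed address classes.
net = ipaddress.ip_network("192.168.0.0/24")
print(net.num_addresses)                              # 256 addresses in a /24
print(ipaddress.IPv4Address("192.168.0.42") in net)   # True
print(ipaddress.IPv4Address("192.168.1.42") in net)   # False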
Protocol Definition
A protocol is a mutually agreed-upon format for doing something. With regard to computers, it most commonly refers to a set of rules (i.e., a standard) that enables computers to connect and transmit data to one another; this is also called a communications protocol. A protocol can be implemented by hardware, software, or a combination of the two. At the very least, it must define the rate of transmission (e.g., in bits per second), whether transmission is to be synchronous or asynchronous, and whether data is to be transmitted in half-duplex or full-duplex mode. Protocols define issues such as data representation (i.e., how data is stored), error control (i.e., error detection and recovery), data compression methods, signaling, authentication, how the sending device will indicate that it has finished, and how the receiving device will indicate that it has received the message.
TCP Definition
Transmission control protocol (TCP) is one of the main protocols in TCP/IP (transmission control protocol/Internet protocol), the suite of communications protocols that is used to connect hosts on the Internet and on most other computer networks as well. A protocol is a mutually agreed-upon format for doing something. With regard to computers, it is most commonly used to refer to a set of rules (i.e., a standard) that enables computers to connect and transmit data to one another. This is also referred to as a communications protocol. TCP is a connection-oriented protocol, which means that it establishes and maintains a virtual connection between hosts until the messages to be exchanged by the application programs running on them have been exchanged. It divides any message to be transmitted into packets, numbers them, and then forwards them individually to the IP program layer. Although each packet has the same destination IP address, it may get routed differently through the network. TCP uses error correction and data stream control techniques to ensure that packets arrive at their intended destinations uncorrupted and in the correct sequence, thereby making the point-to-point connection virtually error-free. Packets are the most fundamental unit of data transmission on TCP/IP networks. TCP operates at the transport layer, i.e., the middle layer in the OSI (open systems interconnection) seven layer model. This layer is responsible for maintaining reliable end-to-end communications across the network. IP, in contrast, is a network layer protocol, which is the layer just below the transport layer. Also operating at the transport layer are UDP (user datagram protocol), RTP (real-time transport protocol) and SCTP (stream control transmission protocol).
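To make the connection-oriented behaviour concrete, here is a minimal sketch of a TCP exchange using Python's standard socket module; the loopback address, port number and message are arbitrary, and in a real program the two ends would normally run on different hosts.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary loopback address and port

# Set up the listening socket first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one():
    # Accept a single connection, read the request and echo it back.
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)
    srv.close()

threading.Thread(target=serve_one).start()

# The client establishes a virtual connection, sends a message and reads
# the reply; TCP delivers the bytes reliably and in the order sent.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024).decode())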
Router Definition
A router is an electronic device and/or software that connects at least two networks and forwards packets among them according to the information in the packet headers and routing tables. Routers are fundamental to the operation of the Internet and other complex networks (such as enterprise-wide networks). A network consists of two or more computers, and typically other devices as well (such as printers and external hard drives), that are linked together so that they can communicate with each other and thereby share files and devices. Examples of the networks connected by a router can be two LANs (local area networks) or WANs (wide area networks) or a LAN and its ISP's (Internet service provider's) network. A packet is the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. A packet header is the portion of a data packet that precedes the body (i.e., a portion of the message being transmitted) and which contains source and destination IP addresses as well as control and timing information required for successful transmission.
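The forwarding decision a router makes from its routing table can be sketched as a longest-prefix-match lookup. The table entries and next-hop names below are invented for illustration; real routers build such tables with routing protocols such as RIP, OSPF or BGP.

import ipaddress

# A hypothetical routing table: destination prefix -> next hop.
ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):  "next hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next hop B",
    ipaddress.ip_network("0.0.0.0/0"):   "default gateway",
}

def forward(destination: str) -> str:
    # Pick the table entry with the longest prefix that contains the
    # destination address (longest-prefix match).
    dest = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(forward("10.1.2.3"))    # next hop B (the /16 is more specific)
print(forward("10.9.9.9"))    # next hop A
print(forward("192.0.2.1"))   # default gateway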
Bridge Definition
A bridge is a device that connects and controls the flow of data between two LANs (local area networks) or two segments of the same LAN. Bridges provide three main functions: (1) creating a bridging table to keep track of devices on each segment, (2) filtering packets based on their MAC addresses, i.e., forwarding packets whose destination MAC address is on a different segment of the network from their source and removing packets that do not need to be forwarded to other segments, and (3) dividing a single network into multiple collision domains, thereby reducing the number of collisions on each segment and effectively increasing its bandwidth. Repeaters, whose main function is signal amplification, also connect two different network segments and pass data between them. Bridges incorporate the functionality of repeaters, but they additionally look at the packets and determine whether they should be allowed to pass through or not, whereas repeaters allow all data to pass through. Repeaters operate on the first layer of the OSI model, which provides the means for transmitting raw bits but is not concerned with MAC addresses, IP addresses or packets. Bridges operate at the data link layer, the second layer of the OSI seven-layer model. Bridges also superficially resemble repeaters in appearance, i.e., each is a small box with two network connectors, but differ in that they also have indicator lights on them. Hubs also connect network segments, but they are essentially just repeaters that can connect more than two segments, with models available ranging from four to several hundred connectors. Bridges use the spanning tree protocol (STP) to decide whether to forward a packet through the bridge and on to a different network segment. STP serves two functions: to determine a main bridge, called the root, which will make all the bridging decisions and deal with all bridging problems, and to prevent bridging loops.
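A bridging (MAC address) table and the forward-or-filter decision can be sketched as follows; the MAC addresses and port numbers are invented for illustration, and a real bridge learns its table from the source addresses of the frames it observes.

# Hypothetical illustration of a learning bridge. MAC addresses and port
# numbers are made up; a real bridge learns its table from frame traffic.
bridging_table: dict[str, int] = {}

def handle_frame(src_mac: str, dst_mac: str, in_port: int) -> str:
    # Learn: remember which port the source address was seen on.
    bridging_table[src_mac] = in_port
    out_port = bridging_table.get(dst_mac)
    if out_port is None:
        return "flood to all other ports (destination not yet learned)"
    if out_port == in_port:
        return "filter (destination is on the same segment)"
    return f"forward to port {out_port}"

print(handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=1))
print(handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=2))
print(handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=1))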
Packet Definition
A packet is the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. Packets are transmitted over packet-switched networks, which are networks in which each message (i.e., data that is transmitted), such as an e-mail, web page or program download, is cut up into a set of small segments prior to transmission. Each packet is then sent individually and can follow the same route or a different route to the common destination. Once all the packets forming a message have arrived at the destination, they are automatically reassembled to recreate the original message.
Bandwidth Definition
Bandwidth refers to the data transmission capacity of a communications channel. The greater a channel's bandwidth, the more information it can carry per unit of time. The term technically refers to the range of frequencies that a channel can carry. The higher the frequency, the higher the bandwidth and thus the greater the capacity of a channel. This capacity might more appropriately be referred to as throughput. For digital devices, the bandwidth is usually expressed in bits per second (bps), kilobits per second (kbps) or megabits per second (Mbps). For analog devices, the bandwidth is expressed in cycles per second, or Hertz (Hz). The required bandwidth can vary greatly according to the type of application. For example, the transmission of simple ASCII text messages requires relatively little bandwidth, whereas the transmission of high resolution video images requires a large amount of bandwidth. A major trend in networks at all levels (i.e., from LANs to the Internet) has been increasing bandwidth. This has been a result of technological advances with regard to both the transmission media and the devices that are used with them, such as transmission circuits, reception circuits and routers. For example, the development of optical fiber cable made possible a huge increase in bandwidth as compared with copper wire cable, and the bandwidth of optical fiber cable has continued to increase as a result of improvements both to the optical fiber itself and to the transmitters and other devices used with it. Nevertheless, bandwidth is often insufficient. This is due to such
factors as the continued increase in the numbers of users (especially of the Internet), the growth in the demand for applications which require more bandwidth and the high cost of upgrading some portions of networks (particularly replacing copper wire connections to individual homes and offices with optical fiber). Thus, an important principle in the design of network protocols continues to be the conservation of bandwidth.
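As a simple numerical illustration of the relationship between bandwidth and transfer time, the sketch below computes how long a given file would take at a few line rates; the 25 MB file size and the rates chosen are arbitrary examples that ignore protocol overhead.

# Transfer time = data size / bandwidth. The 25 MB file size and the line
# rates below are arbitrary examples, ignoring protocol overhead.
file_size_bits = 25 * 8 * 10**6          # 25 megabytes expressed in bits

for label, bps in [("56 kbps dial-up", 56_000),
                   ("10 Mbps Ethernet", 10_000_000),
                   ("100 Mbps fast Ethernet", 100_000_000),
                   ("1 Gbps gigabit Ethernet", 1_000_000_000)]:
    print(f"{label:>24}: {file_size_bits / bps:8.2f} seconds")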
CSMA/CD Definition
CSMA/CD (carrier sense multiple access/collision detection) is the most widely used protocol (i.e., set of rules) for determining how network devices respond in the event of a collision. A collision occurs when two or more devices on a network attempt to transmit over a single data channel (e.g., a twisted pair copper wire cable or an optical fiber cable) simultaneously. It is detected by all participating devices, and, after a brief, random, and different interval of time (called a back off delay) has elapsed for each device, the devices attempt to transmit again. If another collision occurs, the time intervals from which the random waiting times are selected are increased step-by-step in a process referred to as exponential back off. CSMA/CD is a modification of pure CSMA. Carrier sense refers to the fact that a transmitting device listens for a carrier wave (i.e., a waveform that carries signals) before attempting to transmit. That is, it first tries to detect the presence of an encoded signal from another device. If a carrier is sensed, the device waits for the transmission in progress to finish before starting its own transmission. Multiple access describes the fact that multiple devices send
and receive on the medium. Transmissions by one node are generally received by all other nodes using the medium. Collision detection is used to improve CSMA performance by terminating transmission as soon as a collision is detected, thereby reducing the probability of a second collision on the next try. The techniques used for detecting collisions depend on the type of media: in the case of electrical wires, for example, collisions are detected by comparing the transmitted data with the received data. CSMA/CD operates at the physical layer, which is the bottom level in the OSI (open systems interconnection) seven layer model, used to standardize and simplify definitions with regard to computer networks. This layer defines all physical and electrical specifications for devices used to interface to the network, and it deals with data only in terms of raw bits (i.e., it does not recognize MAC addresses, IP addresses and packets). A major feature of CSMA/CD is that it is simple to implement. This has helped make it an international standard and an important part of Ethernet, which is the most widely deployed architecture for LANs (local area networks).
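The exponential backoff behaviour described above can be sketched in a few lines; the slot time and the cap of 10 doublings follow common Ethernet practice, but the code is only an illustrative model, not an implementation of the standard.

import random

SLOT_TIME = 51.2e-6   # seconds; the classic 10 Mbps Ethernet slot time

def backoff_delay(attempt: int) -> float:
    # After the n-th consecutive collision, wait a random number of slot
    # times chosen from 0 .. 2**min(n, 10) - 1 (truncated binary
    # exponential backoff).
    k = min(attempt, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME

# Example: the range of possible delays grows with each collision.
for attempt in range(1, 6):
    print(f"after collision {attempt}: wait {backoff_delay(attempt)*1e6:7.1f} microseconds")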
Coaxial cable is a type of cable for high bandwidth data transmission use that typically consists of a single copper wire that is surrounded by a layer of insulation and then by a grounded shield of braided wire or an extruded metal tube. The whole thing is usually wrapped in another layer of insulation and, finally, in an outer protective layer. The grounded metal tube or braided wire shield minimizes electrical interference and radio frequency interference (RFI)
and results in a much greater bandwidth (i.e., data transmission capacity) than does conventional copper wire cable (but less than optical fiber cable). The metal tube type has a greater data transmission capacity but is rigid and thus is used only for special situations; the braided type is much more flexible and easier to use. Connections to the ends of coaxial cables are usually made with specially designed RF (radio frequency) connectors. In the case of computer networks, BNC (Bayonet Neill-Concelman) RF connectors are used. Coaxial cable is commonly used by cable television companies to transport television broadcast signals into customer premises and by consumers to connect television receivers to external antennas. Short coaxial cables are also employed to connect home video equipment and in ham radio systems. Coaxial cable was formerly also widely used in local area networks (LANs). However, most LANs now use twisted pair wiring, optical fiber cable or radio waves because of greater ease of use, lower cost and/or greater capacity. Coaxial cable takes its name from the fact that both conductors (i.e., the central copper wire and the outer metal shield) share the same axis. It is often referred to by those in the electronics industry as coax.
The data link layer is the second layer in the OSI (open systems interconnection) seven-layer reference model. It responds to service requests from the network layer above it and issues service requests to the physical layer below it.
The data link layer is responsible for encoding bits into packets prior to transmission and then decoding the packets back into bits at the destination. Bits are the most basic unit of information in computing and communications. Packets are the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards. It provides reliable data transfer by transmitting packets with the necessary synchronization, error control and flow control. The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the data and obtain permission to transmit it; the latter controls packet synchronization, flow control and error checking. The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Among the most popular technologies and protocols generally associated with this layer are Ethernet, Token Ring, FDDI (fiber distributed data interface), ATM (asynchronous transfer mode), SLIP (serial line Internet protocol), PPP (point-to-point protocol), HDLC (high level data link control) and ADCCP (advanced data communication control procedures).
Ethernet Definition
Ethernet is by far the most commonly used local area network (LAN) architecture. A LAN is a network that connects computers and other devices in a relatively small area, typically a single building or a group of buildings.
Ethernet features high speeds, robustness (i.e., high reliability), low cost and adaptability to new technologies. These features have helped it maintain its popularity despite being one of the oldest of the LAN technologies. A key feature of Ethernet is the breaking of data into packets, also referred to as frames, which are then transmitted using the CSMA/CD (carrier sense multiple access/collision detection) protocol until they arrive at the destination without colliding with any other packets. The most commonly used form of Ethernet at present is 100Base-T, also referred to as fast Ethernet, which can accommodate data transfer speeds of up to about 100 Mbps (million bits per second). The newer gigabit Ethernet supports data rates of one gigabit (1,000 megabits) per second. Fiber Ethernet uses optical fiber cables to carry data. Optical fiber allows transmission over very long distances (over 2,000 meters), has a very large capacity and is completely immune to electrical interference. However, it is relatively expensive. Wireless Ethernet transmits data via low-power microwave radios built into computers and other devices. It allows communication within a radius of approximately 100 meters.
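The framing of data mentioned above can be illustrated by unpacking the fixed-size Ethernet header fields from a byte string. The frame bytes below are fabricated for the example; only the 14-byte header layout (destination MAC, source MAC, EtherType) reflects the real format.

import struct

# A fabricated 14-byte Ethernet header followed by a short payload.
frame = (bytes.fromhex("aabbccddeeff")      # destination MAC address
         + bytes.fromhex("112233445566")    # source MAC address
         + bytes.fromhex("0800")            # EtherType 0x0800 = IPv4
         + b"payload bytes...")

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

def mac(b: bytes) -> str:
    return ":".join(f"{octet:02x}" for octet in b)

print("destination:", mac(dst))
print("source:     ", mac(src))
print("ethertype:   0x%04x" % ethertype)
print("payload:    ", frame[14:])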
OSI Model Definition
The open systems interconnection reference model, commonly referred to as the OSI reference model, OSI seven layer model or OSI model, is a layered, abstract description for communications and computer network protocol design and is the foundation of modern networking. Developed in 1977 for the purpose of standardizing and simplifying definitions relating to computer networks, this model divides the networking process into seven logical layers, each of which has unique responsibilities and to which are assigned specific services and
protocols (i.e., agreed-upon formats). They are the application, presentation, session, transport, network, data link and physical layers. In this model, information and control are passed from one layer to the next, starting at the application layer in the transmitting host (i.e., computer connected to the network), proceeding down the hierarchy to the physical layer, then passing over the communications channel to the destination host, where they proceed back up the hierarchy to the application layer. The physical layer, also called layer one, defines all physical and electrical specifications for devices used to interface to the network, including the shape and layout of pins in connectors, voltages, cable specifications and radio broadcast frequencies. It provides the means for transmitting raw bits, but it is not concerned with MAC addresses, IP addresses and packets; rather, these are dealt with by layers higher up in the hierarchy. The data link layer, layer two, is responsible for encoding bits into packets prior to transmission and then decoding the packets back into bits at the destination. It provides reliable data transfer by transmitting packets with the necessary synchronization, error control and flow control. The network layer, layer three, is responsible for routing, which is the moving of packets across the network using the most appropriate paths. It also addresses messages and translates logical addresses (i.e., IP addresses) into physical addresses (i.e., MAC addresses). It is the layer at which IP (Internet protocol) operates. Other protocols in the TCP/IP (transmission control protocol/Internet protocol) suite of protocols, which forms the basis of the Internet and most other modern networks, that also operate in this layer are ICMP (Internet control message protocol), IPsec (Internet protocol security), ARP (address resolution protocol), RIP (routing information protocol), OSPF (open shortest path first) and BGP (border gateway protocol). The transport layer, layer four, is responsible for maintaining reliable end-to-end communications across the network. It provides full-duplex (i.e., simultaneously bidirectional) virtual circuits on which
delivery is reliable, error free, correctly sequenced and duplicate-free. The best known example of a transport layer protocol is TCP (transmission control protocol). Also operating at this layer are UDP (user datagram protocol), RTP (real-time transport protocol) and SCTP (stream control transmission protocol). The function of the session layer, layer five, is to maintain communication between hosts after it has been established and then terminate it when no longer needed. An example of a session layer protocol is SQL (structured query language), which provides a user-friendly text interface to relational database systems. The presentation layer, layer six and the second layer from the top, translates data from programs and protocols in the application layer above it into formats that can be transmitted over networks and used by other applications on other hosts. It is likewise responsible for the delivery and formatting of information to the application layer for further processing or display. The application layer, layer seven, is the top layer of both the OSI and TCP/IP models. It provides services to connect application programs and communications protocols as well as services to ensure effective communication between application programs over the network. The OSI model is roughly, but not strictly, adhered to in the computing and networking industry. Its main feature is the interface between layers, which dictates the specifications on how one layer interacts with another. This allows networking protocols to be developed on a modular basis, that is, so that they can easily interact with the protocols on higher and lower layers, regardless of the company or other organization that developed the protocols. Although protocols are usually developed that reside in a single layer, sometimes they are designed to incorporate elements of multiple, adjacent layers. The seven-layer OSI model is often compared with the five-layer TCP/IP model. Development began on the former before the latter, and it is more of a theoretical approach, whereas the latter was developed mainly as a practical solution to a specific set of engineering problems. Because of its theoretical approach and despite
its greater number of layers, the OSI model is often considered easier for students of networking to comprehend; it is emphasized for this reason and because understanding it first can make it easier to understand the TCP/IP model. The fact that the real-world Internet and most other networks are based on the TCP/IP model has been attributed to a number of factors, including the facts that early implementations of the OSI model suffered from poor performance and that the TCP/IP model was associated with UNIX, which allowed it to take advantage of the great popularity of that operating system in academia (where the Internet originated). Another objection to the OSI model was that it was poorly designed because the session and presentation layers are nearly empty, whereas other layers contain numerous protocols.
Application Layer Definition
The application layer is the top layer of both the seven-layer OSI (open systems interconnection) model and the five-layer TCP/IP (transmission control protocol/Internet protocol) model. It provides services to connect application programs to communications protocols and to ensure effective communication with other application programs over a network. These services include (1) confirming that necessary communication resources exist (e.g., that there is a functioning modem in the sender's computer), (2) ensuring identification of the destination host, (3) authenticating the message sender, the recipient or both, (4) determining protocol rules at the application level and (5) ensuring agreement at both ends about error recovery procedures, data integrity and privacy. Some of the most popular application layer protocols include DHCP (dynamic host configuration protocol), FTP (file transfer protocol), HTML (hypertext markup language), HTTP (hypertext transfer protocol), IRC (Internet relay chat), NFS (network file system), NNTP (network news transfer protocol), POP3 (post office protocol 3), SMTP (simple mail transfer protocol), SNMP (simple network management protocol) and telnet.
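Since HTTP is listed above as an application layer protocol, here is a minimal sketch of an HTTP/1.1 request issued directly over a TCP socket; the host name is a domain reserved for documentation examples, and a real program would normally use a higher-level library.

import socket

HOST = "example.com"   # a domain reserved for documentation examples

# An application layer exchange: the client sends an HTTP request as text
# and the server answers with a status line, headers and a body.
request = (f"GET / HTTP/1.1\r\n"
           f"Host: {HOST}\r\n"
           f"Connection: close\r\n\r\n").encode()

with socket.create_connection((HOST, 80), timeout=5) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"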
Presentation Layer Definition
The presentation layer, the second layer from the top in the seven-layer OSI (open systems interconnection) model, translates data from programs and protocols in the application layer above it to formats that can be transmitted over networks and used by other applications on other hosts. It is likewise responsible for the delivery and formatting of information to the application layer for further processing or display. Although the presentation layer is concerned with data structure representation, compression and encryption, these activities are sometimes performed at other layers, each offering its own advantages and disadvantages. Also, in many applications and protocols no distinction is made between the presentation and application layers. Examples of presentation layer protocols include ASCII (American standard code for information interchange), EBCDIC (extended binary coded decimal interchange code), MIDI (musical instrument digital interface), MPEG (moving picture experts group), SSL (secure sockets layer), TDI (tabbed document interface), TLS (transport layer security) and XDR (external data representation).
Session Layer Definition
The session layer, the fifth layer from the bottom in the seven-layer OSI (open systems interconnection) model, establishes, manages (including providing security) and
terminates connections between applications at each end. The session layer contains fewer protocols and is less used than most of the other layers in the OSI model. In practice, it is often combined with the transport layer directly below it. Session layer protocols are particularly useful for multimedia applications for which it is necessary to coordinate the timing of two or more types of data, such as voice and moving images, with a high degree of precision. Examples include video conferencing and streaming.
Transport Layer Definition
The transport layer, the middle (i.e., fourth) layer in the OSI (open systems interconnection) seven layer model, is responsible for maintaining reliable end-to-end communications across the network. It provides full-duplex virtual circuits on which delivery is reliable, error free, sequenced, and duplicate free. The transport layer responds to service requests from the session layer above it and issues service requests to the network layer below it to establish a conversation (i.e., a virtual connection) between two hosts. The network layer, the layer at which IP (Internet protocol) operates, is responsible for routing, which is moving packets across the network using the most appropriate paths. The best known example of a transport layer protocol is TCP (transmission control protocol), which provides a virtually error-free point-to-point connection that allows packets to arrive at their intended destinations uncorrupted and in the correct order.
Network Layer Definition
The network layer is the third layer from the bottom in the OSI (Open Systems Interconnection) seven layer model. Also called the OSI reference model, this model was originally developed in 1977 in order to standardize and simplify definitions with regard to computer networks. It divides the networking process into seven logical layers, starting at the physical level (i.e., cable and network interface cards) and ascending to the application level (i.e., the layer that interfaces with application programs on computers), specifying services and protocols for each layer. The network layer is the layer at which IP (Internet protocol) operates. Other protocols in the TCP/IP suite of protocols, which forms the basis of the Internet and most other networks, that also operate in this layer are ICMP, IPsec, ARP, RIP, OSPF and BGP. The network layer is responsible for routing, which is moving packets (the fundamental unit of data transport on modern computer networks) across the network using the most appropriate paths. It also addresses messages and translates logical addresses (i.e., IP addresses) into physical addresses (i.e., MAC addresses). This contrasts with the data link layer below it, which is responsible for the device-to-device delivery of packets using MAC addresses. Above the network layer is the transport layer, which is responsible for making certain that packets are delivered in sequence and without errors, loss or duplication.
Data Link Layer Definition
The data link layer is the second layer in the OSI (open systems interconnection) seven-layer reference model. It responds to service requests from the network layer above it and issues service requests to the physical layer below it. The data link layer is responsible for encoding bits into
packets prior to transmission and then decoding the packets back into bits at the destination. Bits are the most basic unit of information in computing and communications. Packets are the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards. It provides reliable data transfer by transmitting packets with the necessary synchronization, error control and flow control. The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the data and obtain permission to transmit it; the latter controls packet synchronization, flow control and error checking. The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Among the most popular technologies and protocols generally associated with this layer are Ethernet, Token Ring, FDDI (fiber distributed data interface), ATM (asynchronous transfer mode), SLIP (serial line Internet protocol), PPP (point-to-point protocol), HDLC (high level data link control) and ADCCP (advanced data communication control procedures). The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer. For example, NICs typically implement a specific data link layer technology, so they are often called Ethernet cards, Token Ring cards, etc. There are also several types of network interconnection devices that are said to operate at the data link layer in whole or in part, because they make decisions about what to do with data they receive by looking at data
link layer packets. These devices include most bridges and switches, although switches also encompass functions performed by the network layer. Data link layer processing is faster than network layer processing because less analysis of the packet is required.
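Error detection at the data link layer is commonly done with a cyclic redundancy check appended to each frame. The sketch below uses CRC-32 from Python's zlib module purely as an illustration of the idea; real data link technologies each specify their own CRC polynomial and frame format.

import zlib

def make_frame(payload: bytes) -> bytes:
    # Append a CRC-32 checksum (4 bytes) to the payload, as a stand-in
    # for the frame check sequence a data link protocol would add.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    payload, received_crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received_crc

frame = make_frame(b"data link layer frame payload")
print(check_frame(frame))                       # True: frame is intact

corrupted = bytearray(frame)
corrupted[3] ^= 0x01                            # flip one bit in transit
print(check_frame(bytes(corrupted)))            # False: error detected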
Physical Layer Definition
The physical layer is the bottom layer in the seven layer OSI (open systems interconnection) reference model. This model was developed in 1977 in order to standardize and simplify definitions relating to computer networks. It divides the networking process into seven logical layers, starting at the physical layer and ascending to the application layer (which interfaces with application programs on computers). Services and protocols (i.e., agreed-upon formats) are specified for each layer, and each layer has unique responsibilities, including passing information to the layers above and below it. The physical layer defines all physical and electrical specifications for devices used to interface to the network, including the shape and layout of pins in connectors, voltages, cable specifications and broadcast frequencies. It provides the means for transmitting raw bits, but it is not concerned with MAC addresses, IP addresses and packets; rather, these are dealt with by layers higher in the hierarchy. The physical layer performs services requested by the data link layer, which is the layer directly above it. Its major functions and services are: (1) the establishment and termination of connections to a communications medium (e.g., twisted pair cable or optical fiber cable), (2) conversion between the representation of digital data in computers (or other network devices) and the corresponding signals transmitted over the communications medium and (3) participation in the efficient sharing among multiple devices of the communications medium through the use of flow control and
collision resolution (i.e., recovery from simultaneous transmission by two or more devices). Devices that operate at the physical layer include repeaters, hubs, network interface cards (NICs), cables and connectors. Repeaters are used to regenerate electrical signals that have attenuated (i.e., weakened) as a result of distance. A hub is a common connection point for twisted pair or optical fiber connecting devices in a local area network (LAN). Examples of physical layer protocols are CSMA/CD (carrier sense multiple access/collision detection), DSL (digital subscriber line) and RS-232 (which is commonly used in computer serial ports).
GATEWAYS
A gateway is a network point that acts as an entrance to another network. On the Internet, a node or stopping point can be either a gateway node or a host (end-point) node. Both the computers of Internet users and the computers that serve pages to users are host nodes, while the nodes that connect the networks in between are gateways. For example, the computers that control traffic between company networks or the computers used by Internet service providers (ISPs) to connect users to the Internet are gateway nodes. In the network for an enterprise, a computer server acting as a gateway node is often also acting as a proxy server and a firewall server. A gateway is often associated with both a router, which knows where to direct a given packet of data that arrives at the gateway, and a switch, which furnishes the actual path in and out of the gateway for a given packet. On an IP network, clients should automatically send IP packets with a destination outside a given subnet mask to a network gateway. A subnet mask defines the IP range of a private network. For example, if a private network has a base IP address of 192.168.0.0 and has a subnet mask of 255.255.255.0, then any data going to an IP address outside of 192.168.0.X will be sent to that network's gateway. While forwarding an IP packet to another network, the gateway might or might not perform Network Address Translation. A gateway is an essential feature of most routers, although other devices (such as any PC or server) can function as a gateway.
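The gateway decision described above (deliver directly if the destination is inside the local subnet, otherwise send to the gateway) can be sketched with the same 192.168.0.0/255.255.255.0 example network; the gateway address 192.168.0.1 chosen here is an assumption made for the illustration.

import ipaddress

# The local network from the example above; 192.168.0.1 as the gateway
# address is an illustrative assumption.
LOCAL_NET = ipaddress.ip_network("192.168.0.0/255.255.255.0")
GATEWAY = ipaddress.ip_address("192.168.0.1")

def next_hop(destination: str):
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_NET:
        return dest          # deliver directly on the local network
    return GATEWAY           # anything outside 192.168.0.x goes to the gateway

print(next_hop("192.168.0.77"))   # 192.168.0.77 (local delivery)
print(next_hop("8.8.8.8"))        # 192.168.0.1  (via the gateway)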
Most computer operating systems use the terms described above. A computer running Microsoft Windows, however, describes this standard networking feature as Internet Connection Sharing, which acts as a gateway, offering a connection between the Internet and an internal network. Such a system might also act as a DHCP server. Dynamic Host Configuration Protocol (DHCP) is a protocol used by networked devices (clients) to obtain various parameters necessary for the clients to operate in an Internet Protocol (IP) network. By using this protocol, system administration workload greatly decreases, and devices can be added to the network with minimal or no manual configuration.
ATM
Definition: ATM is a high-speed networking standard designed to support both voice and data communications. ATM is normally utilized by Internet service providers on their private long-distance networks. ATM operates at the data link layer (Layer 2 in the OSI model) over either fiber or twisted-pair cable. ATM differs from more common data link technologies like Ethernet in several ways. For example, ATM utilizes no routing. Hardware devices known as ATM switches establish point-to-point connections between endpoints and data flows directly from source to destination. Additionally, instead of using variable-length packets as Ethernet does, ATM utilizes fixed-sized cells. ATM cells are 53 bytes in length, consisting of 48 bytes of data and 5 bytes of header information. The performance of ATM is often expressed in the form of OC (Optical Carrier) levels, written as "OC-xxx." Performance levels as high as 10 Gbps (OC-192) are technically feasible with ATM. More common performance levels for ATM are 155 Mbps (OC-3) and 622 Mbps (OC-12). ATM technology is designed to improve utilization and quality of service (QoS) on high-traffic networks. Without routing and with fixed-size cells, networks can much more easily manage bandwidth under ATM than under Ethernet, for example. The high cost of ATM relative to Ethernet is one factor that has limited its adoption to "backbone" and other high-performance, specialized networks.
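A quick calculation shows the consequence of the fixed 53-byte cell with its 5-byte header: the sketch below computes how many cells a payload needs and the resulting header overhead. The 1,500-byte payload size is an arbitrary example.

import math

CELL_SIZE, CELL_PAYLOAD, CELL_HEADER = 53, 48, 5

def cells_needed(payload_bytes: int) -> int:
    # Each 53-byte ATM cell carries at most 48 bytes of data.
    return math.ceil(payload_bytes / CELL_PAYLOAD)

payload = 1500                       # arbitrary example payload, in bytes
n = cells_needed(payload)
print(f"{payload} bytes -> {n} cells, {n * CELL_SIZE} bytes on the wire")
print(f"header overhead: {n * CELL_HEADER} bytes "
      f"({n * CELL_HEADER / (n * CELL_SIZE):.1%} of transmitted bytes)")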
ALOHA
Aloha, also called the Aloha method, refers to a simple communications scheme in which each source (transmitter) in a network sends data whenever there is a frame to send. If the frame successfully reaches the destination (receiver), the next frame is sent. If the frame fails to be received at the destination, it is sent again. This protocol was originally developed at the University of Hawaii for use with satellite communication systems in the Pacific. In a wireless broadcast system or a half-duplex two-way link, Aloha works perfectly. But as networks become more complex, for example in an Ethernet system involving multiple sources and destinations in which data travels many paths at once, trouble occurs because data frames collide (conflict). The heavier the
communications volume, the worse the collision problems become. The result is degradation of system efficiency, because when two frames collide, the data contained in both frames is lost. To minimize the number of collisions, thereby optimizing network efficiency and increasing the number of subscribers that can use a given network, a scheme called slotted Aloha was developed. This system employs signals called beacons that are sent at precise intervals and tell each source when the channel is clear to send a frame. Further improvement can be realized with the more sophisticated protocol carrier sense multiple access with collision detection (CSMA/CD). In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem. Many researchers have extended their work since then. Although Abramson's work, called the Aloha System, used ground-based radio broadcasting, the basic idea is applicable to any system in which uncoordinated users are competing for the use of a single shared channel.
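The efficiency gain of slotted over pure Aloha can be quantified with the standard textbook throughput formulas S = G*e^(-2G) for pure Aloha and S = G*e^(-G) for slotted Aloha, where S is throughput and G is the offered load in frames per frame time; these formulas are not derived in the passage above, but the short sketch below evaluates them at their optimal loads.

import math

def pure_aloha(G: float) -> float:
    # Standard result: a frame succeeds only if no other frame starts
    # within a two-frame-time vulnerable period.
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    # Slotting halves the vulnerable period to one frame time.
    return G * math.exp(-G)

print(f"pure Aloha peak    (G=0.5): {pure_aloha(0.5):.3f}")     # ~0.184
print(f"slotted Aloha peak (G=1.0): {slotted_aloha(1.0):.3f}")  # ~0.368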
HDLC
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links. It implements the ARQ mechanisms we discussed in this chapter.
Configurations and Transfer Modes
HDLC provides two common transfer modes that can be used in different configurations: normal response mode (NRM) and asynchronous balanced mode (ABM).
Normal Response Mode
In normal response mode (NRM), the station configuration is unbalanced. We have one primary station and multiple secondary stations. A primary station can send commands; a secondary station can only respond. NRM is used for both point-to-point and multipoint links, as shown in Figure 11.25.
Figure 11.25 Normal response mode: (a) point-to-point, with the primary issuing commands and the secondary responding; (b) multipoint, with one primary and multiple secondaries.
Asynchronous Balanced Mode
In asynchronous balanced mode (ABM), the configuration is balanced. The link is point-to-point, and each station can function as both a primary and a secondary (acting as peers), as shown in Figure 11.26. This is the common mode today.
Figure 11.26 Asynchronous balanced mode: two combined stations, each able to send commands and responses.
Sliding Window Protocols
The simplex stop-and-wait ARQ protocol using sequence numbers and sequence number acknowledgments (described in section 5.2) will ensure an error-free communications channel for higher levels. However, waiting for an acknowledgment for each frame in turn is very wasteful, and a more efficient alternative is to use a sliding window protocol, which enables a number of frames to be transmitted and separately acknowledged. In a sliding window protocol each outbound frame contains a sequence number in the range 0 to some maximum (MaxSeq). If n bits are allocated in the header to store a sequence number, the numbers range from 0 to 2^n - 1; e.g., if a 3-bit number is used, the sequence numbers range from 0 to 7. The sender and receiver each maintain a window. The sending window is a list of consecutive frame sequence numbers that can be sent by the sender, or that have been sent and for which acknowledgments are awaited. When an acknowledgment arrives and all previous frames have already been acknowledged, the window can be advanced and a new message obtained from the host, to be transmitted with the next highest available sequence number. If an acknowledgment arrives for a frame that is not within the window, it is discarded, e.g., an extra acknowledgment for a frame that has already been acknowledged. The receiving window is a list of sequence numbers for frames that can be accepted by the receiver. When a valid frame arrives and all previous frames have already arrived, the window is advanced. If a frame arrives that is not within the window, it is discarded.
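A sketch of the sender-side window bookkeeping described above follows; it models only the sequence-number arithmetic (3-bit numbers, so 0 to 7) and window advance on acknowledgments, not timers or retransmission.

# Sender-side sliding window bookkeeping with 3-bit sequence numbers.
# Only window advance on acknowledgments is modelled; timers and
# retransmission are omitted.
SEQ_BITS = 3
MAX_SEQ = 2**SEQ_BITS - 1          # sequence numbers run from 0 to 7
WINDOW_SIZE = MAX_SEQ              # at most 7 unacknowledged frames

window_base = 0                    # oldest unacknowledged sequence number
next_seq = 0                       # next sequence number to use

def in_window(seq: int) -> bool:
    # True if seq lies within [window_base, next_seq) modulo 8.
    return (seq - window_base) % (MAX_SEQ + 1) < (next_seq - window_base) % (MAX_SEQ + 1)

def send_frame():
    global next_seq
    outstanding = (next_seq - window_base) % (MAX_SEQ + 1)
    if outstanding >= WINDOW_SIZE:
        print("window full; sender must wait for an acknowledgment")
        return
    print(f"send frame {next_seq}")
    next_seq = (next_seq + 1) % (MAX_SEQ + 1)

def receive_ack(seq: int):
    global window_base
    if in_window(seq):
        window_base = (seq + 1) % (MAX_SEQ + 1)   # advance past acked frame
        print(f"ack {seq}: window advances to base {window_base}")
    else:
        print(f"ack {seq} is outside the window; discarded")

for _ in range(3):
    send_frame()            # frames 0, 1, 2
receive_ack(1)              # cumulatively acknowledges frames 0 and 1
receive_ack(7)              # not in the window; discarded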
1. Station Model:
Stations are also called terminals. There are N independent stations, each with a constant arrival rate lambda; the probability of a frame being generated in a time interval of length delta t is lambda x delta t. Once a frame has been generated, the station does nothing until the frame has been successfully transmitted.
3. Collision Assumption:
If two frames are transmitted simultaneously, they will collide, resulting in a garbled signal. Each station has the ability to detect a collision, and it must be kept in mind that collided frames must be retransmitted later.
4. Time Management:
Continuous Time: Frame transmission can begin at any instant, as there is no master clock dividing time into discrete intervals. Slotted Time: Frame transmissions start at the beginning of time slots. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
5. Sensing of Channel:
Carrier Sense: The channel can be sensed by a station before trying to use it. If a station senses the channel as busy, no station will attempt to use it until it goes idle. No Carrier Sense: Stations cannot sense the channel before trying to use it; they transmit first and only afterward learn whether the channel was busy or idle.
Token Ring/IEEE 802.5 Background
The Token Ring network was originally developed by IBM in the 1970s. It is still IBM's primary local area network (LAN) technology and is second only to Ethernet/IEEE 802.3 in general LAN popularity. The related IEEE 802.5 specification is almost identical to and completely compatible with IBM's Token Ring network. In fact, the IEEE 802.5 specification was modeled after IBM Token Ring, and it continues to shadow IBM's Token Ring development. The term Token Ring generally is used to refer to both IBM's Token Ring network and IEEE 802.5 networks. This chapter addresses both Token Ring and IEEE 802.5. Token Ring and IEEE 802.5 networks are basically compatible, although the specifications differ in minor ways. IBM's Token Ring network specifies a star, with all end stations attached to a device called a multistation access unit (MSAU). In contrast, IEEE 802.5 does not specify a topology, although virtually all IEEE 802.5 implementations are based on a star. Other differences exist, including media type (IEEE 802.5 does not specify a media type, although IBM Token Ring networks use twisted-pair wire) and routing information field size. Figure 9-1 summarizes IBM Token Ring network and IEEE 802.5 specifications.
Physical Connections
IBM Token Ring network stations are directly connected to MSAUs, which can be wired together to form one large ring (see Figure 9-2). Patch cables connect MSAUs to adjacent MSAUs, while lobe cables connect MSAUs to stations. MSAUs include bypass relays for removing stations from the ring.
Token Ring Operation
Token Ring and IEEE 802.5 are two principal examples of token-passing networks (FDDI being the other). Token-passing networks move a small frame, called a token, around the network. Possession of the token grants the right to transmit. If a node receiving the token has no information to send, it passes the token to the next end station. Each station can hold the token for a maximum period of time. If a station possessing the token does have information to transmit, it seizes the token, alters one bit of the token, which turns the token into a start-of-frame sequence, appends the information it wants to transmit, and sends this information to the next station on the ring. While the information frame is circling the ring, no token is on the network (unless the ring supports early token release), which means that other stations wanting to transmit must wait. Therefore, collisions cannot occur in Token Ring networks. If early token release is supported, a new token can be released when frame transmission is complete. The information frame circulates the ring until it reaches the intended destination station, which copies the information for further processing. The information frame continues to circle the ring and is finally removed when it reaches the sending station. The sending station can check the returning frame to see whether the frame was seen and subsequently copied by the destination. Unlike CSMA/CD networks (such as Ethernet), token-passing networks are deterministic, which means that it is possible to calculate the maximum time that will pass before any end station will be able to transmit. This feature and several reliability features, which are discussed in the section Fault-Management Mechanisms later in this chapter, make Token Ring networks ideal for applications where delay must be predictable and robust network operation is important. Factory automation environments are examples of such applications.
Priority System
Token Ring networks use a sophisticated priority system that permits certain user-designated, high-priority stations to use the network more frequently. Token Ring frames have two fields that control priority: the priority field and the reservation field. Only stations with a priority equal to or higher than the
priority value contained in a token can seize that token. After the token is seized and changed to an information frame, only stations with a priority value higher than that of the transmitting station can reserve the token for the next pass around the network. When the next token is generated, it includes the higher priority of the reserving station. Stations that raise a token's priority level must reinstate the previous priority after their transmission is complete.
Fault-Management Mechanisms
Token Ring networks employ several mechanisms for detecting and compensating for network faults. One station in the Token Ring network, for example, is selected to be the active monitor. This station, which potentially can be any station on the network, acts as a centralized source of timing information for other ring stations and performs a variety of ring-maintenance functions. One of these functions is the removal of continuously circulating frames from the ring. When a sending device fails, its frame may continue to circle the ring. This can prevent other stations from transmitting their own frames and essentially can lock up the network. The active monitor can detect such frames, remove them from the ring, and generate a new token. The IBM Token Ring network's star topology also contributes to overall network reliability. Because all information in a Token Ring network is seen by active MSAUs, these devices can be programmed to check for problems and selectively remove stations from the ring if necessary. A Token Ring algorithm called beaconing detects and tries to repair certain network faults. Whenever a station detects a serious problem with the network (such as a cable break), it sends a beacon frame, which defines a failure domain. This domain includes the station reporting the failure, its nearest active upstream neighbor (NAUN), and everything in between. Beaconing initiates a process called autoreconfiguration, where nodes within the failure domain automatically perform diagnostics in an attempt to reconfigure the network around the failed areas. Physically, the MSAU can accomplish this through electrical reconfiguration.
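The deterministic round-robin nature of token passing can be illustrated with a toy simulation; the station names, frame queues and single-frame-per-token rule below are simplifications invented for the example and ignore priorities, early token release and fault management.

from collections import deque

# A toy token-passing ring. Station names and queued frames are invented;
# priorities, early token release and fault handling are ignored.
stations = {
    "A": deque(["A->C"]),
    "B": deque(),
    "C": deque(["C->B", "C->A"]),
    "D": deque(),
}
ring_order = ["A", "B", "C", "D"]

token_holder = 0
for step in range(8):
    name = ring_order[token_holder]
    queue = stations[name]
    if queue:
        # Only the token holder may transmit, so frames never collide.
        print(f"step {step}: {name} holds the token and sends {queue.popleft()}")
    else:
        print(f"step {step}: {name} has nothing to send; passes the token")
    token_holder = (token_holder + 1) % len(ring_order)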
The TCP/IP reference model was named after two of its main protocols, TCP (Transmission Control Protocol) [12] and IP (Internet Protocol).
Figure 2.1: TCP/IP Network Protocol
A detailed description of the reference model is beyond the scope of this document and project. The basic idea of the networking system is to allow one application on a host computer to talk to another application on a different host computer. The application forms its request, then passes the packet down to the lower layers, which add their own control information, either a header or a footer, onto the packet. Finally the packet reaches the physical layer and is transmitted through the cable to the destination host. The packet then travels up through the different layers, with each layer reading, deciphering, and removing the header or footer that was attached by its counterpart on the originating computer. Finally the packet arrives at the application it was destined for. Even though technically each layer communicates with the layer above or below it, the process can be viewed as one layer talking to its partner on the host, as figure 2.1 shows.
The model has four layers: the application layer, the transport layer, the network layer and the host-to-network layer.
FTP (File Transfer Protocol) [14] is a protocol that was originally designed to promote the sharing of files among computer users. It shields the user from the variations of file storage on different architectures and allows for a reliable and efficient transfer of data. SMTP (Simple Mail Transfer Protocol) [15] is the protocol used to transport electronic mail from one computer to another through a series of other computers along the route. DNS (Domain Name System) [10] resolves the textual name of a network node into its numerical address, or vice-versa. For example, it would translate www.yahoo.com to 204.71.177.71 to allow the routing protocols to find the host that the packet is destined for.
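A name lookup like the one just described can be performed with Python's standard library; the address printed will be whatever DNS currently returns for the name (the 204.71.177.71 value quoted above is historical), and the example uses a domain reserved for documentation.

import socket

# Resolve a host name to an IPv4 address using the system's DNS resolver.
# example.com is a domain reserved for documentation; the address returned
# depends on what DNS currently holds for it.
name = "example.com"
address = socket.gethostbyname(name)
print(f"{name} resolves to {address}")

# The reverse direction (address back to a name) is also possible, though
# not every address has a reverse DNS entry.
try:
    print(socket.gethostbyaddr(address)[0])
except socket.herror:
    print("no reverse DNS entry for", address)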
Figure 2.2: Interaction with Application, Transport and Internet Layers
A message to be sent originates in the application layer. This is then passed down to the appropriate protocol in the transport layer. These protocols add a header to the message, which the corresponding transport layer in the destination machine uses to reassemble the message. The segment is then passed on to the internet layer, where the Internet Protocol adds a further header. Finally the segment is passed to the physical layer, where a header and a trailer are added. Figure 2.3 shows the structure of the final segment being sent.
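The successive addition of headers (and a trailer at the lowest layer) can be sketched as nested encapsulation; the header and trailer contents below are placeholder strings rather than real protocol fields.

# Encapsulation sketch: each layer wraps the data it receives from the
# layer above. Header and trailer contents are placeholder strings, not
# real protocol fields.
def transport_encapsulate(message: bytes) -> bytes:
    return b"[TCP header]" + message

def internet_encapsulate(segment: bytes) -> bytes:
    return b"[IP header]" + segment

def link_encapsulate(packet: bytes) -> bytes:
    return b"[frame header]" + packet + b"[frame trailer]"

message = b"application data"
frame = link_encapsulate(internet_encapsulate(transport_encapsulate(message)))
print(frame)
# At the destination each layer strips the header added by its counterpart,
# in the reverse order, until the application data is recovered.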
The job of the network layer is to inject packets into any network and have them travel independently to the destination. The layer defines IP (Internet Protocol) as its official packet format and protocol. Packet routing is a major job of this protocol.