
Full Form and Their Uses


ORAL QUESTIONS:

1. Full form and their Uses:


1. OSI: Open System Interconnection. It is a reference model that describes how applications can communicate over a network.
2. TCP/IP: Transmission Control Protocol/ Internet Protocol. TCP/IP specifies how data is exchanged over the
internet.
3. UDP: User Datagram Protocol. It is an alternative communications protocol to Transmission Control Protocol (TCP), used primarily for establishing low-latency and loss-tolerating connections between applications on the Internet.
4. PPP: Point to Point Protocol. It is a communication protocol used to establish a direct connection between
two nodes. It connects two routers directly without any host or any other networking device in between. It
can provide connection authentication, transmission encryption, and compression.
5. HDLC: High-level Data Link Control. The protocol uses the services of a physical layer and provides either a best-effort or reliable communications path between the transmitter and receiver.
6. ARP: Address Resolution Protocol. It is a protocol used by the Internet Protocol, specifically IPv4, to map IP
network addresses to the hardware addresses used by a data link protocol.
7. RARP: Reverse Address Resolution Protocol. It is a protocol by which a physical machine in a local area
network can request to learn its IP address from a gateway server's Address Resolution Protocol (ARP) table
or cache.
8. TELNET: Terminal Network. It is a protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection.
9. DHCP: Dynamic Host Configuration Protocol. It is a client/server protocol that automatically provides an
Internet Protocol (IP) host with its IP address and other related configuration information such as the subnet
mask and default gateway.
10. SMTP: Simple Mail Transfer Protocol. It is an Internet standard for electronic mail (email) transmission.
11. IMAP: Internet Message Access Protocol. It is a mail protocol used for accessing email on a remote web server
from a local client.
12. POP3: Post Office Protocol version 3. It is a standard mail protocol used to receive emails from a remote
server to local client. POP3 allows you to download email message on your local computer and read them
even when you are offline.
13. ARPANET: Advanced Research Projects Agency Network. It was an early packet switching network and the
first network to implement the protocol suite TCP/IP.
14. HTML: Hypertext Markup Language. It is the standard markup language for creating web pages and web applications.
15. HTTP: Hypertext Transfer Protocol. It is an application protocol for distributed, collaborative,
and hypermedia information systems.
16. DNS: Domain Name System. It is used to resolve human-readable hostnames like www.Dyn.com into machine-readable IP addresses like 204.13.248.115 (a small lookup sketch is given after this list).
17. IEEE: Institute of Electrical and Electronics Engineers
18. ITU: International Telecommunication Union
19. NVT: Network Virtual Terminal.
20. FTP: File Transfer Protocol.
21. NIC: Network Interface Controller/Card.
22. IP: Internet Protocol.
23. IPV4: Internet Protocol Version 4.
24. IPV6: Internet Protocol Version 6.
25. WWW: World Wide Web.
26. IIS: Internet Information Services.
27. MAC: Media Access Control.
28. ISO: International Organization for Standardization.
29. CRC: Cyclic Redundancy Check.
30. LCP: Link Control Protocol.
31. IPCP: Internet Protocol Control Protocol.
32. CSMA: Carrier Sense Multiple Access.
33. CSMA-CA: Carrier Sense Multiple Access-Collision Avoidance.
34. CSMA-CD: Carrier Sense Multiple Access- Collision Detection.
35. IFS: Inter Frame Space.
36. NAV: Network Allocation Vector.
37. ISM band: Industrial Scientific and Medical Band.
38. ESS: Extended Service Set.
39. LLC: Logical Link Control.
40. ISN: Initial Sequence Number.
41. SCTP: Stream Control Transmission Protocol.
42. MIME: Multipurpose Internet Mail Extensions.
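As a small illustration of item 16 (DNS), Python's standard socket module can perform the hostname-to-IP resolution described above; the hostname below is only an example.

```python
import socket

# Resolve a human-readable hostname into a machine-readable IPv4 address (a DNS lookup).
# "www.example.com" is just an illustrative name; any resolvable hostname works.
hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```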
2. What is Congestion and QoS?
Network congestion in data networking and queuing theory is the reduced quality of service that occurs when a network node is carrying
more data than it can handle.
Quality of Service (QoS) refers to the capability of a network to provide better service to selected network traffic over various
technologies.
3. Policies to overcome Network Congestion.

I. Open Loop Congestion Control:


In open-loop congestion control, policies are applied to prevent congestion before it happens.
i. Retransmission Policy:
The sender retransmits a packet if it believes that the packet it has sent was lost or corrupted.
However, retransmission in general increases congestion in the network, so a good retransmission policy is needed to prevent that.
The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.
ii. Window Policy:
To implement the window policy, the Selective Reject window method is used for congestion control.
Selective Reject is preferred over the Go-Back-N window because in the Go-Back-N method, when the timer for a packet times out, several packets are resent even though some may have arrived safely at the receiver.
This duplication may make congestion worse.
The Selective Reject method resends only the specific lost or damaged packets.
iii. Acknowledgement Policy:
The acknowledgement policy imposed by the receiver may also affect congestion.
If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
Acknowledgments also add to the traffic load on the network.
Thus, by sending fewer acknowledgements we can reduce load on the network.
To implement it, several approaches can be used:
A receiver may send an acknowledgement only if it has a packet to be sent.
A receiver may send an acknowledgement when a timer expires.
A receiver may also decide to acknowledge only N packets at a time (a minimal delayed-acknowledgement sketch is given after this section).
iv. Discarding Policy:
A router may discard less sensitive packets when congestion is likely to happen.
Such a discarding policy may prevent congestion and at the same time may not harm the integrity of the transmission.
v. Admission Policy:
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual circuit networks.
Switches in a flow first check the resource requirements of a flow before admitting it to the network.
A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future
congestion.
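As a rough sketch of the acknowledgement policy described above (acknowledge only every N packets, or when a timer expires), the toy receiver below shows one possible shape of that logic; the class name, threshold, and timeout are illustrative assumptions, not part of any real protocol stack.

```python
import time

class DelayedAckReceiver:
    """Toy receiver: sends at most one ACK per N packets or per timeout interval."""

    def __init__(self, ack_every_n=3, ack_timeout=0.5):
        self.ack_every_n = ack_every_n        # acknowledge after this many packets...
        self.ack_timeout = ack_timeout        # ...or after this many seconds
        self.unacked = 0
        self.last_ack_time = time.monotonic()

    def receive(self, seq_no):
        self.unacked += 1
        now = time.monotonic()
        # Acknowledge only when enough packets have accumulated or the timer has expired,
        # which reduces the number of acknowledgement packets placed on the network.
        if self.unacked >= self.ack_every_n or now - self.last_ack_time >= self.ack_timeout:
            print(f"ACK up to packet {seq_no}")
            self.unacked = 0
            self.last_ack_time = now

receiver = DelayedAckReceiver()
for seq in range(1, 8):
    receiver.receive(seq)   # ACKs are printed only for every third packet
```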
II. Closed Loop Congestion Control:
Closed loop congestion control mechanisms try to remove the congestion after it happens.
i. Backpressure
Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow.

The backpressure technique can be applied only to virtual circuit networks. In such virtual circuit each node knows the upstream
node from which a data flow is coming.
In this method of congestion control, the congested node stops receiving data from the immediate upstream node or nodes.
This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.
For example, suppose node 3 is congested: it stops receiving packets and informs its upstream node 2 to slow down. Node 2 may in turn become congested and informs node 1 to slow down. Node 1 may then become congested and informs the source node to slow down. In this way the congestion is alleviated; the pressure on node 3 is moved backward to the source to remove the congestion.
ii. Choke Packet
In this method of congestion control, the congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.
Here, the congested node does not inform its upstream node about the congestion, as in the backpressure method.
Instead, it sends a warning directly to the source station, i.e. the intermediate nodes through which the packet has traveled are not warned.

iii. Implicit Signaling


In implicit signaling, there is no communication between the congested node or nodes and the source.
The source guesses that there is congestion somewhere in the network when it does not receive any acknowledgment. Therefore
the delay in receiving an acknowledgment is interpreted as congestion in the network.
On sensing this congestion, the source slows down.
This type of congestion control policy is used by TCP (a minimal timeout-based sketch is given after this section).
iv. Explicit Signaling
In this method, the congested nodes explicitly send a signal to the source or destination to inform about the congestion.
Explicit signaling is different from the choke packet method: in the choke packet method a separate packet is used for this purpose, whereas in the explicit signaling method the signal is included in the packets that carry data.
Explicit signaling can occur in either the forward direction or the backward direction.
In backward signaling, a bit is set in a packet moving in the direction opposite to the congestion. This bit warns the source about
the congestion and informs the source to slow down.
In forward signaling, a bit is set in a packet moving in the direction of congestion. This bit warns the destination about the
congestion. The receiver in this case uses policies such as slowing down the acknowledgements to remove the congestion.
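To make the implicit-signaling idea concrete, the toy sender below treats a missing acknowledgement (a timeout) as a congestion signal and halves its sending window, loosely in the spirit of TCP; the window sizes, loss probability, and function names are assumptions for illustration only.

```python
import random

def send_and_wait_for_ack():
    """Stand-in for a real transmission; randomly 'loses' some acknowledgements."""
    return random.random() > 0.3   # True means an ACK arrived before the timer expired

cwnd = 8.0   # current sending window in packets per round trip (illustrative value)
for round_trip in range(10):
    if send_and_wait_for_ack():
        cwnd += 1                   # ACK received: no congestion inferred, probe for more
    else:
        cwnd = max(1.0, cwnd / 2)   # timeout: implicitly signals congestion, so slow down
    print(f"round {round_trip}: window = {cwnd}")
```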
4. Policies to Implement QoS:
1. Scheduling:
Packets from different flows arrive at a switch or router for processing. A good scheduling technique treats the different flows
in a fair and appropriate manner. Some of the scheduling techniques used to improve QoS are,

FIFO Queuing: In this queuing technique, arriving packets are stored and served on a first-come, first-served basis. If the arrival rate is higher than the processing rate, the queue fills up, and new arriving packets find no space in the queue and are discarded.
Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first; packets in a lower-priority queue are processed only after the higher-priority queues are empty.
Weighted Fair Queuing: In the weighted fair queuing technique, the packets are still assigned to different classes and admitted to different queues. However, the queues are weighted based on their priority (higher priority means a higher weight). The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
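The weighted round-robin service described for weighted fair queuing can be sketched as follows; the queue names, weights, and packet labels are invented for the example.

```python
from collections import deque

# Three priority classes, each with its own queue and a weight.
queues = {
    "high":   deque(["h1", "h2", "h3", "h4"]),
    "medium": deque(["m1", "m2"]),
    "low":    deque(["l1", "l2", "l3"]),
}
weights = {"high": 3, "medium": 2, "low": 1}

# Serve the queues round robin; in each round a queue may send up to `weight` packets.
while any(queues.values()):
    for name, q in queues.items():
        for _ in range(weights[name]):
            if q:
                print("processing", q.popleft())
```

With these weights the high class receives roughly three packets of service for every one packet of the low class, which is exactly the proportional treatment described above.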

2. Traffic Shaping:
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. There are two techniques under this
mechanism.
Leaky Bucket: A leaky bucket smooths bursty traffic into a fixed-rate output. If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each clock tick. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
Token Bucket: The leaky bucket algorithm outputs data at an average rate regardless of bursts, but it does not take into account the time when the host was idle. The token bucket algorithm allows an idle host to accumulate credit for the future in the form of tokens, which it can later spend to send a burst of data.
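A minimal leaky-bucket sketch for fixed-size packets is given below; the output rate and the bursty arrival pattern are assumed values for illustration.

```python
from collections import deque

OUTPUT_RATE = 2            # packets drained from the bucket per clock tick (assumed)
arrivals = {0: 5, 2: 3}    # bursty input: packets arriving at each tick (assumed)
bucket = deque()

for tick in range(6):
    for i in range(arrivals.get(tick, 0)):
        bucket.append(f"pkt{tick}.{i}")   # a burst enters the queue (the bucket)
    drained = min(OUTPUT_RATE, len(bucket))
    for _ in range(drained):              # but packets leak out at a fixed rate
        bucket.popleft()
    print(f"tick {tick}: sent {drained} packet(s), {len(bucket)} still queued")
```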

3. Admission Control: It is a mechanism used by networking devices such as routers and switches to accept or reject a flow based on predefined parameters called the flow specification. Before a router accepts a flow for processing, it checks the flow specification to see whether its capacity and its previous commitments to other flows can handle the new flow.

4. Resource Reservation: A flow of data needs resources such as buffers, bandwidth, CPU time, and so on. QoS is improved if these resources are reserved beforehand.
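The admission-control check in point 3 boils down to comparing a flow's requested resources with the remaining capacity; the capacity and bandwidth figures below are invented for the example.

```python
LINK_CAPACITY_MBPS = 100.0   # total capacity of the outgoing link (assumed)
committed_mbps = 70.0        # bandwidth already promised to previously admitted flows (assumed)

def admit(flow_spec_mbps):
    """Accept a new flow only if previous commitments plus the request fit within capacity."""
    global committed_mbps
    if committed_mbps + flow_spec_mbps <= LINK_CAPACITY_MBPS:
        committed_mbps += flow_spec_mbps
        return True
    return False

print(admit(20.0))   # True:  70 + 20 <= 100, the flow is admitted
print(admit(15.0))   # False: 90 + 15 >  100, the flow is rejected
```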

5. Write a short note on IP Address, MAC Address, Socket Address and IPv6 Address.

IP Address:
The IP address is an address bound to the network device, i.e., computer, via software.
Example: 192.168.188.2
MAC Address:
The MAC address is a hardware address, which means it is unique to the network card installed on your PC.
Example: DE-56-0A-DC-E6-88
Socket Address: A socket address is the combination of an IP address and a port number.
IPv6 Address:
An IPv6 address is represented as eight groups of four hexadecimal digits, each group representing 16 bits (two octets; a group is sometimes also called a hextet). The groups are separated by colons (:).
Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
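These address notions can be explored with Python's standard socket-address tuple convention and the ipaddress module; the host, port, and addresses below are the example values from this note.

```python
import ipaddress

# A socket address is simply the combination (IP address, port number).
socket_address = ("192.168.188.2", 8080)
print(socket_address)

# Parse the example IPv6 address; Python prints the compressed form and can expand it again.
ipv6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(ipv6)            # 2001:db8:85a3::8a2e:370:7334
print(ipv6.exploded)   # full eight-hextet form
```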

6. How to calculate directed broadcast and limited broadcast IP address?

Directed broadcast IP address: set all the host bits to 1.
Example: 10.255.255.220 is a Class A address, so its directed broadcast address is 10.255.255.255.
Limited broadcast IP address: set all 32 bits to 1, which gives 255.255.255.255. It is the same for every network and reaches all hosts on the local network only; routers do not forward it.
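The same calculation can be checked with Python's ipaddress module: a network's broadcast_address has all host bits set to 1 (the directed broadcast), while the limited broadcast address is always 255.255.255.255.

```python
import ipaddress

# Class A address 10.255.255.220 with the default Class A mask /8.
network = ipaddress.ip_network("10.255.255.220/8", strict=False)
print(network.broadcast_address)                 # 10.255.255.255 -> directed broadcast (host bits all 1)
print(ipaddress.ip_address("255.255.255.255"))   # limited broadcast, identical for every network
```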
7. Special IP address:
1. This host.
Network bits = 0, Host bits = 0.
Example: 0.0.0.0
2. Specific host on this network.
Network bits = 0, Host bits = the specific host.
3. Loopback.
Loopback is used to test the networking software on the local machine, i.e. the sending and receiving sides are on the same host.
Network part = 127, Host part = anything.
Example: 127.162.198.255
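Python's ipaddress module can classify these special addresses directly; the values below are the examples used in this list.

```python
import ipaddress

print(ipaddress.ip_address("0.0.0.0").is_unspecified)        # True: the "this host" address
print(ipaddress.ip_address("127.162.198.255").is_loopback)   # True: any 127.x.x.x address is loopback
```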

8. What is ping, pathping, traceroute command?

PING: The ping command helps to verify IP-level connectivity to a destination host by sending ICMP Echo Request messages and reporting whether Echo Replies come back.

PATHPING: Pathping is a TCP/IP based utility (command-line tool) that provides useful information about network latency and network
loss at intermediate hops between a source address and a destination address.

TRACEROUTE: Traceroute traces the path (the sequence of routers) that packets take from a source host to a destination host. It allows us to diagnose the source of many connectivity problems.
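As one way to run such a connectivity check programmatically, the sketch below invokes the system ping command from Python; it assumes a Unix-like system where ping accepts -c for the packet count (Windows uses -n instead), and the target address is only an example.

```python
import subprocess

# Send four echo requests to the example target and print ping's own report.
result = subprocess.run(["ping", "-c", "4", "8.8.8.8"], capture_output=True, text=True)
print(result.stdout)
```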

9. What is default gateway?

A default gateway serves as an access point or IP router that a networked computer uses to send information to a computer in another network or on the internet.
