Unit 5
• Raw sockets let us read and write ICMPv4, IGMPv4, and ICMPv6
packets. The ping program, for example, sends ICMP echo requests and
receives ICMP echo replies
• With a raw socket, a process can read and write IPv4 datagrams with an
IPv4 protocol field that is not processed by the kernel
• With a raw socket, a process can build its own IPv4 header using the
IP_HDRINCL socket option
RAW SOCKET CREATION
• The socket function creates a raw socket when the second argument is SOCK_RAW. The
third argument (the protocol) is normally nonzero
• The IP_HDRINCL socket option can be set on the raw socket if the process wants to build its
own IPv4 header for the datagrams it sends (a short sketch of creating the socket and setting
this option follows this list)
• bind can be called on the raw socket, but this is rare. This function sets only the local address
• connect can be called on the raw socket, but this is rare. This function sets only the foreign
address
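A minimal sketch of the points above, assuming an IPv4/ICMP raw socket on a Unix-like system
(creating a raw socket normally requires superuser privileges):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* Second argument SOCK_RAW, third argument a nonzero protocol */
        int sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
        if (sockfd < 0) {
            perror("socket");   /* typically fails unless run with sufficient privileges */
            return 1;
        }

        /* Set IP_HDRINCL only if the process will build its own IPv4 header */
        const int on = 1;
        if (setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on)) < 0)
            perror("setsockopt IP_HDRINCL");

        close(sockfd);
        return 0;
    }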
RAW SOCKET OUTPUT
• If the IP_HDRINCL option is not set, the starting address of the data for the kernel to
send specifies the first byte following the IP header
• If the IP_HDRINCL option is set, the starting address of the data for the kernel to send
specifies the first byte of the IP header.
• The kernel fragments raw packets that exceed the outgoing interface MTU.
PING PROGRAM
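As a rough illustration of the sending side of ping (not the actual ping source), the following
sketch builds an ICMPv4 echo request by hand and writes it to a raw socket; the hand-rolled
icmp_echo structure, the in_cksum helper, and the 127.0.0.1 destination are simplifications for
this example. Because IP_HDRINCL is not set, the kernel builds the IP header and the data handed
to sendto starts at the ICMP header.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Minimal ICMPv4 echo-request header, laid out by hand for illustration */
    struct icmp_echo {
        unsigned char  type;        /* 8 = echo request */
        unsigned char  code;        /* 0 */
        unsigned short checksum;
        unsigned short id;
        unsigned short seq;
    };

    /* Standard Internet checksum over the ICMP message */
    static unsigned short in_cksum(const unsigned short *addr, int len)
    {
        unsigned long sum = 0;
        while (len > 1) { sum += *addr++; len -= 2; }
        if (len == 1) sum += *(const unsigned char *)addr;
        sum = (sum >> 16) + (sum & 0xffff);
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }

    int main(void)
    {
        int sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
        if (sockfd < 0) { perror("socket (needs root)"); return 1; }

        struct icmp_echo req;
        memset(&req, 0, sizeof(req));
        req.type = 8;                               /* ICMP echo request */
        req.id   = htons((unsigned short)getpid());
        req.seq  = htons(1);
        req.checksum = in_cksum((unsigned short *)&req, sizeof(req));

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

        /* IP_HDRINCL is not set, so the data passed to the kernel begins
         * with the first byte following the IP header (the ICMP header) */
        if (sendto(sockfd, &req, sizeof(req), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto");

        close(sockfd);
        return 0;
    }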
DATALINK ACCESS
Providing access to the datalink layer for an application is a powerful feature that is available with
most current operating systems. This provides the following capabilities:
• The ability to watch the packets received by the datalink layer, allowing programs such
as tcpdump to be run on normal computer systems
• This ability is less useful in switched networks, which have become quite common. This is because
the switch only passes traffic to a port if it is addressed to the device or devices attached to that port
(unicast, multicast, or broadcast).
• The ability to run certain programs as normal applications instead of as part of the kernel.
PACKET CAPTURE USING BPF
DATALINK PROVIDER INTERFACE (DLPI)
LINUX: SOCK_PACKET AND PF_PACKET
There are two methods of receiving packets from the datalink layer under Linux.
• The original method, which is more widely available but less flexible, is to create a socket of
type SOCK_PACKET.
• The newer method, which introduces more filtering and performance features, is to create a
socket of family PF_PACKET.
To do either, we must have sufficient privileges (similar to creating a raw socket), and the third
argument to socket must be a nonzero value specifying the Ethernet frame type.
When using PF_PACKET sockets, the second argument to socket can be SOCK_DGRAM, for
"cooked" packets with the link-layer header removed, or SOCK_RAW, for the complete link-layer
packet. SOCK_PACKET sockets only return the complete link layer packet.
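A minimal sketch of the newer PF_PACKET method, assuming a Linux system; the program must run
with sufficient privileges, and ETH_P_ALL asks for frames of every type:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>        /* htons */
    #include <linux/if_ether.h>   /* ETH_P_ALL */

    int main(void)
    {
        /* SOCK_RAW returns the complete link-layer frame;
         * SOCK_DGRAM would return "cooked" packets with the header removed */
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        unsigned char frame[2048];
        ssize_t n = recv(fd, frame, sizeof(frame), 0);
        if (n > 0)
            printf("received a %zd-byte frame\n", n);

        close(fd);
        return 0;
    }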
LIBPCAP: PACKET CAPTURE LIBRARY
• Support currently exists for BPF under Berkeley-derived kernels, DLPI under HP-UX
and Solaris 2.x, NIT under SunOS 4.1.x, the Linux SOCK_PACKET and PF_PACKET
sockets, and a few other operating systems. This library is used by tcpdump.
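For illustration, a small capture loop using the libpcap API; the interface name "eth0", the
snapshot length, and the packet count are placeholder values:

    #include <stdio.h>
    #include <pcap/pcap.h>

    /* Called by pcap_loop() for every captured packet */
    static void on_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
    {
        (void)user; (void)bytes;
        printf("captured %u bytes (original length %u)\n", h->caplen, h->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* snaplen 65535, promiscuous mode on, 1000 ms read timeout */
        pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (handle == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        pcap_loop(handle, 10, on_packet, NULL);   /* stop after 10 packets */
        pcap_close(handle);
        return 0;
    }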
LIBNET: PACKET CREATION AND INJECTION LIBRARY
• libnet provides an interface to craft and inject arbitrary packets into the network. It
provides both raw socket and datalink access modes in an implementation-independent
manner.
• The library hides many of the details of crafting the IP and UDP or TCP headers, and
provides simple and portable access to writing datalink and raw packets. As with libpcap,
the library is made up of quite a number of functions.
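A hedged sketch of raw-socket-mode injection, assuming the libnet 1.1 style of the API; the
addresses, ports, and payload are placeholders, and headers are built from the innermost protocol
(UDP) outward, as libnet expects:

    #include <stdio.h>
    #include <libnet.h>

    int main(void)
    {
        char errbuf[LIBNET_ERRBUF_SIZE];
        uint8_t payload[] = "hello";

        /* Raw-socket injection mode: libnet builds and sends the IP datagram */
        libnet_t *l = libnet_init(LIBNET_RAW4, NULL, errbuf);
        if (l == NULL) {
            fprintf(stderr, "libnet_init: %s\n", errbuf);
            return 1;
        }

        uint32_t src = libnet_name2addr4(l, "192.0.2.1", LIBNET_DONT_RESOLVE);
        uint32_t dst = libnet_name2addr4(l, "192.0.2.2", LIBNET_DONT_RESOLVE);

        /* UDP header first, then the IPv4 header that encapsulates it;
         * checksum arguments of 0 ask libnet to compute them */
        libnet_build_udp(12345, 54321, LIBNET_UDP_H + sizeof(payload), 0,
                         payload, sizeof(payload), l, 0);
        libnet_build_ipv4(LIBNET_IPV4_H + LIBNET_UDP_H + sizeof(payload),
                          0, 0, 0, 64, IPPROTO_UDP, 0,
                          src, dst, NULL, 0, l, 0);

        if (libnet_write(l) == -1)
            fprintf(stderr, "libnet_write: %s\n", libnet_geterror(l));

        libnet_destroy(l);
        return 0;
    }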
STREAMS
The stream head consists of the kernel routines that are invoked when the application makes a
system call for a STREAMS descriptor (e.g., read, putmsg, ioctl, and the like).
Any number of modules can be pushed onto a stream. When we say "push," we mean that each new
module gets inserted just below the stream head.
A special type of pseudo-device driver is a multiplexor, which accepts data from multiple
sources. A STREAMS-based implementation of the TCP/IP protocol suite, as found on SVR4, is built
around such multiplexors.
STREAMS MESSAGE TYPES GENERATED BY WRITE AND PUTMSG
GETMSG AND PUTMSG FUNCTIONS
• int getmsg(int fd, struct strbuf *ctlptr, struct strbuf *dataptr, int *flagsp);
• int putmsg(int fd, const struct strbuf *ctlptr, const struct strbuf *dataptr, int flags);
A user can send only control information, only data, or both using putmsg. To indicate the absence
of control information, specify ctlptr as a null pointer or set ctlptr->len to –1. The same
technique is used to indicate no data.
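A minimal sketch, assuming a system that still provides STREAMS (e.g., Solaris) and a descriptor
that already refers to an open STREAMS device:

    #include <stdio.h>
    #include <string.h>
    #include <stropts.h>

    /* Send a data-only message down a STREAMS descriptor and read one reply */
    int echo_over_stream(int fd)
    {
        char out[] = "hello", in[128];
        struct strbuf data;
        int flags = 0;

        /* No control part: pass a null ctlptr (or set ctlptr->len to -1) */
        data.buf = out;
        data.len = sizeof(out);
        if (putmsg(fd, NULL, &data, 0) < 0) {
            perror("putmsg");
            return -1;
        }

        data.buf = in;
        data.maxlen = sizeof(in);
        if (getmsg(fd, NULL, &data, &flags) < 0) {
            perror("getmsg");
            return -1;
        }
        printf("received %d bytes of data\n", data.len);
        return 0;
    }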
#include <stropts.h>
int ioctl(int fd, int request, ... /* void *arg */ );
The only change from the usual ioctl function prototype is the header that must be included when
dealing with STREAMS.
There are about 30 requests that affect a stream head. Each request begins with I_, and they are
normally documented on the streamio man page
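For illustration, one of the most common stream-head requests is I_PUSH; a minimal sketch (the
module name is a placeholder for whatever STREAMS module the stream should gain):

    #include <stdio.h>
    #include <stropts.h>

    /* Push the named module just below the stream head */
    int push_module(int fd, const char *modname)
    {
        if (ioctl(fd, I_PUSH, modname) < 0) {
            perror("ioctl I_PUSH");
            return -1;
        }
        return 0;
    }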
REMOTE PROCEDURE CALLS
• Many distributed systems have been based on explicit message exchange between
processes. However, the procedures send and receive do not conceal communication,
which is important to achieve access transparency in distributed systems.
• Information can be transported from the caller to the callee in the parameters and can
come back in the procedure result. No message passing at all is visible to the
programmer. This method is known as Remote Procedure Call, or often just RPC
STEPS IN RPC
• The client procedure calls the client stub in the normal way.
• The client stub builds a message and calls the local operating system.
• The client’s OS sends the message to the remote OS.
• The remote OS gives the message to the server stub.
• The server stub unpacks the parameters and calls the server.
• The server does the work and returns the result to the stub.
• The server stub packs it in a message and calls its local OS.
• The server’s OS sends the message to the client’s OS.
• The client’s OS gives the message to the client stub.
• The stub unpacks the result and returns to the client
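As a rough, purely illustrative sketch of the stub idea (not the Sun RPC API), the following
hypothetical client stub for a remote add(a, b) marshals the two arguments into a message, asks
the local OS to send it to the server, blocks for the reply, and unmarshals the result; the port
number and addresses are made up for the example.

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Hypothetical client stub: looks like a normal procedure to its caller */
    int remote_add(const char *server_ip, int a, int b)
    {
        int sockfd = socket(AF_INET, SOCK_DGRAM, 0);
        if (sockfd < 0)
            return -1;

        struct sockaddr_in srv;
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port = htons(9999);            /* made-up port for the example */
        inet_pton(AF_INET, server_ip, &srv.sin_addr);

        /* Marshal the parameters into a network-byte-order message */
        uint32_t msg[2] = { htonl((uint32_t)a), htonl((uint32_t)b) };
        sendto(sockfd, msg, sizeof(msg), 0, (struct sockaddr *)&srv, sizeof(srv));

        /* Wait for the server stub's reply and unmarshal the result */
        uint32_t reply = 0;
        recvfrom(sockfd, &reply, sizeof(reply), 0, NULL, NULL);
        close(sockfd);
        return (int)ntohl(reply);
    }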
RPC DOORS
• The two calls that support server multithreading are rpc_control() and svc_done(). The
rpc_control() call is used to set the MT mode, either Auto or User mode.
• If the server uses Auto mode, it does not need to invoke svc_done() at all. In User mode,
svc_done() must be invoked after each client request is processed so that the server can reclaim
the resources from processing the request. In addition, multithreaded RPC servers must call
svc_run(). Note that svc_getreqpoll() and svc_getreqset() are unsafe in MT applications.
• If the server program does not invoke any of the MT interface calls, it remains in single-
threaded mode, which is the default mode.
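A sketch assuming the Solaris TI-RPC MT interface (rpc_control(), RPC_SVC_MTMODE_SET,
RPC_SVC_MT_AUTO): the server switches into Auto mode before svc_run(), so the library manages
threads itself and svc_done() never needs to be called explicitly.

    #include <stdio.h>
    #include <rpc/rpc.h>

    int main(void)
    {
        int mode = RPC_SVC_MT_AUTO;

        /* Select Auto MT mode; the default (no call) is single-threaded mode */
        if (!rpc_control(RPC_SVC_MTMODE_SET, &mode)) {
            fprintf(stderr, "rpc_control: cannot set Auto mode\n");
            return 1;
        }

        /* ... create and register the service transports here ... */

        svc_run();    /* dispatch loop; normally never returns */
        return 0;
    }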