Q.1) A Telecommunication Network Is a Collection of Diverse Media Supporting Communication Between End-Points. Explain in Detail
Key Management
It goes without saying that the security of any cryptosystem depends upon how securely its keys are
managed. Without secure procedures for the handling of cryptographic keys, the benefits of the use of
strong cryptographic schemes are potentially lost.
In practice, cryptographic schemes are rarely compromised through weaknesses in their design; they are far more often compromised through poor key management.
There are some important aspects of key management which are as follows −
Cryptographic keys are nothing but special pieces of data. Key management refers to the secure
administration of cryptographic keys.
Key management deals with the entire key lifecycle: the generation, distribution, use, storage, and eventual destruction of keys.
There are two specific requirements of key management for public key cryptography.
Secrecy of private keys. Throughout the key lifecycle, secret keys must remain secret from all
parties except those who own them and are authorized to use them.
Assurance of public keys. In public key cryptography, public keys are in the open domain and are seen as
public pieces of data. By default there are no assurances of whether a public key is correct, with whom it
can be associated, or what it can be used for. Thus key management of public keys needs to focus much
more explicitly on assurance of the purpose of public keys.
The most crucial requirement, ‘assurance of public keys’, can be achieved through a public-key
infrastructure (PKI), a key management system for supporting public-key cryptography.
Public Key Infrastructure (PKI)
PKI provides assurance of public keys. It provides the identification of public keys and their distribution.
The anatomy of a PKI comprises the following components.
Public Key Certificate, commonly referred to as ‘digital certificate’.
Private Key tokens.
Certification Authority.
Registration Authority.
Certificate Management System.
Digital Certificate
By analogy, a certificate can be considered the ID card issued to a person. People use ID cards such
as a driver's license or passport to prove their identity. A digital certificate does the same basic thing in the
electronic world, but with one difference.
Digital certificates are not only issued to people; they can be issued to computers, software packages,
or anything else that needs to prove its identity in the electronic world.
Digital certificates are based on the ITU standard X.509, which defines a standard format for
public key certificates and for certification path validation. Hence digital certificates are sometimes also referred
to as X.509 certificates.
The client's public key is stored in the digital certificate by the Certification Authority (CA),
along with other relevant information such as client details, expiration date, intended usage, and issuer. The CA
digitally signs this entire information and includes the digital signature in the certificate.
Anyone who needs assurance about a client's public key and its associated information carries out
the signature validation process using the CA's public key. Successful validation assures that the public key
given in the certificate belongs to the person whose details appear in the certificate. To obtain a digital
certificate, a person or entity applies to a CA; the CA, after duly verifying the identity of the client, issues a
digital certificate to that client.
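As a concrete illustration (not part of the original text), the following minimal Python sketch reads a certificate and prints the signed fields described above. It assumes the third-party cryptography package and a hypothetical file name client_cert.pem.

# A minimal sketch, assuming the third-party "cryptography" package.
# It loads a PEM-encoded X.509 certificate and prints the fields the CA
# embeds and signs: subject (client details), issuer, validity period, public key.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("client_cert.pem", "rb") as f:             # hypothetical file name
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())     # whom the certificate identifies
print("Issuer :", cert.issuer.rfc4514_string())      # the CA that signed it
print("Valid  :", cert.not_valid_before, "to", cert.not_valid_after)
print(cert.public_key().public_bytes(                # the certified public key
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo).decode())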
Certifying Authority (CA)
As discussed above, the CA issues certificates to clients and assists other users in verifying those certificates.
The CA takes responsibility for correctly identifying the client asking for a certificate to be
issued, ensures that the information contained within the certificate is correct, and digitally signs it.
Key Functions of CA
The key functions of a CA are as follows −
Generating key pairs − The CA may generate a key pair independently or jointly with the client.
Issuing digital certificates − The CA could be thought of as the PKI equivalent of a passport agency:
it issues a certificate after the client provides the credentials needed to confirm his identity. The CA then signs
the certificate to prevent modification of the details it contains (a code sketch of this step follows this list).
Publishing Certificates − The CA needs to publish certificates so that users can find them. There are two
ways of achieving this. One is to publish certificates in the equivalent of an electronic telephone
directory. The other is to send the certificate out to those people who might need it, by one means
or another.
Verifying Certificates − The CA makes its public key available in the environment to assist verification of
its signature on clients’ digital certificates.
Revocation of Certificates − At times, the CA revokes a certificate it has issued, for reasons such as
compromise of the user's private key or loss of trust in the client. After revocation, the CA maintains a list of
all revoked certificates that is available to the environment.
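The sketch below illustrates the ‘generating key pairs’ and ‘issuing digital certificates’ functions using the Python cryptography package. It is only a hedged illustration: the CA and client names are made up, and a real CA would verify the client’s identity and add extensions before signing.

# A minimal sketch, assuming the "cryptography" package; all names are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Key pairs may be generated by the CA, by the client, or jointly.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
client_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "client@example.com")])

now = datetime.datetime.utcnow()
client_cert = (
    x509.CertificateBuilder()
    .subject_name(client_name)                            # the verified client identity
    .issuer_name(ca_name)                                 # the issuing CA
    .public_key(client_key.public_key())                  # client's public key
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))  # expiration date
    .sign(ca_key, hashes.SHA256())                        # CA signature prevents tampering
)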
Classes of Certificates
There are four typical classes of certificate −
Class 1 − These certificates can be easily acquired by supplying an email address.
Class 2 − These certificates require additional personal information to be supplied.
Class 3 − These certificates can only be purchased after checks have been made on the requestor’s
identity.
Class 4 − These may be used by governments and financial organizations needing very high levels of trust.
Registration Authority (RA)
A CA may use a third-party Registration Authority (RA) to perform the necessary checks on the person or
company requesting the certificate and to confirm their identity. The RA may appear to the client as a CA, but
it does not actually sign the certificate that is issued.
Certificate Management System (CMS)
It is the management system through which certificates are published, temporarily or permanently
suspended, renewed, or revoked. Certificate management systems do not normally delete certificates,
because it may be necessary to prove their status at a point in time, perhaps for legal reasons. A CA,
along with its associated RA, runs a certificate management system to be able to track its responsibilities and
liabilities.
Private Key Tokens
While the public key of a client is stored in the certificate, the associated private key can be stored
on the key owner’s computer. However, this method is generally not adopted: if an attacker gains access to the
computer, he can easily gain access to the private key. For this reason, the private key is instead stored on a
secure removable storage token, access to which is protected by a password.
Different vendors often use different and sometimes proprietary storage formats for storing keys. For
example, Entrust uses the proprietary .epf format, while Verisign, Global Sign, and Baltimore use the
standard .p12 format.
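The following sketch shows the password-protection idea in Python with the cryptography package; it writes the private key in encrypted PKCS#8 PEM form (a .p12 token would additionally bundle the certificate). The pass phrase and file name are illustrative assumptions.

# A minimal sketch, assuming the "cryptography" package; the pass phrase and
# file name are illustrative. The key is never written to storage in the clear.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

encrypted = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"token pass phrase"),
)

with open("client_key.pem", "wb") as f:   # in practice, a path on the removable token
    f.write(encrypted)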
Hierarchy of CA
With vast networks and the requirements of global communication, it is practically not feasible to have only
one trusted CA from whom all users obtain their certificates. Secondly, the availability of only one CA may
lead to difficulties if that CA is compromised.
In such cases, the hierarchical certification model is of interest, since it allows public key certificates to be
used in environments where two communicating parties do not have trust relationships with the same CA.
The root CA is at the top of the CA hierarchy and the root CA's certificate is a self-signed certificate.
The CAs, which are directly subordinate to the root CA (For example, CA1 and CA2) have CA
certificates that are signed by the root CA. The CAs under the subordinate CAs in the hierarchy (For
example, CA5 and CA6) have their CA certificates signed by the higher-level subordinate CAs.
Certificate authority (CA) hierarchies are reflected in certificate chains. A certificate chain traces a path of
certificates from a branch in the hierarchy to the root of the hierarchy. The following illustration shows a
CA hierarchy with a certificate chain leading from an entity certificate through two subordinate CA
certificates (CA6 and CA3) to the CA certificate for the root CA.
Verifying a certificate chain is the process of ensuring that a specific certificate chain is valid, correctly
signed, and trustworthy. The following procedure verifies a certificate chain, beginning with the certificate
that is presented for authentication −
A client whose authenticity is being verified supplies his certificate, generally along with the
chain of certificates up to the root CA.
The verifier takes the certificate and validates it using the public key of the issuer. The issuer’s public key is
found in the issuer’s certificate, which appears next to the client’s certificate in the chain.
If the higher-level CA that signed the issuer’s certificate is trusted by the verifier,
verification is successful and stops here.
Otherwise, the issuer's certificate is verified in the same manner as the client's certificate in the steps above.
This process continues until either a trusted CA is found along the chain or the root CA is reached
(a code sketch of this walk follows).
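The steps above can be sketched in code. The following hedged Python fragment, assuming RSA certificates and the cryptography package, only checks signatures along the chain; a real validator would also check validity dates, revocation status, and certificate extensions.

# A minimal sketch, assuming RSA certificates loaded with the "cryptography"
# package; "chain" is ordered from the client certificate up towards the root.
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(chain, trusted_certs):
    """Walk the chain, validating each certificate with its issuer's public key,
    and stop as soon as a CA certificate trusted by the verifier is reached."""
    for cert, issuer in zip(chain, chain[1:]):
        issuer.public_key().verify(          # raises InvalidSignature on failure
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
        if issuer in trusted_certs:          # trusted CA found: verification succeeds
            return True
    return chain[-1] in trusted_certs        # otherwise trust depends on the root CA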
End-to-End Principle
According to the end-to-end principle, protocol features are only justified in the lower layers of a system
if they are a performance optimization; hence, Transmission Control Protocol (TCP) retransmission for
reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has
been reached.
The concept and research of end-to-end connectivity and of network intelligence at the end nodes reach
back to the packet-switching networks of the 1970s, cf. CYCLADES. A 1981 presentation entitled End-to-
end arguments in system design, by Jerome H. Saltzer, David P. Reed, and David D. Clark, argued that
reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing
in the intermediate system. They pointed out that most features in the lowest level of a communications
system have costs for all higher-layer clients, even if those clients do not need the features, and are
redundant if the clients have to implement the features on an end-to-end basis.
This leads to the model of a dumb, minimal network with smart terminals, a completely different model
from the previous paradigm of the smart network with dumb terminals.
In 1995, the Federal Networking Council adopted a resolution defining the Internet as a “global
information system” that is logically linked together by a globally unique address space based on the
Internet Protocol (IP) or its subsequent extensions/follow-ons; is able to support communications using
the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-
ons, and/or other IP-compatible protocols; and provides, uses or makes accessible, either publicly or
privately, high level services layered on this communications and related infrastructure.
In the Internet Protocol Suite, the Internet Protocol is a simple (“dumb”), stateless protocol that moves
datagrams across the network, and TCP is a smart transport protocol providing error detection,
retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs
only to support the simple, lightweight IP; the endpoints run the heavier TCP on top of it when needed.
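To make this division of labor concrete, here is a hedged sketch (not from the source) of end-host reliability: a stop-and-wait sender over UDP sockets. The network only forwards datagrams; timeout detection and retransmission live entirely in the endpoint, which is the role TCP plays with far greater sophistication.

# A minimal sketch: reliability implemented at the endpoint over a best-effort
# datagram service. The address, timeout, and retry count are illustrative.
import socket

def send_reliably(data: bytes, addr, timeout=0.5, retries=5) -> bool:
    """Send one datagram and wait for an ACK, retransmitting on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(data, addr)           # the network just moves the datagram
            try:
                ack, _ = sock.recvfrom(16)    # the endpoint waits for the receiver's ACK
                if ack == b"ACK":
                    return True
            except socket.timeout:
                pass                          # no ACK in time: the endpoint retransmits
    return False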
A second canonical example is that of file transfer. Every reliable file transfer protocol and file transfer
program should contain a checksum, which is validated only after everything has been successfully stored
on disk. Disk errors, router errors, and file transfer software errors make an end-to-end checksum
necessary. Therefore, there is a limit to how strong the TCP checksum needs to be, because the integrity
check has to be reimplemented end-to-end by any robust application anyway.
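A hedged sketch of such a check: the receiver recomputes a digest from the bytes actually stored on disk and compares it with the digest supplied by the sender (how the digest travels is left out as an assumption).

# A minimal sketch of an end-to-end integrity check: the digest is recomputed
# from the file as stored on disk, so it also catches disk and software errors
# that a per-hop checksum such as TCP's cannot see.
import hashlib

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            h.update(block)
    return h.hexdigest()

def transfer_ok(received_path: str, sender_digest: str) -> bool:
    return file_digest(received_path) == sender_digest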
A third example (not from the original paper) is the EtherType field of Ethernet. An Ethernet frame does
not attempt to provide an interpretation for the 16 bits of type in the original Ethernet packet. To add special
interpretation to some of these bits would reduce the total number of EtherTypes, hurting the scalability of
higher-layer protocols; i.e., all higher-layer protocols would pay a price for the benefit of just a few.
Attempts to add elaborate interpretation (e.g. IEEE 802 SSAP/DSAP) have generally been ignored by
most network designs, which follow the end-to-end principle.
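As an illustrative sketch (again not from the original paper), the fragment below extracts the 16-bit EtherType from a raw Ethernet II frame and dispatches on it without interpreting the bits further; the handler table is a stand-in for real higher-layer parsers.

# A minimal sketch: the link layer only extracts the 16-bit EtherType; what the
# value means is left entirely to the higher-layer protocol registered for it.
import struct

HANDLERS = {
    0x0800: "IPv4",      # stand-ins for real protocol parsers
    0x0806: "ARP",
    0x86DD: "IPv6",
}

def dispatch(frame: bytes) -> str:
    # Ethernet II layout: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return HANDLERS.get(ethertype, f"unknown EtherType 0x{ethertype:04x}")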