hw3solutions
Homework 3
Due: April 15, 2008, 1:30 PM
Lead TA: Xin Zhang (xzhang1@cs.cmu.edu)
April 8, 2008
Solution: 8000 bytes. In slow start, the sender increases its window for each byte successfully
received.
Solution: 5000 bytes. The sender increases its window by one segment each window.
B Label Switching
You are trying to debug a problem with your company’s virtual circuit-based network. A diagram of the network
is shown below. A, B, and C are hosts attached to the network. S1, S2, and S3 are switches configured to act as
label swapping virtual circuit switches.
[Network diagram: switches S1, S2, and S3 are interconnected, each with ports numbered 1-4; host A attaches to S1, host B to S2, and host C to S3.]
The label swapping tables for the switches are configured as follows. Some of the entries are stale and not
actually in use right now.
Switch   Input Port   Input Label   Output Port   Output Label
S1       2            2             3             4
S1       4            2             3             1
S1       4            17            2             2
S2       2            19            4             2
S2       3            1             2             19
S2       3            2             2             15
S2       3            5             4             2
S2       4            2             2             1
S2       4            1             4             1
S3       2            1             1             2
S3       2            2             4             5
S3       4            1             1             1
S3       4            4             1             5
Write the sequence of (Switch, Input Port, Input Label) tuples and the destination node and label for
each of these packets. We’ve given you the start node and starting label. The intermediate tuples should
look like (S1, 1, 999) [e.g., switch S1, input port 1, label 999].
(a) Start node A, label 17. Switch tuples:
Solution: B, 1
Solution: B, 19
Solution: B, 15
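The label-swapping lookup these questions exercise can be sketched in a few lines. The forwarding entries below are taken from the table above, but the link map (which switch port connects to which neighbor) is an assumption for illustration, since the printed diagram is ambiguous; it is not claimed to be the problem's exact topology.

```python
# Sketch of label-swapping (MPLS-style) forwarding.
# Forwarding entries come from the problem's table; the LINKS map is a
# hypothetical wiring chosen for illustration, not the graded topology.

# (switch, in_port, in_label) -> (out_port, out_label)
TABLE = {
    ("S1", 4, 17): (2, 2),
    ("S2", 3, 2):  (2, 15),
}

# (switch, out_port) -> next hop: ("switch", name, neighbor_in_port) or ("host", name)
LINKS = {
    ("S1", 2): ("switch", "S2", 3),
    ("S2", 2): ("host", "B"),
}

def trace(switch, in_port, label):
    """Follow one packet hop by hop.

    Returns ([(switch, in_port, in_label), ...], destination_host, final_label).
    """
    hops = []
    while True:
        hops.append((switch, in_port, label))
        out_port, label = TABLE[(switch, in_port, label)]   # swap the label
        kind, name, *rest = LINKS[(switch, out_port)]
        if kind == "host":
            return hops, name, label
        switch, in_port = name, rest[0]                      # next switch

hops, dest, final = trace("S1", 4, 17)
print(hops, dest, final)
```

Each hop records the (Switch, Input Port, Input Label) tuple before the table rewrites the label, which is exactly the format the question asks for.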
Solution: OK: connection-setup overhead, short-duration interaction. NOT OK: header overhead.
(b) Give one reason that streaming multimedia is run over UDP rather than TCP:
Solution: OK: variable delays introduced by reliable delivery, loss-tolerant applications, drastic congestion-control rate reductions. NOT OK: connection-setup overhead.
D Pick the true choices about congestion collapse and backoff
Otto Pilot creates a new network for the 150 PC computers he mounted within his car. Each computer
sends independent UDP query/response packets to the other computers in the car when it needs to know or
do something. Requests are retried after a timeout that is a fixed, small multiple of the typical response
time. After running the OttoNet for a few days, Otto notices that network congestion occasionally causes a
congestion collapse because too many packets are sent into the network, only to be dropped before reaching
the eventual destination. These packets consume valuable resources.
Suppose each response or request can be fit into one packet. Which of the following techniques is likely
to reduce the likelihood of a congestion collapse? (Circle ALL that apply)
A. Increase the size of the queue in each router from 4 packets to 8 packets. Suppose the timeout value
is appropriately adjusted according to the queue length.
Solution: Yes. There are two possibilities for the timeout value. First, suppose that Otto
used the answer to question 9 to set the timeout. Given a fixed timeout, lengthening queues
would increase, not decrease, the chance of congestion collapse. The longer queues may cause
clients to time out and resend their request packets, even though a response may already be on
its way back. Second, suppose that Otto adjusted the timeout for the longer queues. Doubling
queue lengths certainly doesn't prevent congestion collapse, because congestion collapse can
occur with queues of any length. There is no a priori reason to believe that it is less likely
with 8-packet queues than with 4-packet queues. Increasing the size of the queue to 8 packets
might have a positive effect: some packets that would otherwise have been dropped might
eventually reach their destination. However, it might also have a negative effect: packets that
would otherwise have been dropped remain in the system and may cause congestion elsewhere.
Solution: YES. Exponential backoff reduces the injection rate of packets to a level that the
network can tolerate.
C. If a query is not answered within a timeout interval, multiplicatively reduce the maximum rate at
which the client application sends OttoNet query packets.
Solution: YES. If this question had said “current” rather than “maximum” rate, it would
have been exactly exponential backoff. Reducing the maximum rate eventually produces the
same end result.
D. Use a TCP style flow control window (per session) at each receiver to prevent buffer overruns.
Solution: NO. Flow control windows apply to streams of data. OttoNet requests are not
streams; they are independent packets, each of which may be delivered to a different
server, so a flow control window is not applicable. Moreover, flow control is an end-to-end
mechanism to ensure that a slow receiver's buffers don't get overrun by a fast sender. But
the problem states that the server and client processing are both infinitely fast, so adding flow
control would not accomplish anything.
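The exponential-backoff idea behind options B and C can be made concrete with a short sketch; the base timeout, retry count, and cap below are illustrative values, not anything specified in the problem.

```python
# Sketch of exponential backoff for OttoNet-style query retries.
# Each unanswered query doubles the wait before the next retry, so a
# congested network sees the injection rate fall off geometrically.
# base/attempts/cap are illustrative parameters, not from the problem.

def backoff_delays(base=0.1, attempts=6, cap=5.0):
    """Return the successive timeout values (seconds) for one query's retries."""
    delays = []
    timeout = base
    for _ in range(attempts):
        delays.append(timeout)
        timeout = min(timeout * 2, cap)  # multiplicative increase, capped
    return delays

print(backoff_delays())
```

Capping the timeout (here at 5 s) is a common practical refinement so that a long outage does not push retry intervals to absurd lengths.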
E Congestion Window
Consider the following plot of TCP window size as a function of time:
[Plot: TCP congestion window size (segments) on the y-axis, 0-45, versus transmission round on the x-axis, 0-30.]
Assuming TCP Reno is the protocol experiencing the behavior shown above, answer the following questions.
(a) Identify the intervals of time when TCP slow start is operating.
(b) Identify the intervals of time when TCP congestion avoidance is operating (AIMD).
Solution: 6-23
(c) After the 16th transmission round, is segment loss detected by a triple duplicate ACK or by a timeout?
Solution: dupack
(d) What is the initial value of ssthreshold at the first transmission round?
Solution: 32
Solution: 21
Page 5
(f) What is the value of ssthreshold at the 24th transmission round?
Solution: 13
Solution: 7
(h) Assuming a packet loss is detected after the 26th round by the receipt of a triple duplicate ACK, what
will be the values of the congestion-window size and of ssthreshold?
Solution: 4,4
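The per-round window evolution behind questions (a)-(h) follows a simple rule set, which can be sketched as below. The loss schedule and initial ssthresh in the example are hypothetical, chosen only to show the mechanics, not to reproduce the plot above exactly.

```python
# Sketch of TCP Reno's congestion window per transmission round, under the
# simplified round-based model these plot questions use:
#   - slow start: cwnd doubles each round until it reaches ssthresh
#   - congestion avoidance (AIMD): cwnd grows by 1 segment per round
#   - triple duplicate ACK: ssthresh = cwnd/2, cwnd = ssthresh
#   - timeout: ssthresh = cwnd/2, cwnd = 1
# The loss schedule below is hypothetical, for illustration only.

def reno_rounds(n_rounds, losses, cwnd=1, ssthresh=32):
    """losses maps round number -> 'dupack' or 'timeout'.
    Returns the cwnd in effect during each round."""
    history = []
    for rnd in range(1, n_rounds + 1):
        history.append(cwnd)
        if rnd in losses:
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh if losses[rnd] == "dupack" else 1
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # slow start
        else:
            cwnd += 1                        # congestion avoidance
    return history

h = reno_rounds(16, {12: "dupack"})
print(h)
```

Note how a dupack-detected loss resumes from cwnd = ssthresh (the halved window), while a timeout would restart slow start from cwnd = 1; that distinction is what questions (c) and (h) probe.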
Harry Bovik is given the responsibility of configuring the packet queuing component of a new router. The link
speed of the router is 100 Mbit/s and he expects the average Internet round-trip time of connections through the
router to be 80ms. Harry realizes that he needs to size the buffers appropriately.
You should assume the following:
• You’re dealing with exactly one TCP connection.
• The source is a long-running TCP connection implementing additive-increase (increase window size by 1
packet after an entire window has been transmitted) and multiplicative-decrease (factor-of-two window re-
duction on congestion).
• The advertised window is always much larger than the congestion window.
• The loss recovery is perfect and has no impact on performance.
• The overhead due to headers can be ignored.
Harry argues that because the average RTT is 80ms, the average one-way delay is 40ms. Therefore, the
amount of buffering he needs for high link utilization is 100 Mbit/s * 40 ms or 500 KBytes.
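Harry's arithmetic itself checks out; a quick sketch of the calculation:

```python
# Quick check of Harry's buffer-sizing arithmetic:
# buffer = link speed x one-way delay (his proposed rule).
link_bps = 100e6          # 100 Mbit/s link
one_way_s = 0.040         # half of the 80 ms average RTT
buffer_bits = link_bps * one_way_s
buffer_kbytes = buffer_bits / 8 / 1000
print(buffer_kbytes)      # 500 KBytes, matching Harry's figure
```

The question, addressed in the solution below, is whether this amount of buffering actually lets TCP keep the link full.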
Solution: 10 points.
The buffering is not enough for TCP to fill the link. The window size with this buffering will oscillate
between 0.75 BW*RTT and 1.5 BW*RTT.
[Graph: congestion window sawtooth over time, oscillating between 0.75 BW*RTT and 1.5 BW*RTT.]
The above graph shows the rough behavior of the congestion window over time. The overly simplified
analysis:
During 1/3 of the connection, the window will be too small, using an average of (0.75 + 1)/2 = 87.5% of
the capacity. During the other two-thirds of the time, it will fill the link. The total utilization will be
23/24 of the capacity, or about 95.8 Mbit/sec.
This analysis is overly simple, because it doesn't take into account the fact that the RTT grows
by one packet's worth of queueing delay each RTT when we're sending a larger window than the link can
drain. Exact answer:
In going from a window of 750 packets to a window of 1500 packets, the source sends

    sum_{i=750}^{1500} i

packets. During this same amount of wall-clock time, an optimal scheme could have transmitted

    sum_{i=750}^{1500} i + sum_{i=1}^{249} i

packets, so the achieved fraction of capacity is

    1 / (1 + (249*250/2) / ((1500*1501 - 749*750)/2)) = 1 / (1 + 31125/844875) = 0.9645,

i.e., about 96.45 Mbit/sec.
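The exact-answer arithmetic can be verified directly:

```python
# Verify the exact utilization computed in the solution above.
sent = sum(range(750, 1501))    # packets the TCP source sends as cwnd grows 750 -> 1500
extra = sum(range(1, 250))      # additional packets an ideal sender could have fit in
utilization = sent / (sent + extra)
print(sent, extra, round(utilization, 4))
```

The two sums reproduce the solution's 844875 and 31125, and the ratio comes out to 0.9645, i.e., 96.45 Mbit/sec on the 100 Mbit/s link.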