Ryu Controller
which supports packet processing operations. In respect of support for southbound protocols, Ryu works hand in hand with protocols such as XFlow (NetFlow and sFlow), OF-Config, NETCONF, Open vSwitch Database Management Protocol (OVSDB), etc. Packet types such as VLAN and GRE are also supported by the Ryu packet libraries.

Let us have a look at the Ryu manager and its core processes. The main executable is the Ryu manager. Ryu runs and listens on a particular IP address and port, e.g. 0.0.0.0:6633; applications connect to the Ryu manager by inheriting from the RyuApp class, and the Ryu messaging service also supports components developed in other languages.

Ryu is distributed with multiple applications such as simple_switch, router, isolation, firewall, GRE tunnel, topology, VLAN, etc. Ryu applications are single-threaded entities which implement various functionalities. Ryu applications send asynchronous events to each other. The functional architecture of a Ryu application is shown in Figure 2.

To preserve the order of events, each Ryu application has a receive queue (FIFO) for events. The thread's main loop pops events from the receive queue and dispatches them to the event handler registered for that event type.
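As a concrete illustration of this event-driven structure, the sketch below shows a minimal OpenFlow 1.3 application using only the standard Ryu APIs: it inherits from RyuApp, and the handler registered with the set_ev_cls decorator is called for every packet-in event popped from the application's receive queue. It is a simplified sketch for orientation, not the application used in the experiment.

```python
# Minimal Ryu application sketch (illustrative only; not the app used here).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalSwitch(app_manager.RyuApp):
    # Restrict the app to OpenFlow 1.3, the version used in this experiment.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Invoked by the application's main loop for each PacketIn event
        # taken from the receive queue; here every packet is simply flooded.
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(
            datapath=datapath,
            buffer_id=msg.buffer_id,
            in_port=msg.match['in_port'],
            actions=[parser.OFPActionOutput(ofproto.OFPP_FLOOD)],
            data=data)
        datapath.send_msg(out)
```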
To evaluate statistics related to the performance of the controller, a mesh topology is implemented over 6 switches, with six different scenarios that differ only in the number of nodes connected to each peripheral switch. As an effort to implement and test the controller's performance with respect to scalability, we created a custom topology with six different scenarios differing in the number of nodes, as shown in Table 1.

Table 1: Scenario Table for Experiment

Scenario      Number of switches    Number of nodes
Scenario 1    6                     50
Scenario 2    6                     100
Scenario 3    6                     150
Scenario 4    6                     200
Scenario 5    6                     250
Scenario 6    6                     300

Fig. 2. Python script for generating scenarios
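A Mininet custom topology of the kind shown in Fig. 2 could be sketched as below. This is a hedged reconstruction rather than the authors' script: apart from the closing topos = {'mytopo': (lambda: MyTopo())} registration and the switch-to-switch addLink calls visible in Fig. 2 (e.g. self.addLink(s4Switch, s5Switch)), everything here, including the host naming and the hostsPerSwitch parameter, is an assumption for illustration.

```python
# Hedged reconstruction of a Mininet custom mesh topology (mymesh.py style).
# Only the topos registration and switch-to-switch addLink calls appear in
# Fig. 2; the class structure, host naming and hostsPerSwitch are assumptions.
from mininet.topo import Topo


class MyTopo(Topo):
    def build(self, hostsPerSwitch=10):
        # Six switches connected in a full mesh (this creates loops,
        # which is why the spanning-tree Ryu application is needed).
        switches = [self.addSwitch('s%d' % i) for i in range(1, 7)]
        for i in range(len(switches)):
            for j in range(i + 1, len(switches)):
                self.addLink(switches[i], switches[j])

        # Attach hosts to each switch; names such as h1_1 .. h6_10 mirror
        # the host naming used later in Step 3 (e.g. xterm h1_1 h6_60).
        for i, sw in enumerate(switches, start=1):
            for n in range(1, hostsPerSwitch + 1):
                self.addLink(self.addHost('h%d_%d' % (i, n)), sw)


# Registration expected by "sudo mn --custom mymesh.py --topo=mytopo".
topos = {'mytopo': (lambda: MyTopo())}
```

Changing hostsPerSwitch (or editing the script per scenario) would then produce the node counts listed in Table 1.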
Table 2: Configuration Specification for Experiments

OS          Ubuntu 16.04.3 LTS
Mininet     2.2.1
OpenFlow    1.3 (0x1:0x4)
iPerf       3.0.7
CPU         Intel Core i5 520M
RAM         6 GB DDR3
Now the step-by-step procedure followed to perform the experiment on Ryu using Mininet is described. The tool used for obtaining statistics is iPerf.
Step 1: The first step is to run the Ryu controller using the script. Here, the name of the application program is simple_switch_stp_13.py. Simple_switch_stp_13 is an application program that builds a spanning-tree scenario: because we are using a mesh topology, and a mesh topology has loops, a spanning tree is needed to avoid those loops. For this, the stplib.py library is used, which performs the Bridge Protocol Data Unit (BPDU) packet exchange. Before executing this command, the working directory must be the ryu folder. Now, we provide the command:

./bin/ryu-manager ryu/app/simple_switch_stp_13.py
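For orientation, the outline below sketches how an STP-aware application of this kind is typically structured in Ryu: stplib.Stp is pulled in as an application context, and packet-in and topology-change notifications are received as stplib events. This is a simplified sketch based on the stplib API, not a copy of simple_switch_stp_13.py.

```python
# Sketch of the structure of an STP-aware Ryu app (simplified, illustrative only).
from ryu.base import app_manager
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib import stplib
from ryu.ofproto import ofproto_v1_3


class StpSwitchSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    # stplib runs as a context and takes over BPDU handling for the app.
    _CONTEXTS = {'stplib': stplib.Stp}

    def __init__(self, *args, **kwargs):
        super(StpSwitchSketch, self).__init__(*args, **kwargs)
        self.stp = kwargs['stplib']

    @set_ev_cls(stplib.EventPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Non-BPDU packets arrive here only on ports left open by STP,
        # so forwarding decisions never create a loop in the mesh.
        self.logger.info("packet in on dpid=%s", ev.msg.datapath.id)

    @set_ev_cls(stplib.EventTopologyChange, MAIN_DISPATCHER)
    def topology_change_handler(self, ev):
        # When the tree is recalculated, learned state would normally be flushed.
        self.logger.info("topology change on dpid=%s", ev.dp.id)
```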
Step 2: The next step is to run the Mininet mesh topology script, providing the topology name and the OVSK switch type, with the following command: sudo mn --custom ~/mininet/examples/mymesh.py --topo=mytopo --mac --controller remote --switch ovsk. Once the command is executed, check the connectivity between all the hosts using the Mininet command: pingall.
Step 3: Now we define one client and one server, using any two hosts of the developed network. The command to perform this task is: xterm h1_1 h6_60. We have used the first and last hosts with the xterm command. This opens two terminal windows, one as the client and another as the server. Check the configuration details in both windows with the command: ifconfig.
Step 4: Now we need to generate traffic between the client and the server and log the events using the iPerf tool. First we go to the server window and enter the command: iperf -s -p 6633 -i 1 > result. Here, 'result' is the filename provided to store the results. Once started, the server waits for the client. Now, at the client side, to generate traffic we need to provide the IP address of the server together with the port number, using the following command: iperf -c 10.0.0.50 -p 6633 -t 100. Here, 100 represents the time in seconds.
Step 5: The next step is filtering the logged file to obtain experiment-specific results. We can check the content of the generated file using the command: more result. For the filtering we have used grep and awk: cat result | grep sec | head -100 | tr - " " | awk '{print $3,$5}' > myresults. Here, 'myresults' is the name of the file where the filtered results are stored, and its content can be checked using the 'more' command: more myresults.
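For readers who prefer to stay in Python, the small script below does roughly the same filtering as the grep/awk pipeline. It is only a sketch: the regular expression assumes typical iPerf interval lines (interval ending in "sec" followed by a bandwidth in bits/sec) and may need adjusting to the exact output produced by the iPerf version listed in Table 2.

```python
# Rough Python equivalent of the grep/awk filtering step (illustrative sketch).
import re

# Matches the end of each interval and the reported bandwidth figure,
# e.g. "[  3]  0.0- 1.0 sec   112 MBytes   943 Mbits/sec".
pattern = re.compile(r'-\s*([\d.]+)\s*sec.*?([\d.]+)\s*[KMG]?bits/sec')

with open('result') as src, open('myresults', 'w') as dst:
    for line in src:
        match = pattern.search(line)
        if match:
            # Column 1: interval end time, column 2: bandwidth value.
            dst.write('%s %s\n' % (match.group(1), match.group(2)))
```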
Step 6: The final step is to plot graphs of the obtained results, for which Gnuplot is used in this experiment. To start the gnuplot tool, the command is: gnuplot. Next, plot the content of the 'myresults' file using the command: plot "myresults" title "Tcp_Flow" with linespoints.
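Equivalently, the two-column 'myresults' file can be plotted without Gnuplot. The short matplotlib sketch below is one possible way, assuming the file holds time/bandwidth pairs as produced in Step 5.

```python
# Optional alternative to Gnuplot: plot the filtered results with matplotlib.
import matplotlib.pyplot as plt

times, bandwidths = [], []
with open('myresults') as f:
    for line in f:
        t, bw = line.split()
        times.append(float(t))
        bandwidths.append(float(bw))

plt.plot(times, bandwidths, marker='o', label='Tcp_Flow')  # linespoints style
plt.xlabel('Time (sec)')
plt.ylabel('Throughput')
plt.legend()
plt.show()
```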
In the same way, using the Python script, all other scenarios are developed as stated in Table 1, with the configuration provided in Table 2. One by one, each scenario is tested using Steps 1 to 6, and the results obtained are discussed in the upcoming section on performance analysis. Kindly note that the simulation runs of this experiment need at least 6 GB of RAM, especially for the simulations having more than 200 nodes.

IV. PERFORMANCE ANALYSIS

This section provides the results obtained during the experimentation. With this paper, the authors have made an attempt to address the scalability features of the Ryu controller by implementing six scenarios in a simulated experimental environment, which are discussed in detail in this section. This section has a total of six graphs, which represent the results for each scenario listed in Table 1. To evaluate the performance of the Ryu controller in an incremental experiment with respect to scalability, throughput is the best matching parameter and suffices for the aim of the experiment. Thus, in this section, we have limited the study to throughput only.

The graph of Fig. 3 shows the results obtained by performing a transmission between client and server with the number of nodes supported by the network limited to 50. It is observed from the graph that the average throughput stays at 1.65 Gbps. The graph of Fig. 3 also shows that the variations are very high within the 100 sec duration of the simulation.

Fig. 3. Throughput for the scenario with 50 nodes.

Similarly, stability is a big concern if we look at Fig. 4, which shows the throughput graph for the scenario with 100 nodes. It is even worse as far as stability is concerned in comparison with Fig. 3. Again, if observed closely, Fig. 5 is a bit more stable even in the presence of 150 nodes; however, there are a few instances of highly volatile behavior of the network in Fig. 5. The graph of Fig. 6 again shows excessive variations in the throughput when the number of nodes reaches 200.

Fig. 6. Throughput for the scenario with 200 nodes.

Once again, if we observe the graph shown in Fig. 7, having 250 nodes, it seems stable in comparison with the graphs of Fig. 4 and Fig. 6 with 100 and 200 nodes; still, a few instances do not prove it better than Fig. 3 and Fig. 5 with 50 and 150 nodes. Once the final result of the 300-node scenario was obtained, it was observed that the throughput was increasing, with tolerance of a few instances of dropping. For a few seconds the simulation ran well but, on approaching the middle of the simulation time, dropping instances were gradually observed more frequently, leading to degraded performance in the second half of the simulation run.