Learning SDN Internals | Researching SDN principles and mechanics

Learning POX : Testing OpenFlow switch


The application name is pcap_switch.py.

Overview

POX includes a simple OpenFlow dataplane (software switch) implementation, which can be tested with the OFTest framework. Linux virtual Ethernet interface pairs (veth) are used to connect the data ports of the POX switch implementation to OFTest, and the default controller port 6633 is used for the OpenFlow connection.

Example topology

Running OFTest

  1. sudo ip link add type veth (repeat to create four pairs, veth0/veth1 through veth6/veth7)
  2. sudo ifconfig veth0 up
     ...
     sudo ifconfig veth7 up (bring up every interface from veth0 to veth7)
  3. sudo ./pox.py --no-openflow datapaths.pcap_switch --address=127.0.0.1 --ports=veth0,veth2,veth4,veth6
  4. sudo ./oft -i 1@veth1 -i 2@veth3 -i 3@veth5 -i 4@veth7 -H 127.0.0.1 --log-file="results.log"

Test results

Suite name                   Results
basic                        OK
openflow_protocol_messages   2 out of 10 failed
actions                      4 out of 19 failed
flow_matches                 2 out of 23 failed
flow_expire                  OK
port_stats                   OK
smoke                        3 out of 14 failed
latency                      1 out of 2 failed
message_types                2 out of 15 failed
counters                     OK
default_drop                 1 out of 1 failed
pktact                       16 out of 60 failed
nicira_dec_ttl               2 out of 2 failed
nicira_role                  3 out of 3 failed
load                         OK
bsn_ipmask                   1 out of 1 failed
bsn_mirror                   1 out of 1 failed
bsn_shell                    1 out of 1 failed


Learning POX : Network virtualization


The application name is aggregator.py.

Overview

Network virtualization is a paradigm in which the details of the physical network topology are hidden from the user behind an intermediate entity. This entity handles events from the physical switches and translates the commands received from the user's application.

POX has frameworks for both sides of OpenFlow: the controller side, which is used to control switches, and the switch side, which takes commands from controllers. The aggregator acts as a switch, allowing other controllers to connect to it and send it OpenFlow commands; underneath, it implements this by controlling other OpenFlow switches.

It aggregates the ports and flow tables of all the underlying switches and presents them to its own controller as if they were all part of one big switch. When the controller sends a flow table entry, the aggregator translates it for the underlying switches. When the controller requests flow statistics, the aggregator collects them from all the switches and responds with a properly combined statistics reply message.
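
One plausible way to combine the per-switch statistics is sketched below. This is a hypothetical helper, not the aggregator's actual code; it assumes the ofp_flow_stats field names packet_count, byte_count and cookie, plus a cookie translation map like the one described in the "Using cookies" section later in this post.

def combine_flow_stats (per_switch_stats, switch_to_agg_cookie):
  # per_switch_stats: one list of flow-stats entries per underlying switch.
  # Entries that translate to the same controller-visible cookie are merged
  # and their counters summed, producing the body of one combined reply.
  combined = {}
  for stats in per_switch_stats:
    for entry in stats:
      cookie = switch_to_agg_cookie.get(entry.cookie, entry.cookie)
      if cookie not in combined:
        entry.cookie = cookie
        combined[cookie] = entry
      else:
        combined[cookie].packet_count += entry.packet_count
        combined[cookie].byte_count += entry.byte_count
  return list(combined.values())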

Implementation details

Underlying switches are interconnected using flow-based GRE tunnels supported by the Open vSwitch NXM extension.

Example topology

All flows installed by the controller are translated and installed on every switch hidden underneath the aggregator. An example of redirect action translation is shown below.

Redirect action translation


During the initialization stage, three flow tables are created and prepopulated with service rules on each switch controlled by the aggregator. They are:

  • Table 0 is responsible for multiplexing packets coming from local or tunnel ports to the following two tables
  • Table 1, named "Remote", is responsible for properly redirecting traffic coming from a tunnel port
  • Table 2, named "Openflow", is the table where translated flow entries are installed

The following table shows the service rules that are pre-installed in each table on each switch (a sketch of installing such rules follows the table).

Preinstalled flows

Table     Match                        Action
Table 0   Port = 3 (tunnel port)       Resubmit to Table 1
Table 0   Any                          Resubmit to Table 2
Table 1   Tunnel id = 0                Redirect to controller
Table 1   Tunnel id = 1                Redirect to port 1
Table 1   Tunnel id = 2                Redirect to port 2
Table 1   Tunnel id = 0x7e (MALL)      Redirect to all
Table 1   Tunnel id = 0x7f (MFLOOD)    Flood
Table 2   Any                          Redirect to controller
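
Below is a minimal, illustrative sketch of how Table 0's service rules could be installed with the nx_flow_mod/resubmit API that appears later in this series. It is not the aggregator's actual code; the of_in_port match field, the priority values and the tunnel port number are assumptions.

import pox.openflow.nicira as nx

TUNNEL_PORT = 3  # assumed tunnel port number from the example topology

def install_table0_rules (connection):
  # Packets arriving from the tunnel port are resubmitted to Table 1 ("Remote")
  msg = nx.nx_flow_mod()
  msg.table_id = 0
  msg.priority = 100
  msg.match.of_in_port = TUNNEL_PORT
  msg.actions.append(nx.nx_action_resubmit.resubmit_table(table = 1))
  connection.send(msg)

  # Everything else is resubmitted to Table 2 ("Openflow")
  msg = nx.nx_flow_mod()
  msg.table_id = 0
  msg.priority = 1
  msg.actions.append(nx.nx_action_resubmit.resubmit_table(table = 2))
  connection.send(msg)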

Ports

The aggregator hides all underlying switches and is seen as one big switch with many ports, so it has to translate switch port numbers. A simple formula is used:

Aggregator port number = switch port number + MAX_PORT_NUMBER * (switch dpid - 1)

where MAX_PORT_NUMBER is a constant defining the maximum number of ports a single switch can own; in our module it is equal to 16. A few example mappings are shown below, followed by a small sketch of the translation.

Local port   Switch dpid   Aggregator port
1            1             1
2            1             2
1            2             17
2            2             18
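
A minimal sketch of this translation, using hypothetical helper names rather than the aggregator's actual code:

MAX_PORT_NUMBER = 16  # maximum number of ports a single switch can own

def to_aggregator_port (switch_port, dpid):
  # Aggregator port = switch port + MAX_PORT_NUMBER * (dpid - 1)
  return switch_port + MAX_PORT_NUMBER * (dpid - 1)

def to_switch_port (agg_port):
  # Inverse mapping: recover (switch port, dpid) from an aggregator port
  dpid = (agg_port - 1) // MAX_PORT_NUMBER + 1
  return agg_port - MAX_PORT_NUMBER * (dpid - 1), dpid

assert to_aggregator_port(1, 2) == 17   # matches the table above
assert to_switch_port(18) == (2, 2)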

Using cookies

The cookie field is used to map flow table entries between the aggregator and the underlying switches.
For this purpose, a cookie is generated and assigned to the flow table entry installed on the aggregator and on the controlled switches. Two mapping dictionaries are maintained to store the correspondence between cookies.

Nicira Extended Match (NXM) support in POX makes it possible to use the cookie as a key when modifying and deleting flow entries on the switches.
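
A small sketch of this bookkeeping (hypothetical names and structure, not the module's actual code):

import itertools

_cookie_counter = itertools.count(1)

# controller-visible cookie -> cookie installed on the underlying switches
agg_to_switch_cookie = {}
# the reverse direction, used when statistics come back from the switches
switch_to_agg_cookie = {}

def register_flow (controller_cookie):
  switch_cookie = next(_cookie_counter)
  agg_to_switch_cookie[controller_cookie] = switch_cookie
  switch_to_agg_cookie[switch_cookie] = controller_cookie
  return switch_cookie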

Running the aggregator

  1. Start Mininet using "sudo python agg_net.py"
  2. Add GRE tunnels to the switches using "sudo ./aggregator.sh 2"
  3. Start the main POX controller: "./pox.py log.level --DEBUG openflow.of_01 --port=7744 forwarding.l2_pairs"
  4. Start the aggregator: "./pox.py log.level --DEBUG edge.aggregator --ips=172.16.0.1,172.16.0.2"

Note that OVS 2.2.90 and Ubuntu 13.10 were used for experimentation.
Mininet script (agg_net.py)

#!/usr/bin/python
  
from mininet.net import Mininet
from mininet.node import Controller, RemoteController, Node
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.link import Link, Intf
  
def aggNet():
  
    NODE1_IP='172.16.0.1'
    NODE2_IP='172.16.0.2'
    CONTROLLER_IP='127.0.0.1'
  
    net = Mininet( topo=None,
                   build=False)
  
    net.addController( 'c0',
                      controller=RemoteController,
                      ip=CONTROLLER_IP,
                      port=6633)
  
    h1 = net.addHost( 'h1', ip='10.0.0.1' )
    h2 = net.addHost( 'h2', ip='10.0.0.2' )
    h3 = net.addHost( 'h3', ip='10.0.0.3' )
    h4 = net.addHost( 'h4', ip='10.0.0.4' )
    s1 = net.addSwitch( 's1' )
    s2 = net.addSwitch( 's2' )
  
    net.addLink( h1, s1 )
    net.addLink( h2, s1 )
    net.addLink( h3, s2 )
    net.addLink( h4, s2 )
  
    net.start()
    CLI( net )
    net.stop()
  
if __name__ == '__main__':
    setLogLevel( 'info' )
    aggNet()

Tunnel setup script (aggregator.sh)

#!/bin/bash
num=$1
echo Adding tunnels for $num switches
rmmod dummy
modprobe dummy numdummies=$((num+1))
  
for x in $(seq 1 $1); do
  ifconfig dummy$x 172.16.0.$x
  ovs-vsctl del-port s$x tun$x 2> /dev/null
  ovs-vsctl add-port s$x tun$x -- set Interface tun$x type=gre \
    options:remote_ip=flow options:local_ip=172.16.0.$x options:key=flow
done


Learning POX OpenFlow controller : Proactive approach


The application name is topo_proactive.py.
This module works in a proactive mode. It depends on the discovery module and can work with the spanning_tree module in topologies with loops.

The idea behind this module is to assign topology-aware IP addresses to all switches and hosts. Knowing all IP addresses in advance, routing rules can be installed proactively.
Routing is based on shortest-path calculation. Hosts and switches obtain their IP addresses from the controller via DHCP.
The rules for creating IP addresses are the following (a short sketch of the scheme is shown after the list):

  • Switch address has a format of 10.<switch_id>.0.1
  • Network address has a format of 10.<switch_id>.0.0
  • Host address has a format of 10.<switch_id>.<switch_port_id>.<host_id>

Where

  • switch_id is simply an ordering number generated by the controller
  • switch_port_id is the number of the switch port the host is connected to
  • host_id is the ordering number of the host
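
A short sketch of the addressing scheme (hypothetical helpers; topo_proactive.py builds these addresses internally):

from pox.lib.addresses import IPAddr

def switch_ip (switch_id):
  return IPAddr("10.%d.0.1" % switch_id)

def switch_network (switch_id):
  # Each switch owns a /16, e.g. 10.1.0.0/16 (see the nw_dst=10.x.0.0/16 rules below)
  return "10.%d.0.0/16" % switch_id

def host_ip (switch_id, switch_port_id, host_id):
  return IPAddr("10.%d.%d.%d" % (switch_id, switch_port_id, host_id))

# For example, the first host on port 1 of switch 1 gets 10.1.1.1,
# which matches the nw_dst=10.1.1.1 rule in the OVS dump below.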

Example topology

The proactive nature of the module is encapsulated in the send_table method of the TopoSwitch class. The logic inside this method installs the following rules on a switch:

  • Redirect all DHCP, LLDP, ARP, and NDP packets to the controller
  • Redirect packets whose destination IP address belongs to a network owned by another known switch to the next-hop port calculated by the shortest-path algorithm
  • Redirect packets whose destination IP belongs to the network owned by the current switch to the particular port where the host with this IP was previously seen

Rules in OVS, retrieved using the sudo ovs-ofctl dump-flows s1 command, will look the following way.

cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, ip,nw_dst=10.3.0.0/16 actions=output:1
 cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, ip,nw_dst=10.2.0.0/16 actions=output:1
 cookie=0x0, duration=150.350s, table=0, n_packets=5, n_bytes=1711, idle_age=90, udp,tp_src=68,tp_dst=67 actions=CONTROLLER:65535
 cookie=0x0, duration=90.025s, table=0, n_packets=0, n_bytes=0, idle_age=90, ip,nw_dst=10.1.3.1 actions=mod_dl_src:00:00:00:00:00:01,mod_dl_dst:f2:5e:b1:0d:1f:3e,output:3
 cookie=0x0, duration=93.717s, table=0, n_packets=0, n_bytes=0, idle_age=93, ip,nw_dst=10.1.2.1 actions=mod_dl_src:00:00:00:00:00:01,mod_dl_dst:ea:2b:ce:81:3d:ca,output:2
 cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, priority=32767,ip,nw_dst=255.255.255.255 actions=output:2,output:3
 cookie=0x0, duration=150.053s, table=0, n_packets=0, n_bytes=0, idle_age=150, ip,nw_dst=10.1.1.1 actions=mod_dl_src:00:00:00:00:00:01,mod_dl_dst:6a:c2:7e:06:13:98,output:1
 cookie=0x0, duration=150.350s, table=0, n_packets=29, n_bytes=1189, idle_age=0, priority=65000,dl_dst=01:23:20:00:00:01,dl_type=0x88cc actions=CONTROLLER:65535
 cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, priority=32767,ip,nw_dst=10.1.3.0/24 actions=CONTROLLER:65535
 cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, priority=32767,ip,nw_dst=10.1.1.0/24 actions=CONTROLLER:65535
 cookie=0x0, duration=150.350s, table=0, n_packets=0, n_bytes=0, idle_age=150, priority=32767,ip,nw_dst=10.1.2.0/24 actions=CONTROLLER:65535

Start POX using the following command line:
./pox.py log.level --DEBUG openflow.of_01 forwarding.topo_proactive openflow.discovery

Mininet script (proactive_net.py)

#!/usr/bin/python
  
from mininet.net import Mininet
from mininet.node import Controller, RemoteController, Node
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.link import Link, Intf
  
def aggNet():
  
    CONTROLLER_IP='127.0.0.1'
  
    net = Mininet( topo=None,
                build=False)
  
    net.addController( 'c0',
                    controller=RemoteController,
                    ip=CONTROLLER_IP,
                    port=6633)
  
    h1 = net.addHost( 'h1', ip='0.0.0.0' )
    h2 = net.addHost( 'h2', ip='0.0.0.0' )
    h3 = net.addHost( 'h3', ip='0.0.0.0' )
    h4 = net.addHost( 'h4', ip='0.0.0.0' )
    s1 = net.addSwitch( 's1' )
    s2 = net.addSwitch( 's2' )
    s3 = net.addSwitch( 's3' )
  
    net.addLink( s1, s2 )
    net.addLink( s2, s3 )
  
    net.addLink( h1, s1 )
    net.addLink( h2, s1 )
    net.addLink( h3, s3 )
    net.addLink( h4, s3 )
  
    net.start()
    CLI( net )
    net.stop()
  
if __name__ == '__main__':
    setLogLevel( 'info' )
    aggNet()

After executing "sudo python proactive_net.py", execute the following commands inside the Mininet shell.

h1 dhclient h1-eth0
h2 dhclient h2-eth0
h3 dhclient h3-eth0
h4 dhclient h4-eth0


Learning POX OpenFlow controller : L2 Switch using Multiple Tables


The application name is l2_nx.py.
This module works in a reactive mode.

The idea behind this module is to introduce the feature of multiple OpenFlow tables. It is enabled by a Nicira extension that allows POX to benefit from OVS support for OpenFlow 1.2-style features.
Using multiple tables for packet processing leads to much better utilization of the memory used to store rules in hardware.
This feature relies on the resubmit action, which is used to specify the next table to process a packet.

The module implementation is rather small and straightforward. It uses two tables to learn MAC addresses.

The first thing the controller does after a connection with a new switch is established is to ask OVS to enable features such as multiple tables and the extended PacketIn format.

# Turn on Nicira packet_ins
msg = nx.nx_packet_in_format()
event.connection.send(msg)
# Turn on ability to specify table in flow_mods
msg = nx.nx_flow_mod_table_id()
event.connection.send(msg)

After that, a rule to send all packets to the controller is installed in the first table.

msg = nx.nx_flow_mod()
msg.priority = 1 # Low priority
msg.actions.append(of.ofp_action_output(port = of.OFPP_CONTROLLER))
msg.actions.append(nx.nx_action_resubmit.resubmit_table(table = 1))
event.connection.send(msg)

And a rule to flood all packets is installed in the second table.

msg = nx.nx_flow_mod()
msg.table_id = 1
msg.priority = 1 # Low priority
msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
event.connection.send(msg)

As soon as the controller is notified (via a Barrier reply) that the switch has finished the described setup, a handler for PacketIn events in the Nicira format is activated.

event.connection.addListenerByName("PacketIn", _handle_PacketIn)

The first table is used to store source addresses. Its purpose is to let the data path distinguish new MACs from those learned earlier. In the first case the packet is forwarded to the controller; in the second case it is resubmitted to the other table.

msg = nx.nx_flow_mod()
msg.match.of_eth_src = packet.src
msg.actions.append(nx.nx_action_resubmit.resubmit_table(table = 1))
event.connection.send(msg)

Once the controller is notified of a new source MAC, it installs a rule in the second table to send packets destined to this MAC to the proper port.

msg = nx.nx_flow_mod()
msg.table_id = 1
msg.match.of_eth_dst = packet.src
msg.actions.append(of.ofp_action_output(port = event.port))
event.connection.send(msg)

Compared to the l2_pairs implementation of an L2 learning switch, this module optimizes the usage of hardware resources.

It is not difficult to calculate the number of rules that would be installed for a topology with one switch and several hosts attached. The formula here is two times N, where N is the number of hosts. Thus, for a topology of one switch and four hosts it results in 2 * 4 = 8 learned rules (plus the two low-priority default rules installed during setup, visible in the dump below).

Rules in OVS, retrieved using the sudo ovs-ofctl dump-flows s1 command just after the pingall command has been executed inside Mininet, look the following way.

cookie=0x0, duration=18.185s, table=0, n_packets=16, n_bytes=1568, idle_age=11, priority=1 actions=CONTROLLER:65535,resubmit(,1)
cookie=0x0, duration=10.916s, table=0, n_packets=6, n_bytes=252, idle_age=6, dl_src=fa:20:3e:ae:e7:23 actions=resubmit(,1)
cookie=0x0, duration=10.969s, table=0, n_packets=6, n_bytes=252, idle_age=6, dl_src=56:7f:57:70:73:fb actions=resubmit(,1)
cookie=0x0, duration=10.923s, table=0, n_packets=6, n_bytes=252, idle_age=6, dl_src=5a:0c:d5:0a:cf:dd actions=resubmit(,1)
cookie=0x0, duration=10.945s, table=0, n_packets=6, n_bytes=252, idle_age=6, dl_src=02:a7:f3:f9:24:b5 actions=resubmit(,1)
cookie=0x0, duration=18.185s, table=1, n_packets=19, n_bytes=1862, idle_age=10, priority=1 actions=FLOOD
cookie=0x0, duration=10.921s, table=1, n_packets=6, n_bytes=252, idle_age=6, dl_dst=5a:0c:d5:0a:cf:dd actions=output:3
cookie=0x0, duration=10.969s, table=1, n_packets=6, n_bytes=252, idle_age=6, dl_dst=56:7f:57:70:73:fb actions=output:1
cookie=0x0, duration=10.944s, table=1, n_packets=6, n_bytes=252, idle_age=6, dl_dst=02:a7:f3:f9:24:b5 actions=output:2
cookie=0x0, duration=10.913s, table=1, n_packets=6, n_bytes=252, idle_age=6, dl_dst=fa:20:3e:ae:e7:23 actions=output:4

New things that appeared in this module

  • call_when_ready - allows waiting until another module is loaded and running
  • nx_packet_in_format - allows OVS to send packets to the controller in an extended format
  • nx_flow_mod_table_id - notifies OVS that multiple tables will be programmed by the controller
  • nx_action_resubmit - the resubmit action needed to specify the next table
  • nx_flow_mod - an extended flow_mod with table identifier support


Learning POX OpenFlow controller : Global Forwarding Database


The application name is l2_multi.py.
This module works in a reactive mode. It depends on the discovery module and can work with the spanning_tree module in topologies with loops.

The idea behind this module is to maintain a forwarding database for a whole topology, i.e. several connected switches. A shortest path is calculated between every pair of switches, and OpenFlow rules matching each observed flow are installed on the switches along that path.

Rules in OVS, retrieved using the sudo ovs-ofctl dump-flows s1 command, will look the following way.

cookie=0x0, duration=6.681s, table=0, n_packets=0, n_bytes=0, idle_timeout=10, hard_timeout=30, idle_age=6, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=1e:58:08:9b:8d:8f,dl_dst=2a:12:2e:f9:fe:63,arp_spa=10.0.0.1,arp_tpa=10.0.0.2,arp_op=2 actions=output:2
cookie=0x0, duration=6.771s, table=0, n_packets=1, n_bytes=42, idle_timeout=10, hard_timeout=30, idle_age=6, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=1e:58:08:9b:8d:8f,dl_dst=2a:12:2e:f9:fe:63,arp_spa=10.0.0.1,arp_tpa=10.0.0.2,arp_op=1 actions=output:2
cookie=0x0, duration=6.721s, table=0, n_packets=1, n_bytes=42, idle_timeout=10, hard_timeout=30, idle_age=6, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=2a:12:2e:f9:fe:63,dl_dst=1e:58:08:9b:8d:8f,arp_spa=10.0.0.2,arp_tpa=10.0.0.1,arp_op=2 actions=output:1
cookie=0x0, duration=6.768s, table=0, n_packets=0, n_bytes=0, idle_timeout=10, hard_timeout=30, idle_age=6, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=2a:12:2e:f9:fe:63,dl_dst=1e:58:08:9b:8d:8f,arp_spa=10.0.0.2,arp_tpa=10.0.0.1,arp_op=1 actions=output:1

There are two flows (ARP requests and ARP replies), which result in four rules covering both directions.

Several classes are created to encapsulate the logic required for this module:

  • l2_multi - the main entity, responsible for handling events from the openflow library (ConnectionUp, BarrierIn) and from the discovery module (LinkEvent)
  • Switch - represents a physical switch and contains the logic for handling PacketIn events and installing forwarding rules
  • WaitingPath - a cache of packets that are waiting for their path to be installed
  • PathInstalled - an event fired once the rules are installed on all switches along the shortest path, enabling traffic with a particular destination MAC to traverse the topology according to the previously calculated path

Once a packet is received by the controller, its "location", namely the switch and originating port, is saved to a map named mac_map, with the source MAC address used as the key.

If the aforementioned map already contains a "location" for the destination MAC, the shortest path is installed, meaning all switches between the originating switch and the destination switch are programmed with rules to forward packets with the same headers along one shortest path.

dest = mac_map[packet.dst]
match = of.ofp_match.from_packet(packet)
self.install_path(dest[0], dest[1], match, event)

The shortest path is returned by the _get_path method, which uses the Floyd-Warshall algorithm to calculate a "raw" path, namely a list of nodes.
This information is then augmented with output ports (a generic sketch of the algorithm is shown below).
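
The sketch below is a generic Floyd-Warshall implementation with path reconstruction; it is illustrative only, since _get_path in l2_multi keeps its own distance and intermediate-node tables.

def floyd_warshall (nodes, adjacency):
  # adjacency[(a, b)] is the cost of a direct link between nodes a and b
  INF = float('inf')
  dist = dict(((a, b), 0 if a == b else adjacency.get((a, b), INF))
              for a in nodes for b in nodes)
  nxt = dict(((a, b), b if (a, b) in adjacency else None)
             for a in nodes for b in nodes)
  for k in nodes:
    for i in nodes:
      for j in nodes:
        if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
          dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
          nxt[(i, j)] = nxt[(i, k)]
  return dist, nxt

def raw_path (nxt, src, dst):
  # Expand the "next hop" table into the list of nodes from src to dst
  if src != dst and nxt[(src, dst)] is None:
    return None
  path = [src]
  while src != dst:
    src = nxt[(src, dst)]
    path.append(src)
  return path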

As soon as the shortest path is calculated, it is converted to the appropriate rules, which are installed on all switches that belong to it.

def _install_path (self, p, match, packet_in=None):
  wp = WaitingPath(p, packet_in)
  for sw,in_port,out_port in p:
    self._install(sw, in_port, out_port, match)
    msg = of.ofp_barrier_request()
    sw.connection.send(msg)
    wp.add_xid(sw.dpid,msg.xid)

First of all, a WaitingPath is created that caches our packet to be sent out after the Barrier replies are received. Then rules for each switch belonging to the shortest path are installed, each accompanied by a Barrier request message. All rules are basically forwarding rules that share a common match but have different output ports.

def _install (self, switch, in_port, out_port, match, buf = None):
  msg = of.ofp_flow_mod()
  msg.match = match
  msg.match.in_port = in_port
  msg.idle_timeout = FLOW_IDLE_TIMEOUT
  msg.hard_timeout = FLOW_HARD_TIMEOUT
  msg.actions.append(of.ofp_action_output(port = out_port))
  msg.buffer_id = buf
  switch.connection.send(msg)

As soon as the Barrier replies are received, the WaitingPath is notified so it can output the previously cached packet and raise a PathInstalled event. The WaitingPath uses a set of Barrier request identifiers, called XIDs, to wait until all switches have replied with a Barrier reply. Each time a Barrier reply from one of the switches along the path is received, its XID is removed from the set. An empty set, named xids, thus indicates that the path is installed on all switches and it is safe to send the cached packet.

def notify (self, event):
  self.xids.discard((event.dpid,event.xid))
  if len(self.xids) == 0:
    # Done!
    if self.packet:
      log.debug("Sending delayed packet out %s"
                % (dpid_to_str(self.first_switch),))
      msg = of.ofp_packet_out(data=self.packet,
          action=of.ofp_action_output(port=of.OFPP_TABLE))
      core.openflow.sendToDPID(self.first_switch, msg)
    core.l2_multi.raiseEvent(PathInstalled(self.path))

The PathInstalled event can be used by other modules subscribed to events from l2_multi.

New things that appeared in this module

  • Barrier
  • Floyd-Warshall algorithm
  • XID
  • core.l2_multi.raiseEvent


Learning POX OpenFlow controller : Imitating L3


The application name is l3_learning.py.
This module works in a reactive mode.

The module does several things:
1. Learns the correspondence between IP and MAC addresses.
2. Uses this information to install rules that replace the destination MAC while forwarding a packet to the correct port.
3. Generates ARP requests when a destination IP is unknown.
4. Replies to ARP requests.

1. The PacketIn handler updates the ARP table of the switch with a particular DPID.

self.arpTable[dpid][packet.next.srcip] = Entry(inport, packet.src)

The field packet.next.srcip is the packet's source IP address, while packet.src is its source MAC address.

2. If the destination IP address maps to a known pair of port and MAC address, the packet is sent out, and a routing rule is installed on the switch. The essence of the routing rule installed in the same handler is presented below.

if dstaddr in self.arpTable[dpid]:
  prt = self.arpTable[dpid][dstaddr].port
  mac = self.arpTable[dpid][dstaddr].mac
  
  actions = []
  actions.append(of.ofp_action_dl_addr.set_dst(mac))
  actions.append(of.ofp_action_output(port = prt))
  match = of.ofp_match.from_packet(packet, inport)
  match.dl_src = None # Wildcard source MAC
  msg = of.ofp_flow_mod(command=of.OFPFC_ADD,
                        idle_timeout=FLOW_IDLE_TIMEOUT,
                        hard_timeout=of.OFP_FLOW_PERMANENT,
                        buffer_id=event.ofp.buffer_id,
                        actions=actions,
                        match=of.ofp_match.from_packet(packet, inport))
  
  event.connection.send(msg.pack())

The interesting part of the above flow modification request is the ofp_action_dl_addr action, which is responsible for replacing the destination MAC.

We can check the rule in OVS using the "sudo ovs-ofctl dump-flows s1" command.

cookie=0x0, duration=4.122s, table=0, n_packets=1, n_bytes=98,
idle_timeout=10, idle_age=4, priority=65535,icmp,in_port=1,vlan_tci=0x0000,
dl_src=56:08:eb:75:57:d7,dl_dst=5e:37:62:da:fb:24,
nw_src=10.0.0.1,nw_dst=10.0.0.2,nw_tos=0,icmp_type=8,
icmp_code=0 actions=mod_dl_dst:5e:37:62:da:fb:24,output:2

3. If the destination port and MAC are not yet known, the packet buffer id is stored in a waiting list.

entry = (time.time() + MAX_BUFFER_TIME,event.ofp.buffer_id,inport)
bucket.append(entry)

These packets will be sent out later, once the controller learns the previously unknown IPs.

self._send_lost_buffers(dpid, packet.next.srcip, packet.src, inport)

And an ARP request asking about the unknown IP address is flooded from all ports.

r = arp()
r.hwtype = r.HW_TYPE_ETHERNET
r.prototype = r.PROTO_TYPE_IP
r.hwlen = 6
r.protolen = r.protolen
r.opcode = r.REQUEST
r.hwdst = ETHER_BROADCAST
r.protodst = dstaddr
r.hwsrc = packet.src
r.protosrc = packet.next.srcip
e = ethernet(type=ethernet.ARP_TYPE, src=packet.src, dst=ETHER_BROADCAST)
e.set_payload(r)
log.debug("%i %i ARPing for %s on behalf of %s" % (dpid, inport, str(r.protodst), str(r.protosrc)))
msg = of.ofp_packet_out()
msg.data = e.pack()
msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
msg.in_port = inport
event.connection.send(msg)

4. If an ARP request is received and the requested IP address is already known, an ARP reply is constructed and sent back to the asking host. Otherwise, the ARP request is simply flooded from all ports of the particular switch. A sketch of constructing such a reply is shown below.
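
The following is a hedged sketch of building such a reply, mirroring the request-building code above and using the same arp/ethernet classes. The variables a (the received ARP request), requested_mac, dpid_mac and inport are stand-ins for the module's own state, not its actual names.

r = arp()
r.hwtype = r.HW_TYPE_ETHERNET
r.prototype = r.PROTO_TYPE_IP
r.hwlen = 6
r.protolen = 4
r.opcode = r.REPLY
r.hwdst = a.hwsrc             # back to the host that asked
r.protodst = a.protosrc
r.protosrc = a.protodst       # the IP address that was asked about
r.hwsrc = requested_mac       # the MAC the controller has learned for that IP
e = ethernet(type=ethernet.ARP_TYPE, src=dpid_mac, dst=a.hwsrc)
e.set_payload(r)
msg = of.ofp_packet_out()
msg.data = e.pack()
msg.actions.append(of.ofp_action_output(port = of.OFPP_IN_PORT))
msg.in_port = inport
event.connection.send(msg)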

5. One more detail regarding this module is the option to specify default gateway IP addresses on the command line. This populates the ARP table of every switch with the following entry for each gateway address.

self.arpTable[dpid][IPAddr(fake)] = Entry(of.OFPP_NONE, dpid_to_mac(dpid))


Learning POX OpenFlow controller : Another L2 switch implementation


The application name is l2_pairs.py.
This module works in a reactive mode, meaning that rules are installed as soon as a flow is observed by the controller. Compared to l2_learning, this module supports any number of switches and uses only MAC addresses for matching frames.

A global dictionary is maintained where a pair of connection and MAC address is used as the key and a port is the value.

table[(event.connection,packet.src)] = event.port
dst_port = table.get((event.connection,packet.dst))

When the controller receives a packet whose destination MAC cannot be resolved to a destination port, the packet is sent back to the originating switch with a command to output it from all ports except the one it was received on.

msg = of.ofp_packet_out(data = event.ofp)
msg.actions.append(of.ofp_action_output(port = all_ports))
event.connection.send(msg)

Only when the dictionary already contains the destination port are two rules installed, one for each direction, while the packet is sent out from the destination port.

The first rule looks the following way.

msg = of.ofp_flow_mod()
msg.match.dl_dst = packet.src
msg.match.dl_src = packet.dst
msg.actions.append(of.ofp_action_output(port = event.port))
event.connection.send(msg)

The second rule looks slightly different.

msg = of.ofp_flow_mod()
msg.data = event.ofp
msg.match.dl_src = packet.src
msg.match.dl_dst = packet.dst
msg.actions.append(of.ofp_action_output(port = dst_port))
event.connection.send(msg)

The assignment of event.ofp, i.e. the PacketIn data, to msg.data is a trick to perform an extra action while installing the rule, namely to output the packet.

It is not difficult to calculate the number of rules that would be installed for a topology with one switch and several hosts attached. The formula here is N squared minus N, where N is the number of hosts. Thus, for a topology of one switch and four hosts it results in 16 - 4 = 12 rules. The output of the sudo ovs-ofctl dump-flows s1 OVS command, taken just after the pingall command has been executed inside Mininet, is presented below.

cookie=0x0, duration=2.703s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=5a:0c:d5:0a:cf:dd,dl_dst=fa:20:3e:ae:e7:23 actions=output:4
cookie=0x0, duration=2.733s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=fa:20:3e:ae:e7:23,dl_dst=02:a7:f3:f9:24:b5 actions=output:2
cookie=0x0, duration=3.041s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=02:a7:f3:f9:24:b5,dl_dst=56:7f:57:70:73:fb actions=output:1
cookie=0x0, duration=3.007s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=56:7f:57:70:73:fb,dl_dst=5a:0c:d5:0a:cf:dd actions=output:3
cookie=0x0, duration=2.877s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=fa:20:3e:ae:e7:23,dl_dst=56:7f:57:70:73:fb actions=output:1
cookie=0x0, duration=2.845s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=02:a7:f3:f9:24:b5,dl_dst=5a:0c:d5:0a:cf:dd actions=output:3
cookie=0x0, duration=2.665s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=fa:20:3e:ae:e7:23,dl_dst=5a:0c:d5:0a:cf:dd actions=output:3
cookie=0x0, duration=3.077s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=56:7f:57:70:73:fb,dl_dst=02:a7:f3:f9:24:b5 actions=output:2
cookie=0x0, duration=2.772s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=02:a7:f3:f9:24:b5,dl_dst=fa:20:3e:ae:e7:23 actions=output:4
cookie=0x0, duration=2.986s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=5a:0c:d5:0a:cf:dd,dl_dst=56:7f:57:70:73:fb actions=output:1
cookie=0x0, duration=2.805s, table=0, n_packets=3, n_bytes=238, idle_age=2, dl_src=5a:0c:d5:0a:cf:dd,dl_dst=02:a7:f3:f9:24:b5 actions=output:2
cookie=0x0, duration=2.913s, table=0, n_packets=2, n_bytes=196, idle_age=2, dl_src=56:7f:57:70:73:fb,dl_dst=fa:20:3e:ae:e7:23 actions=output:4

