
VTN:Beryllium:User Guide:OpenStack Support

How to set up OpenStack for the integration with VTN Manager

This guide describes how to set up OpenStack for integration with OpenDaylight Controller.

While the OpenDaylight Controller provides several ways to integrate with OpenStack, this guide focuses on the approach that uses the VTN features available on the OpenDaylight controller. In this integration, VTN Manager works as the network service provider for OpenStack. The VTN Manager features enable OpenStack to work in a pure OpenFlow environment in which all switches in the data plane are OpenFlow switches.

Requirements

  1. OpenDaylight Controller. (VTN features must be installed.)
  2. OpenStack Control Node.
  3. OpenStack Compute Node.
  4. OpenFlow Switch like mininet (not mandatory).
The VTN features support multiple OpenStack nodes; you can deploy multiple OpenStack Compute Nodes.
In the management plane, the OpenDaylight Controller, the OpenStack nodes and the OpenFlow switches should be able to communicate with each other.
In the data plane, the Open vSwitch instances running in the OpenStack nodes should communicate with each other through a physical or logical OpenFlow switch. The core OpenFlow switches are not mandatory; you can also connect the Open vSwitch instances to each other directly.

OpenStack Demo Picture.png

Sample Configuration

The below steps depict the configuration of a single OpenStack Control node and a single OpenStack Compute node. Our test setup is as follows:

Odl vtn devstack setup.png

Server Preparation

  1. Install Ubuntu 14.04 LTS in two servers (OpenStack Control node and Compute node respectively).
  2. While installing, Ubuntu mandates the creation of a user; we created the user "stack". We will use the same user for running devstack.
  3. Proceed with the below mentioned User Settings and Network Settings in both the Control and Compute nodes.
User Settings for devstack
1. Log in to both servers.
2. Disable ufw
    sudo ufw disable
3. Install the below package (optional; it provides the ifconfig and route commands, which are handy for debugging)
    sudo apt-get install net-tools
4. Edit /etc/sudoers (sudo vim /etc/sudoers) and add an entry as follows
    stack ALL=(ALL) NOPASSWD: ALL
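To confirm the entry took effect, the sudo privileges granted to the stack user can be listed (an optional quick check):
    # Should report that stack may run (ALL) NOPASSWD: ALL
    sudo -l -U stack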
Network Settings
  1. We checked the output of ifconfig -a; two interfaces were listed, eth0 and eth1, as indicated in the image above.
  2. We connected the eth0 interface to the network where the ODL Controller is reachable.
  3. The eth1 interface in both servers was connected to a different network to act as the data plane for the VMs created using OpenStack.
  4. We manually edited the file /etc/network/interfaces (sudo vim /etc/network/interfaces) and made entries as follows:
   stack@ubuntu-devstack:~/devstack$ cat /etc/network/interfaces
   # This file describes the network interfaces available on your system
   # and how to activate them. For more information, see interfaces(5).
   # The loop-back network interface
   auto lo
   iface lo inet loopback
   # The primary network interface
   auto eth0
   iface eth0 inet static
        address <IP_ADDRESS_TO_REACH_ODL>
        netmask <NET_MASK>
        broadcast <BROADCAST_IP_ADDRESS>
        gateway <GATEWAY_IP_ADDRESS>
   auto eth1
   iface eth1 inet static
        address <IP_ADDRESS_UNIQ>
        netmask <NETMASK>

Note:

 1. Please ensure that the eth0 interface is the default route and that it is able to reach the ODL_IP_ADDRESS.
 2. The entries for eth1 are not mandatory. If they are not set, you may have to run "ifup eth1" manually after stacking is complete to activate the interface.
Finalize the User and Network Settings
  1. Please reboot both nodes after completing the user and network settings so that the settings take effect.
  2. Log in again and check the output of ifconfig to ensure that both interfaces are listed.
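You can also confirm that eth0 holds the default route towards the ODL controller (a quick check, assuming the net-tools package installed earlier):
    # The default route should point out of eth0
    route -n
    # Or, with iproute2:
    ip route show default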

ODL Settings and Execution

VTN Configuration for OpenStack Integration:
  • VTN uses the configuration parameters from the 90-vtn-neutron.xml file for the OpenStack integration.
  • These values will be set for Open vSwitch in all the participating OpenStack nodes.
  • A configuration file 90-vtn-neutron.xml will be generated automatically by following the below steps:
  • Download the latest Beryllium karaf distribution from the below link,
   http://www.opendaylight.org/software/downloads
  • cd to distribution-karaf-0.4.0-Beryllium and run karaf using the command ./bin/karaf.
  • Install the below feature to generate 90-vtn-neutron.xml,
feature:install odl-vtn-manager-neutron
  • Log out of the karaf console and check the 90-vtn-neutron.xml file in the following path: distribution-karaf-0.4.0-Beryllium/etc/opendaylight/karaf/.
  • The contents of 90-vtn-neutron.xml should be as follows:
bridgename=br-int
portname=eth1
protocols=OpenFlow13
failmode=secure
  • The values of the configuration parameters must be changed based on the user environment.
    • In particular, portname should be configured carefully; if the value is wrong, the OpenDaylight controller will fail to forward packets.
    • The other parameters work fine as-is for general use cases (a quick verification sketch follows this list).
  • bridgename
    • The name of the bridge in Open vSwitch, that will be created by OpenDaylight Controller.
    • It must be "br-int".
  • portname
    • The name of the port that will be created in the vbridge in Open vSwitch.
    • This must be the name of the interface on the OpenStack nodes that is used for interconnecting them in the data plane (in our case, eth1).
    • By default, if 90-vtn-neutron.xml is not generated, VTN uses ens33 as portname.
  • protocols
    • OpenFlow protocol through which OpenFlow Switch and Controller communicate.
    • The values can be OpenFlow13 or OpenFlow10.
  • failmode
    • The value can be "standalone" or "secure".
    • Please use "secure" for general use cases.
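Once stacking completes (see the Devstack Setup section below), these parameters can be cross-checked on each OpenStack node. A minimal sketch, assuming the br-int bridge has already been created by the controller:
    # Confirm the protocol version and fail mode pushed to Open vSwitch
    sudo ovs-vsctl get Bridge br-int protocols
    sudo ovs-vsctl get Bridge br-int fail_mode
    # Confirm the data-plane port (eth1 in our case) is attached to br-int
    sudo ovs-vsctl list-ports br-int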
Start ODL Controller
  1. Please refer to the Installation Pages to run ODL with VTN Feature enabled.
  2. After running the ODL Controller, please ensure that the controller listens on the ports 6633, 6653, 6640 and 8080 (a quick check is shown after the note below).
  3. Please allow these ports in the firewall so that devstack can communicate with the ODL Controller.

Note

6633/6653 - OpenFlow Ports
6640 - OVS Manager Port
8080 - Port for REST API
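The listening ports can be verified on the machine running the ODL controller, for example (netstat is part of net-tools; ss from iproute2 works equally well):
    # Each of the ports above should show up in LISTEN state
    sudo netstat -tlnp | grep -E '6633|6653|6640|8080'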

Devstack Setup

Get Devstack (All nodes)
1. Install git using
  sudo apt-get install git
2. Get devstack
  git clone https://git.openstack.org/openstack-dev/devstack
3. Switch to the stable/juno branch
  cd devstack
  git checkout stable/juno

Note:

  • If you want to use the stable/kilo branch, please execute the below command in the devstack folder:
  git checkout stable/kilo
  • If you want to use the stable/liberty branch, please execute the below command in the devstack folder:
  git checkout stable/liberty
Stack Control Node
  1. Prepare local.conf in the devstack folder (an illustrative sketch is given after this list).
  2. cd to the devstack folder on the control node and run
 ./stack.sh
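The exact local.conf used in our setup is not reproduced here. The sketch below is only an illustration of the kind of entries a control-node local.conf contains for an OpenDaylight-backed deployment; the ODL-specific names (ODL_MGR_IP and the opendaylight ML2 mechanism driver) are assumptions and should be checked against the VTN devstack instructions for your branch.
 [[local|localrc]]
 # Control node management address (eth0 in our setup)
 HOST_IP=<CONTROL_NODE_IP_ADDRESS>
 SERVICE_HOST=$HOST_IP
 # Use Neutron instead of nova-network
 disable_service n-net
 enable_service q-svc q-dhcp q-meta neutron
 Q_PLUGIN=ml2
 # Hand L2 networking over to the OpenDaylight controller (assumed variable names)
 Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight
 ODL_MGR_IP=<ODL_IP_ADDRESS>
 # Passwords used elsewhere in this guide (Horizon login uses "labstack")
 ADMIN_PASSWORD=labstack
 DATABASE_PASSWORD=labstack
 RABBIT_PASSWORD=labstack
 SERVICE_PASSWORD=labstack
 SERVICE_TOKEN=labstack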
Verify Control Node stacking
  • stack.sh prints out Horizon is now available at http://<CONTROL_NODE_IP_ADDRESS>:8080/
  • Execute the command sudo ovs-vsctl show in the control node terminal and verify that the bridge br-int is created.
  • Typical output of the ovs-vsctl show is indicated below
   e232bbd5-096b-48a3-a28d-ce4a492d4b4f
   Manager "tcp:192.168.64.73:6640"
       is_connected: true
   Bridge br-int
       Controller "tcp:192.168.64.73:6633"
           is_connected: true
       fail_mode: secure
       Port "eth1"
          Interface "eth1"
   ovs_version: "2.0.2"
Stack Compute Node
  1. Prepare local.conf in the devstack folder on the compute node (a minimal sketch is given after this list).
  2. cd to the devstack folder on the compute node and run
 ./stack.sh
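The compute-node local.conf is likewise not reproduced here. In a typical multi-node devstack setup it differs from the control node mainly in pointing SERVICE_HOST at the control node and enabling only the compute-side services; the exact service list should be taken from the VTN devstack instructions for your branch. A minimal, assumption-laden sketch:
 [[local|localrc]]
 # Compute node management address (eth0 on this server)
 HOST_IP=<COMPUTE_NODE_IP_ADDRESS>
 # All shared services run on the control node
 SERVICE_HOST=<CONTROL_NODE_IP_ADDRESS>
 ODL_MGR_IP=<ODL_IP_ADDRESS>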
Verify Compute Node stacking
  • stack.sh prints out This is your host ip: <COMPUTE_NODE_IP_ADDRESS>
  • Execute the command sudo ovs-vsctl show in the compute node terminal and verify that the bridge br-int is created.
  • The output of ovs-vsctl show will be similar to the one seen in the control node.
Additional Verifications
  • Please visit the ODL DLUX GUI after stacking all the nodes: http://<ODL_IP_ADDRESS>:8181/index.html. The switches, the topology and the ports currently learned by the controller can be validated there.

Some Tips
  • If the interconnection between the OVS instances is not seen, please bring up the data-plane interface manually using the below command
  ifup <interface_name> 

Note for Beryllium release version: If you are using the Beryllium release version, you need to manually add flow entries to the OpenFlow switches (including Mininet switches, if used). The flow entries are needed to forward packets to the controller when there is a table miss. This configuration is required only in the case of OpenFlow 1.3 or when using OVS versions (>2.1.1).

ovs-ofctl --protocols=OpenFlow13 add-flow br-int priority=0,actions=output:CONTROLLER
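To confirm the table-miss entry is in place, you can dump the flow table (a quick check, assuming the bridge is named br-int as above):

ovs-ofctl --protocols=OpenFlow13 dump-flows br-int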

Note for Beryllium SR1 and later versions: There is no need to execute the above flow-add command on OF1.3+ switches if you use Beryllium SR1 or a later version, since VTN Manager itself installs the table-miss flow entry on the OF1.3+ switches so that unmatched packets are punted to the controller.

  • Please enable promiscuous mode on the networks involving the interconnect.

Create VM from Devstack Horizon GUI

  1. Login to http://<CONTROL_NODE_IP_ADDRESS>:8080/ to check the horizon GUI.
    OpenStackGui
    Enter admin as the User Name and labstack as the Password.
  2. We should first ensure that both the hypervisors (control node and compute node) are listed by clicking on the Hypervisors tab.
    Hypervisors
  3. Create a new Network from Horizon GUI.
    • Click on the Networks tab.
    • Click on the Create Network button.
      Network Created
    • A popup screen will appear.
    • Enter the network name and click the Next button.
      Step 1
    • Create a subnetwork by giving the Network Address and click the Next button.
      Step 2
    • Specify the additional details for the subnetwork (please refer to the image).
      Step 3
    • Click the Create button.
  4. Create VM Instance
    • Navigate to Instances tab in the GUI.
      Instance Creation
    • Click on the Launch Instance button.
    • Click on the Details tab to enter the VM details. For this demo we are creating ten VMs (instances).
      Launch Instance
    • In the Networking tab, we must select the network.
    • To do this, drag and drop the desired network from Available Networks to Selected Networks (i.e., drag vtn1, which was created above, from Available Networks to Selected Networks).
    • Click Launch to create the instances.
      Launch_Instance_network
    • Ten VMs will be created.
      Load_All_Instances
    • Click on any VM displayed in the Instances tab and click the Console tab.
      Instance Console
    • Log in to the VM console and verify connectivity with a ping command.
      Instance ping

Verification of Control and Compute Node after VM creation

  • Every time a new VM is created, more interfaces are added to the br-int bridge in OVS
  • Use sudo ovs-vsctl show to list the number of interfaces added
  • Please visit the DLUX GUI to see the new node connectors listed under each switch.
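For example, the ports attached to br-int on a node can be counted before and after spawning a VM (assuming the bridge name br-int from the configuration above):
  # One port/interface pair is added per VM tap device
  sudo ovs-vsctl list-ports br-int | wc -l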

Getting Started with DLUX

Ensure that you have created a topology and enabled the MD-SAL feature in the Karaf distribution before you use DLUX for network management.

Logging In

To log in to DLUX, after installing the application:

  • Open a browser and enter the login URL. If you have installed DLUX as a stand-alone application, the login URL is http://localhost:9000/DLUX/index.html. However, if you have deployed DLUX with Karaf, the login URL is http://<your IP>:8181/dlux/index.html.
  • Log in to the application with admin as both the user ID and the password.

NOTE: admin is the only user type available for DLUX in this release.

Working with DLUX

To get a complete DLUX feature list, install the restconf, odl-l2switch-switch, and related features when you start the DLUX distribution.
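For example, from the karaf console (the feature names below follow the Beryllium naming convention; verify them with feature:list -i for your distribution):

feature:install odl-restconf odl-l2switch-switch odl-dlux-core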

DLUX Page

NOTE: DLUX enables only those modules whose APIs are responding. If you enable just MD-SAL at the beginning and then start DLUX, only MD-SAL related tabs will be visible. If you enable AD-SAL karaf features while using the GUI, those tabs will appear automatically.

Viewing Network Statistics

The Nodes module on the left pane enables you to view the network statistics and port information for the switches in the network.

To use the Nodes module:

  • Select Nodes on the left pane.
  The right pane displays a table that lists all the nodes, node connectors and their statistics.
  • Enter a node ID in the Search Nodes tab to search by node connectors.
  • Click on the Node Connector number to view details such as port ID, port name, number of ports per switch, MAC Address, and so on.
  • Click Flows in the Statistics column to view Flow Table Statistics for the particular node like table ID, packet match, active flows and so on.
  • Click Node Connectors to view Node Connector Statistics for the particular node ID.

Viewing Network Topology

To view network topology:

  • Select Topology on the left pane. You will view the graphical representation on the right pane.
 In the diagram, blue boxes represent the switches, black represents the available hosts, and lines represent how the switches are connected.

NOTE: The DLUX UI does not provide the ability to add topology information. The topology should be created using the OpenFlow plugin. The controller stores this information in its database and displays it on the DLUX page when you connect to the controller using OpenFlow.

DLUX Topology Page

OpenStack PackStack Installation Steps

  • Please go through the below wiki page for OpenStack PackStack installation steps.
