
Release/Hydrogen/Virtualization/Installation Guide

This content was created for the Hydrogen release; it is out of date, considered deprecated, and unlikely to be updated in the future. Read it with that in mind.

Virtualization Contents
Virt. Install

Virt. User Guide
Virt. Release Notes
Developer Guide
Hydrogen Main

Installing From Zip

The simplest way to install OpenDaylight is via the pre-built zip file.

Getting the Zip File

You can find the OpenDaylight Hydrogen Release Virtualization Edition zip file on the downloads page (direct link to 0.1.1 zip):

Installing

Prerequisites:
  • OpenDaylight Hydrogen has been developed and tested for Java 1.7 JVMs and JDKs
  • Note: on some platforms, there are known issues with Oracle Java 1.7.0_21 and 1.7.0_25, but 1.7.0_45 and 1.7.0_51 have worked fine
  • In general, OpenDaylight requires appropriate setting of the JAVA_HOME directory
  • More information can be found in the OpenDaylight Hydrogen Release Notes.
Understanding The Structure:
  • The main content of OpenDaylight Hydrogen is in a directory called opendaylight, which contains the following files and directories:
  • run.sh — launches OpenDaylight on Linux/Mac/Unix systems
  • run.bat — launches OpenDaylight on Windows systems
  • run.base.sh —
  • run.internal.sh —
  • externalapps — for applications such as VTN
  • version.properties — indicates the build version
  • configuration — basic initialization files (internal to OpenDaylight)
  • lib — Java libraries
  • plugins — OpenDaylight's OSGi plugins
Running OpenDaylight:
  • Enter ./run.sh -virt   {ovsdb | opendove | vtn}   [advanced options]   on Linux/Mac/Unix, or similarly on Windows run ./run.bat with administrator privileges, to launch OpenDaylight.
  • You must select one of the three supported network virtualization technologies: ovsdb, opendove, or vtn
  • Advanced options : [-jmx]   [-jmxport <num>]   [-debug]   [-debugsuspend]   [-debugport <num>]   [-start [<console port>]]   [-stop]   [-status]   [-console]   [-help]   [-of13]   [-bundlefilter <bundlefilter>]   [<other args will automatically be used for the JVM>]
  • Navigate to http://<ip-address-of-machine-where-you-ran-opendaylight>:8080 to open the web interface, then use the following credentials to log in:
  • User: admin
  • Password: admin
  • If you are running OpenDaylight on the same machine as your browser, you can browse to http://localhost:8080 or http://127.0.0.1:8080 to avoid needing to know the IP address of the machine you are using.
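Putting the steps above together, a typical first launch on Linux might look like the following sketch; the JDK path is illustrative and varies by system:

```shell
# Point JAVA_HOME at a Java 7 JDK; this path is an example only.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk
cd opendaylight
# Launch the Virtualization edition with the OVSDB technology and
# OpenFlow 1.3 support enabled:
./run.sh -virt ovsdb -of13
```

Once the controller is up, browse to port 8080 as described above and log in as admin/admin.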


Installing From RPM

Getting the RPM

Install

Use this method to install OpenDaylight using the yum repo:

  1. Install the yum repo
  Download the repo file from:
  https://nexus.opendaylight.org/content/repositories/opendaylight-yum-fedora-19-x86_64/rpm/opendaylight-release/0.1.0-2.fc19.noarch/opendaylight-release-0.1.0-2.fc19.noarch.rpm
  Install the repo file:
  sudo rpm -Uvh opendaylight-release-0.1.0-2.fc19.noarch.rpm
  
  2. Install opendaylight
  Base Edition: sudo yum install opendaylight
  Virtualization Edition: sudo yum install opendaylight-virtualization
  Service Provider: sudo yum install opendaylight-serviceprovider

Use this method if the OpenDaylight packages have already been downloaded:

  sudo rpm -Uvh /path/to/rpms/*.rpm
  or:
  sudo yum localinstall /path/to/rpms/*.rpm

Enable/disable

  sudo systemctl enable opendaylight-controller.service
  sudo systemctl disable opendaylight-controller.service

Start/stop

  sudo systemctl start opendaylight-controller.service
  sudo systemctl stop opendaylight-controller.service
Note(1): to get the OSGi console: telnet 127.0.0.1 2400.
Use ctrl+] to break from the OSGi console, then type quit to exit the telnet session.
Note(2): to reach the OpenDaylight page use the following url: http://127.0.0.1:8080/

Configuration

Edit the sysconfig file to change the type of edition:

  sudo vi /etc/sysconfig/opendaylight-controller

Set ODL_DIST to the desired edition, e.g.,

  ODL_DIST="virt-ovsdb"
Note: using a virtualization or service-provider edition requires that the compatible rpm be installed.

Use the following table to select the edition:

Note: only the base and virt-ovsdb editions are supported when installed via rpm
Edition                   Value
base                      base
virtualization ovsdb      virt-ovsdb
virtualization vtn        virt-vtn
virtualization opendove   virt-opendove
virtualization affinity   virt-affinity
service provider          sp

Any additional options, e.g., -of13 and -debug, can be specified like this:

  ODL_OPTS="-debug -of13"

The default installation assumes the Base edition with OpenFlow 1.0:
ODL_DIST="base"
ODL_OPTS=""

To use the OVSDB Virtualization edition with OpenFlow 1.3, use the following values:
ODL_DIST="virt-ovsdb"
ODL_OPTS="-of13"
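As a non-interactive alternative to editing the file by hand, the same change can be scripted. This sketch works on a temporary copy so it is safe to try anywhere; on a real system the target would be /etc/sysconfig/opendaylight-controller:

```shell
# Work on a temporary copy of the sysconfig file (safe to run anywhere).
cfg=$(mktemp)
printf 'ODL_DIST="base"\nODL_OPTS=""\n' > "$cfg"

# Switch to the OVSDB Virtualization edition with OpenFlow 1.3.
sed -i -e 's/^ODL_DIST=.*/ODL_DIST="virt-ovsdb"/' \
       -e 's/^ODL_OPTS=.*/ODL_OPTS="-of13"/' "$cfg"

cat "$cfg"
```

After editing the real file, restart the service (sudo systemctl restart opendaylight-controller.service) so the change takes effect.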

VirtualBox Image

Getting the VirtualBox Image

You can find the OpenDaylight Hydrogen Release VirtualBox Image on the download page here.

VM description

Installation Procedure

Prerequisites

  • VirtualBox (if you use QEMU or VMware, you can find instructions online on how to convert the ova file for those hypervisors)

Installation steps

  1. Download the VM ova file from the link above
  2. Open VirtualBox and choose File > Import Appliance
  3. Configure the VM with the following recommended settings
    • Processor: 4x CPU if you plan to run the controller in the VM, just 1 if you don't
    • RAM: 4GB if you plan to run the controller in the VM, or just 1GB if you don't
    • Network: 1x NIC, bridge mode is recommended, otherwise NAT (to share your Internet connection) or host-only (creates internal network)
  4. Start the VM
  5. Login
    • for Ubuntu VM, Login with mininet/mininet
    • for Fedora (where available), Login with odl/odl ; The root password is "password"
  6. Open README.txt

Using the VM

This VM can be used in two scenarios:

  • Self-contained: Both OpenDaylight and the mininet network emulator will run in this VM
  • Network Emulator: The mininet network emulator will run in this VM, and OpenDaylight can be run on an external machine or another VM
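In the Network Emulator scenario, mininet inside the VM is pointed at the externally running controller. A minimal sketch, assuming the controller's IP address is 192.168.1.10 (substitute your own):

```shell
# Build a small tree topology and attach it to a remote OpenDaylight
# controller listening on the standard OpenFlow port 6633.
sudo mn --topo tree,2 --controller=remote,ip=192.168.1.10,port=6633
```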

Docker Image

What is Docker

Docker, provided by docker.io and available in most Linux distributions as well as on macOS and Windows, is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

For more information on docker please read docker.io's documentation.

The sudo command and the docker Group

(reprinted from docker.io's basic documentation):

The docker daemon always runs as the root user, and since Docker version 0.5.2, the docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root, and so, by default, you can access it with sudo.

Starting in version 0.5.3, if you (or your Docker installer) create a Unix group called docker and add users to it, then the docker daemon will make the ownership of the Unix socket read/writable by the docker group when the daemon starts. The docker daemon must always run as the root user, but if you run the docker client as a user in the docker group then you don't need to add sudo to all the client commands.

OpenDaylight Docker Images

There are public images available via the public docker repository. You can find them by issuing a docker search command for 'opendaylight', e.g.:

   $ docker search opendaylight
   Found 3 results matching your query ("opendaylight")
   NAME                                     DESCRIPTION
   opendaylight/base-edition                The base OpenDaylight SDN controller
   opendaylight/serviceprovider-edition     The service provider version of the OpenDaylight SDN controller
   opendaylight/virtualization-edition      The virtualization version of the OpenDaylight SDN controller

Each of these images has version tags that allow a specific version to be selected by name. `latest` is also a supported tag identifying the latest official release. For the first release of OpenDaylight, the version tag is hydrogen.

Using the Image

The OpenDaylight docker image starts an instance of the OpenDaylight SDN controller when the image is run. Any command line options you append to the docker run command are passed on to the OpenDaylight run.sh startup script. In its simplest form, you can invoke an instance of the OpenDaylight controller using the command:

   docker run -d <image-identifier> -virt <virtualization-type>

Where <image-identifier> can be one of the pre-built image references, e.g., opendaylight/virtualization-edition. Additional information and options for 'running' a docker image can be found in docker.io's run documentation.

Ports

The OpenDaylight controller image will expose the following ports from the container to the host system:

  • 1088 - JMX access
  • 1830 - Netconf use
  • 2400 - OSGi console
  • 4342 - Lisp Flow Mapping (for Service Provider Edition only)
  • 5666 - ODL Internal clustering RPC
  • 6633 - OpenFlow use
  • 7800 - ODL Clustering
  • 8000 - Java debug access
  • 8080 - OpenDaylight web portal
  • 8383 - Netconf use
  • 12001 - ODL Clustering

By default these ports will not be mapped to ports on the host system (i.e. the system on which the docker run command is invoked). To understand how to enable docker container instances to communicate without having to 'hard wire' the port information, see docker.io's documentation on linking.

If you wish to map these ports to specific port numbers on the host system, this can be accomplished as command line options to the docker run command using the 'port map' option specified using the -p option. The syntax for this option is documented in docker.io's run documentation, but is essentially -p <host-port>:<container-port>.
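For example, to run the virtualization image while exposing the web portal and OpenFlow ports on the host, a sketch (the image tag and port numbers follow the lists above):

```shell
# Map the container's web portal (8080) and OpenFlow (6633) ports to
# the same port numbers on the host.
docker run -d -p 8080:8080 -p 6633:6633 \
    opendaylight/virtualization-edition:hydrogen -virt ovsdb
```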

Clustering

OpenDaylight supports the concept of clustering using a command line option -Dsupernodes to support high availability.

The docker images can be used to set up a cluster on a single docker server (host) using the docker naming and linking capability along with some modifications that were made to the OpenDaylight's processing of the supernodes host specifications.

NOTE: The cluster configuration setup described in this document does not work for containers that are running on separate hosts. Supporting clustering with docker images across hosts is an advanced topic that relies on setting up virtual networks between the containers and is beyond the scope of this introduction.

To support docker based clustering the syntax of the supernodes parameter has been extended. The important changes are:

  • +self - interpreted as a reference to the local host's address (not 127.0.0.1) and will be resolved to an IP address through the environment variable HOSTNAME.
  • +<name> - interpreted as a reference to another container, <name>, and will be resolved using the environment variables defined by docker when the -link command line option is used

It is important to note that these extensions will only be used if OpenDaylight determines that it is running inside a container. This is determined by the value of the environment variable container being set to lxc.

All values not prefixed by a + will be interpreted normally.

Below is an example of starting up a three node cluster using this syntax:

   $ docker run -d -name node1 opendaylight/virtualization -Dsupernodes=+self -virt vtn
   a8435cc23e13cb4e04c3c9788789e7e831af61c735d14a33025b3dd6c76e2938
   $ docker run -d -name node2 -link node1:n1 opendaylight/virtualization -Dsupernodes=+self:+n1 -virt vtn
   fa0b37dfd216291e36fd645a345751a1a6079123c99d75326a5775dce8414a93
   $ docker run -d -name node3 -link node1:n1 opendaylight/virtualization -Dsupernodes=+self:+n1 -virt vtn
   9ad6874aa85cad29736030239baf836f46ceb0c242baf873ab455674040d96b1

The cluster can be verified through the OpenDaylight user interface. This can be accomplished by first determining the IP address of one of the nodes:

   $ docker inspect -format='{{.NetworkSettings.IPAddress}}' node1
   172.17.0.46

After determining the IP address you can view the web interface by entering http://172.17.0.46:8080 in the browser address bar, authenticating with the default user name and password (admin/admin), and then viewing the cluster information by selecting Cluster from the right-hand drop-down menu. A popup window should show all the nodes in the cluster, with the master marked with a C and the node to which you are currently connected marked with a * (asterisk).
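The address lookup and a basic reachability check can also be scripted; a sketch, assuming the node1 container from the clustering example above is running:

```shell
# Look up node1's container address and confirm the web portal answers.
IP=$(docker inspect -format='{{.NetworkSettings.IPAddress}}' node1)
curl -s -o /dev/null -w '%{http_code}\n' "http://$IP:8080/"
```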

Install VTN Coordinator

Installation from Virtualization Edition (Hydrogen)