Monday, November 18, 2013

Part 3 - How to install OpenStack Havana on Ubuntu - KVM and the hypervisor services

This is the third post of a multi-part installation guide describing how I installed OpenStack Havana. This post specifically covers installing KVM and the nova-compute and neutron-plugin-openvswitch-agent services. The first post discussed how to deploy the prerequisites, OpenStack Dashboard (Horizon), Keystone, and the Nova services. The second post covered installing and configuring the Neutron services using the ML2 plugin, Open vSwitch agent, and GRE tunneling. At a minimum I recommend reading the first post, as the PoC design is explained there.

On to the steps...

Install KVM


I'm going to assume that you have enabled virtualization support in your BIOS; if not, go do that now.
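If you aren't sure whether it's enabled, a quick check is to count the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo; a result of 0 means hardware virtualization isn't exposed to the OS.
# egrep -c '(vmx|svm)' /proc/cpuinfo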

Install Ubuntu 12.04.2

Personally, I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation, follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots, log back in via SSH or the console and elevate to superuser.

Install the primary and supporting OpenStack Ubuntu packages

Once apt-get is configured to use the Havana repository we can retrieve the KVM and OpenStack packages and install them. I choose to install as much as possible at once, but if you aren't comfortable with this, feel free to break the package installation up into chunks.
# apt-get install -y kvm libvirt-bin pm-utils openvswitch-datapath-dkms nova-compute-kvm neutron-plugin-openvswitch-agent python-mysqldb
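Before moving on, it doesn't hurt to confirm that libvirt came up after the package installation; the upstart job should report start/running.
# service libvirt-bin status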

Configure the supporting services

NTP


The NTP client on the compute node(s) should be pointed to the controller, which in my case is called havana-wfe and uses an IP address of 192.168.1.110.
# sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server ntp.ubuntu.com/server 192.168.1.110/g' /etc/ntp.conf && service ntp restart
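You can verify the client is syncing against the controller by querying the peer list; after a few minutes 192.168.1.110 should appear with a non-zero reach value.
# ntpq -p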

Disable packet filtering


Reverse path filtering (rp_filter) needs to be disabled, so let's update the /etc/sysctl.conf file again and run sysctl to apply the changes immediately.
# sed -i 's/#net.ipv4.conf.default.rp_filter=1/net.ipv4.conf.default.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.default.rp_filter=0 && sed -i 's/#net.ipv4.conf.all.rp_filter=1/net.ipv4.conf.all.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.all.rp_filter=0
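You can confirm both settings took effect immediately:
# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0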

Update the guestfs permissions


On Ubuntu the kernel images under /boot are readable only by root, which breaks libguestfs for non-root users such as nova; we need to change that.
# chmod 0644 /boot/vmlinuz*
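Note that this only fixes the kernel images already present; a kernel update will ship a new vmlinuz with the default root-only permissions. If you want the change to stick across upgrades, a dpkg-statoverride entry along these lines should work (shown here for the running kernel only; this is an extra step I'm suggesting, not part of the package setup):
# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)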

Remove the SQLite DB


There is a SQLite database created by the nova-common package; we don't need it since Nova uses the MySQL database on the controller, so we can remove it.
# rm /var/lib/nova/nova.sqlite

Configure the nova-compute service


Edit the configuration files

Earlier we installed the nova-compute-kvm package, now we need to configure it. There are a few methods to do this:
  • Copy the existing nova.conf and Nova api-paste.ini files from the nova-api server and update where necessary
  • Update the default nova.conf and Nova api-paste.ini files via copy and paste from the existing files on the nova-api server
  • Manually enter the information

In almost all cases I recommend copying the files from the nova-api server and updating them in place, as this ensures consistency across the OpenStack infrastructure nodes. The root account is disabled on Ubuntu, so we need to do some pre-work.

SSH into the server running nova-api, elevate (or just use sudo), make a new directory, copy the files, and then update the ownership to the non-privileged user account you are going to use to SFTP the files.
# mkdir nova_cfg_files && cd nova_cfg_files && cp /etc/nova/nova.conf /etc/nova/api-paste.ini . && chown richard *.* && exit
$ exit
We are now back at the nova-compute shell and we need to SFTP the files from havana-wfe back to nova-compute, overwrite the existing files, and then restart the nova-compute service.
# mkdir nova_cfg_files && cd nova_cfg_files
# sftp richard@192.168.1.110
sftp> cd nova_cfg_files/
sftp> get *.*
sftp> quit
# cp nova.conf /etc/nova/nova.conf && cp api-paste.ini /etc/nova/api-paste.ini && chown nova:nova /etc/nova/nova.conf /etc/nova/api-paste.ini
Open /etc/nova/nova.conf in a text editor, search for the vncserver_proxyclient_address property, change the value to 192.168.1.113, and save the file.
vncserver_proxyclient_address = 192.168.1.113
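If you prefer to script this like the other changes in this guide, a sed one-liner along these lines works; it assumes the property already exists in the file you copied over, so adjust the match to your environment.
# sed -i 's/^vncserver_proxyclient_address.*/vncserver_proxyclient_address = 192.168.1.113/' /etc/nova/nova.conf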
Restart the nova-compute service to consume the changes.
# restart nova-compute
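If nova-compute doesn't come back up cleanly, the compute log is the first place to look:
# tail -f /var/log/nova/nova-compute.log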

Nova service validation

Validating the Nova services is pretty easy; it's a single command using the nova-manage utility. Look for the :-) under the State column; XXX in the State column indicates a failed service. You should now see the nova-compute host listed with the nova-compute service.
# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:59
nova-conductor   havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:53
nova-consoleauth havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:51
nova-scheduler   havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:50
nova-compute     nova-compute                         nova             enabled    :-)   2013-10-29 17:30:51

Configure Open vSwitch


Check the Open vSwitch service status

I normally run through these steps to make sure Open vSwitch is running with the correct openvswitch module loaded. You don't have to, but I find it cuts down on the troubleshooting I do later on.
# lsmod | grep openvswitch
openvswitch            66857  0
If it isn't there then we need to insert it into the running kernel.
# modprobe openvswitch
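Keep in mind that modprobe only loads the module for the current boot. The openvswitch-switch job normally takes care of loading it at startup, but if you find the module missing after a reboot you can make the load persistent:
# echo openvswitch >> /etc/modules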
Next I verify that the correct openvswitch kernel module has been compiled and installed.
# modinfo openvswitch
filename:       /lib/modules/3.2.0-55-generic/updates/dkms/openvswitch.ko
version:        1.10.2
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     EBF7178BF66BA8C40E397CB
depends:
vermagic:       3.2.0-55-generic SMP mod_unload modversions
If the version returned isn't 1.10.2 then you need to install the openvswitch-datapath-dkms Ubuntu package.
# apt-get install openvswitch-datapath-dkms
Once the datapath module has been compiled and installed re-run the modinfo command to verify that the correct version has been installed.

Verify that the openvswitch-switch service is running.
# service openvswitch-switch status
openvswitch-switch start/running
Finally I query the openvswitch-switch service via the ovsdb protocol to make sure it responds.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    ovs_version: "1.10.2"

Create the integration bridge

The virtual machines require off-VM network connectivity, and it is provided by two interconnected Open vSwitch bridges. One OVS bridge connects to the VM vifs and is called the integration bridge; the second OVS bridge hosts the GRE endpoints and is called the tunnel bridge. The integration bridge must be created manually and will be managed by the neutron-plugin-openvswitch-agent. Normally the integration bridge is named br-int, so we will stick with that name.
# ovs-vsctl add-br br-int
You can verify that the br-int OVS bridge has been successfully created by running ovs-vsctl show again.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"
We need to now configure the Neutron agent to work with Open vSwitch.

Configure the Neutron agent


The OVS bridge we just created will be managed by the OpenStack neutron-plugin-openvswitch-agent, which in turn is managed by the neutron-server using the ML2 plugin. The neutron-plugin-openvswitch-agent uses four config files:
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/plugins/ml2/ml2_conf.ini
  • /etc/init/neutron-plugin-openvswitch-agent.conf

We need to update or replace all four, and instead of building the files from scratch I'm going to copy them from the havana-network node.

Update/replace the neutron-plugin-openvswitch-agent configuration files

Remember that the root account is disabled on Ubuntu, so we need to do some pre-work. SSH into the network node, elevate (or just use sudo), make a new directory, copy the files into this directory, and update the ownership to the non-privileged user account you are going to use to SFTP the files back to the nova-compute server.
# mkdir neutron_files && cd neutron_files && cp /etc/neutron/api-paste.ini /etc/neutron/neutron.conf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/init/neutron-plugin-openvswitch-agent.conf . && chown richard *.* && exit
$ exit
We are now back in the nova-compute shell and need to make a new directory, SFTP the files from havana-network back to nova-compute, overwrite the existing files, update the files, and then restart the neutron-plugin-openvswitch-agent service.
# mkdir neutron_files && cd neutron_files && sftp richard@192.168.1.111
sftp> cd neutron_files/
sftp> get *.*
sftp> quit
# chgrp neutron *.* && cp api-paste.ini /etc/neutron/api-paste.ini && cp neutron.conf /etc/neutron/neutron.conf && mkdir -p /etc/neutron/plugins/ml2 && cp ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini && cp neutron-plugin-openvswitch-agent.conf /etc/init/neutron-plugin-openvswitch-agent.conf
We don't need to make any updates to the /etc/neutron/api-paste.ini, /etc/neutron/neutron.conf, or /etc/init/neutron-plugin-openvswitch-agent.conf files, so we will leave those alone. We do need to update the ml2_conf.ini file: the local_ip property in /etc/neutron/plugins/ml2/ml2_conf.ini must point to the nova-compute server's designated GRE endpoint IP.
# sed -i 's/local_ip = 172.16.0.10/local_ip = 172.16.0.11/g' /etc/neutron/plugins/ml2/ml2_conf.ini
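A quick grep confirms the change took:
# grep local_ip /etc/neutron/plugins/ml2/ml2_conf.ini
local_ip = 172.16.0.11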

Cleanup and verification

Restart the neutron-plugin-openvswitch-agent service

# restart neutron-plugin-openvswitch-agent
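If the agent fails to start or dies shortly afterwards, check its log under /var/log/neutron/:
# tail -f /var/log/neutron/openvswitch-agent.log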

Verify that neutron-plugin-openvswitch-agent is working

To verify that the agent is working we can re-run the ovs-vsctl show command.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    Bridge br-tun
        Port "gre-172.16.0.10"
            Interface "gre-172.16.0.10"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.10"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.10.2"
Your output should look similar to the above, and you will notice that a few things have changed:
  • The OVS tunnel bridge has been generated automatically (called br-tun)
  • An OVS port of the type gre has been constructed automatically on the br-tun bridge, thereby establishing a GRE tunnel with the havana-network server.
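One last sanity check worth doing: the GRE port shows up in ovs-vsctl even if the underlying connectivity is broken, so from the compute node, ping the network node's tunnel endpoint IP to confirm the two can actually reach each other.
# ping -c 3 172.16.0.10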

If your output looks like this, you are good to go. You can now log into Horizon and start building your cloud.