Sunday, October 08, 2017

OpenStack Neutron Server configuration parameters and their values

During troubleshooting I was having difficulty tracing a specific Neutron configuration parameter; specifically, I needed to determine the order in which the default router types are applied. In the case of VMware Integrated OpenStack (aka VIO), Neutron Server uses NSX as the SDN controller, so two configuration files are in play:

  • /etc/neutron/neutron.conf
  • /etc/neutron/plugins/vmware/nsxv.ini

I was looking for the tenant_router_types value; the neutron-server log states it was set to shared, distributed, exclusive and I wanted to set it to distributed, shared, exclusive. I know this is an easy fix, just update the configuration files and restart neutron-server, but for some reason my brain wasn’t working well that day and I wasn’t able to find the parameter in either file.

I ultimately did some grepping in the neutron-server log to find the Python code that logs the configuration parameters as they are read from the configuration files. The logger in question, oslo_service.service (writing via oslo_config's log_opt_values), can be searched for in any of the OpenStack service logs and will give you not only the list of parameters used by that service but also all of the values that are assigned. I like to do this when I’m troubleshooting; this way I can keep the configuration version controlled while I change things.

Hopefully you are running a centralized log aggregation solution (e.g. vRealize Log Insight, Splunk, ELK); if so, you should be able to search for that string and retrieve the output. If you aren’t, and you are unfamiliar with the OpenStack logs, here are the exact steps to retrieve the parameters straight from the logs. Note that if the logs containing the Neutron Server startup sequence have rolled, you will have to restart the service so the values are logged again.

  1. SSH to the server running the Neutron-Server service. In the case of VIO you must SSH to the OMS first, then SSH to either loadbalancer01 if running in Compact Mode or controller01 in HA mode.

  2. Next, run the following command to grep the logs for the specific entry.

    grep -R 'oslo_service.service' /var/log/neutron
  3. This should return a bunch of lines that look like this:

    /var/log/neutron/neutron-server.log.1:2017-10-07 16:51:54.408 27313 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python2.7/dist-packages/oslo_config/cfg.py:2738
  4. You can also filter the log metadata out; this sed command isn’t perfect but it works for me.

    grep -R 'DEBUG oslo_service.service \[\-\]' /var/log/neutron/ | sed 's/.* \[\-\]//;s/ log_opt_values .*//;s/^[ \t]*//' | sed '/.*[\*]/,$!d' | sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'

    This should result in a bunch of lines that now look like this:

    agent_down_time                = 75
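
If you already know which parameter you are after you can skip the full dump and grep for it directly; here's a minimal example for the tenant_router_types value I was chasing, assuming the same log location as above.

    grep -R 'DEBUG oslo_service.service' /var/log/neutron/ | grep 'tenant_router_types'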

Monday, December 30, 2013

How to determine what Neutron security groups are assigned to what Keystone tenant/project

Today I was trying to use the Neutron Python CLI client to determine what Neutron security groups a specific tenant was using. I'm not sure if this is a Havana bug but regardless of method I continuously received all of the security groups for all of the tenants. I ended up resorting to a MySQL query using the Keystone and Neutron databases. Oh well, when I get a sec I'll search the Havana buglist for an answer as to why.

mysql> select keystone.project.name,neutron.securitygroups.name,neutron.securitygroups.id from keystone.project JOIN neutron.securitygroups where keystone.project.id=neutron.securitygroups.tenant_id;
+-------------+---------+--------------------------------------+
| name        | name    | id                                   |
+-------------+---------+--------------------------------------+
| service     | default | 0beeb42f-f96c-4ce9-beb2-cc8884121ac0 |
| QA          | default | 5525973b-2453-4d5b-9dac-b24e302f10db |
| admin       | default | 59f4a7f6-84d7-4059-a418-2fdd373b22ef |
| Engineering | default | ca1d104e-cf38-4a78-bbf8-656aea774e0c |
| Engineering | test    | d1f42d05-a8a2-4812-bd06-bf145567891f |
+-------------+---------+--------------------------------------+
5 rows in set (0.00 sec)
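
For what it's worth, the same query can be written with an explicit ON clause and run straight from the shell; this is just a readability tweak of the query above, so swap in your own MySQL credentials.

mysql -u root -p -e "SELECT keystone.project.name, neutron.securitygroups.name, neutron.securitygroups.id FROM keystone.project JOIN neutron.securitygroups ON keystone.project.id = neutron.securitygroups.tenant_id;"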

If anyone knows a better way please post it in the Comments section.

Thursday, November 28, 2013

(Possibly) Cool OpenStack Neutron and Open vSwitch tools

During my troubleshooting on neutron-plugin-openvswitch-agent I stumbled across two utilities/scripts that I definitely want to take a look at. Hopefully I’ll get some time in mid-December to play with them and post my experiences.

Debug Helper Script for Neutron - extends python-neutronclient to allow for deeper control of Neutron commands during troubleshooting. You can create, manipulate, and destroy probes on whatever L2 switch is in use.

ovs-sandbox - allows for mucking with Open vSwitch in a quasi-virtual environment. The script can construct a software simulated network environment and is used in the Open vSwitch Advanced Features Tutorial.

What commands are called during the startup of the neutron-plugin-openvswitch-agent?

I spent a few days last week troubleshooting a networking issue at a client that required me to step through the initialization of the neutron-plugin-openvswitch-agent under Havana. My client is using the Neutron ML2 plugin and the Open vSwitch (commonly referred to as “OVS”) agent configured to use GRE tunneling. The final resolution of the issue was simple, the correct OVS module had not been installed, but I generated a large amount of notes along the way and thought I would post them in hopes that they will be beneficial to someone else.

If you haven’t dealt with OpenStack in the past, know that the neutron-plugin-openvswitch-agent relies on several underlying services and utilities: the Open vSwitch kernel module, the Open vSwitch management utilities (ovs-vsctl and ovs-ofctl), iptables, and ip. The OpenFlow management API is also used to build and manage the OVS flow tables that OVS leverages to manipulate and direct network traffic.

If you are interested you can view these entries in the /var/log/neutron/openvswitch-agent.log file when the neutron-plugin-openvswitch-agent is configured for debug logging. Also in some of the steps below I will walk through some of the code as those commands were the most interesting for me at the time.

Before I continue I want to thank Kyle Mestery from Cisco (IRC nick: mestery) who helped point me in the right direction on the more intense code blocks. The primary references I used to decipher/interpret what is going on were the Neutron OVS agent source files themselves.

The primary code files I dissected were:

  • ovs_neutron_plugin.py
  • ovs_lib.py
  • constants.py

There are also some additional references scattered through the post.

All of the OVS commands are called via neutron-rootwrap or, if configured, just sudo. I also removed all of the single quotes and some of the commas for readability's sake.

sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

The agent starts out by retrieving the IP information of the OVS integration bridge (br-int) local port. The MAC address returned is reformatted for use as the suffix of the Neutron OVS agent’s ID.

ip -o link show br-int

If the br-tun patch port exists on the br-int bridge it is deleted.

ovs-vsctl --timeout=2 -- --if-exists del-port br-int patch-tun

Next any existing entries in the integration bridge flow table are deleted to ensure a clean environment and the first OpenFlow flow entry is added. The hard_timeout and idle_timeout arguments are set to 0 to ensure that the flow does not expire. The priority argument is set to 1 so that this flow takes precedence over any lower-priority flows. The actions=normal key-value pair indicates that the default L2 processing performed by Open vSwitch will occur.

ovs-ofctl del-flows br-int
ovs-ofctl add-flow br-int hard_timeout=0,idle_timeout=0,priority=1,actions=normal
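
If you want to sanity-check what the agent just did at this point, dumping the integration bridge's flows should show only that single NORMAL entry.

ovs-ofctl dump-flows br-int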

If tunneling is enabled (as it is in this case) the setup_tunnel_br function definition is called to configure the tunnel bridge (br-tun). The neutron-plugin-openvswitch-agent requires a specific OVS switch and OpenFlow flow configuration to operate; to achieve a clean slate the existing br-tun switch is destroyed (if it existed) and then recreated.

ovs-vsctl --timeout=2 -- --if-exists del-br br-tun
ovs-vsctl --timeout=2 add-br br-tun

Next the agent will add a new port to the br-int bridge…

ovs-vsctl --timeout=2 add-port br-int patch-tun

…configure the new port as a patch…

ovs-vsctl --timeout=2 set Interface patch-tun type=patch

…and assign the patch port to act as a peer to the patch-int port.

ovs-vsctl --timeout=2 set Interface patch-tun options:peer=patch-int

If ovs-vsctl show is executed now the OVS bridges will look like this. This is the base configuration for a Neutron node that is not running the router agent (L3 agent).

12cb27cf-6188-45c0-9421-b11ef1b865c8
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.10.2"

Prior to the next few steps the neutron-plugin-openvswitch-agent verifies that the patch port has been successfully created.

ovs-vsctl --timeout=2 get Interface patch-tun ofport

The agent then adds and configures a patch port on the br-tun bridge and verifies its creation using the same steps.

ovs-vsctl --timeout=2 add-port br-tun patch-int
ovs-vsctl --timeout=2 set Interface patch-int type=patch
ovs-vsctl --timeout=2 set Interface patch-int options:peer=patch-tun
ovs-vsctl --timeout=2 get Interface patch-int ofport

Even though the tunnel bridge was just created the Neutron OVS agent deletes any existing flows for the tunnel bridge. If you are like me you are probably thinking “Why? I just created it.” By default a single entry is created in the OpenFlow flow table and we need to remove it so it does not conflict with the flows that will be added.

ovs-ofctl del-flows br-tun

This next set of steps uses the output from the previous ovs-vsctl command to fill the in_port argument value. In this case the in_port value is 1, so remember that in_port=1 is really in_port=patch-int on the br-tun bridge.

You will also see that multiple OpenFlow flow tables are used and, in hopes of reducing your (and most definitely my) future confusion, here’s the breakdown of which table is which from the constants.py file. I recommend reading an excellent blog post by Assaf Muller from Red Hat Israel describing what each flow table does.

PATCH_LV_TO_TUN = 1
GRE_TUN_TO_LV = 2
VXLAN_TUN_TO_LV = 3
LEARN_FROM_TUN = 10
UCAST_TO_TUN = 20
FLOOD_TO_TUN = 21

The flow tables are effectively chained together by the flow entries themselves, and a flow’s priority establishes the packet manipulation hierarchy relative to other flow entries in the same flow table, with 0 being the lowest priority and 65535 the highest. There are also special entries in the flow table used to ensure that any OpenFlow management traffic always takes precedence when the controller is in-band, but those flows aren’t visible using the standard “dump-flows” arguments.
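
Two invocations I found handy while following along: the first dumps just one of the tables listed above (table 20 in this example) and the second, via ovs-appctl, shows the hidden in-band flows that a plain dump-flows leaves out. These are optional inspection commands, not part of the agent's startup sequence.

ovs-ofctl dump-flows br-tun table=20
ovs-appctl bridge/dump-flows br-tun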

Now onto the rest of the configuration…

A flow is inserted into the default OpenFlow flow table that uses the resubmit action to ensure that all packets entering the patch-int port are initially sorted to flow table 1 (PATCH_LV_TO_TUN).

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)"

Another flow is then inserted into the default OpenFlow flow table with the lowest priority (0) and will drop any other packets not sorted by the first flow.

ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=0,actions=drop

Unicast packets (identified by the dl_dst mask, which matches frames with the multicast bit cleared) are forwarded to OpenFlow flow table 20 (referred to as the UCAST_TO_TUN flow table). The flow has the lowest priority (priority=0) and no packet timeout (hard_timeout=0,idle_timeout=0).

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=0,table=1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00,actions=resubmit(,20)"

Multicast and broadcast packets (identified by the dl_dst mask, which matches frames with the multicast bit set) are forwarded to OpenFlow flow table 21 (FLOOD_TO_TUN). The flow uses the same priority (priority=0) and has no packet timeout (hard_timeout=0,idle_timeout=0); the mutually exclusive dl_dst masks, not the priority, are what separate it from the unicast flow.

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=0,table=1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00,actions=resubmit(,21)"

Next OpenFlow table 2 (GRE_TUN_TO_LV) and OpenFlow table 3 (VXLAN_TUN_TO_LV) are populated initially with entries that drop all traffic. These two tables are also used to set the local VLAN ID (or Lvid) used internally in the Open vSwitch switch itself based on the tunnel ID.

ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=0,table=2,actions=drop
ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=0,table=3,actions=drop

The following flow is more complex than the others; it is used to dynamically learn the MAC addresses traversing the switch dataplane. First the flow is inserted into OpenFlow table 10 (LEARN_FROM_TUN) with no idle or hard timeout for packets matched to the flow itself, with a higher priority of 1, and it outputs to the patch-int port.

The learn argument is used to modify an existing flow table, in this case flow table 20 (UCAST_TO_TUN). The priority is set to 1 to ensure that all of the packets are sorted via this flow first and a timeout is set to ensure that the new MAC address will eventually be removed if not seen again within the timeout period.

The remaining arguments are prefixed by the letters NXM, the abbreviation for ”Nicira Extended Match”. NXM is an Open vSwitch extension written by Nicira that provides a matching facility for network packets.

NXM_OF_VLAN_TCI[0..11] represents the internal-to-Open vSwitch (referred to as the Lvid) 802.1q VLAN tag control information (TCI) header. The VLAN ID occupies the low 12 bits of the TCI, hence the [0..11] range needed to capture it, and that value is saved into the learned flow’s match…

NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[] makes the learned flow match a destination MAC equal to the source MAC of the observed packet…

load:0->NXM_OF_VLAN_TCI[] clears the internal-to-Open vSwitch VLAN TCI on the learned flow…

load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[] copies the observed packet's GRE tunnel ID into the learned flow's tun_id…

output:NXM_OF_IN_PORT[] sets the learned flow's egress port to the port the observed packet arrived on…

…and finally output: 1 forwards the packet out via the original patch-int port.

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=1,table=10,actions=learn(table=20,priority=1,hard_timeout=300,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]), output:1"

Here’s a quick example of what two flows dynamically created by this learn argument look like. In this case the dl_dst value in each flow points to the specific network interface of VMs running on nova-compute nodes.

cookie=0x0, duration=339549.273s, table=20, n_packets=62605, n_bytes=16031792, hard_timeout=300, idle_age=3, hard_age=3, priority=1,vlan_tci=0x0003/0x0fff,dl_dst=fa:16:3e:b1:07:8f actions=load:0->NXM_OF_VLAN_TCI[],load:0x2->NXM_NX_TUN_ID[],output:2
cookie=0x0, duration=97.68s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=97, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:6c:a6:5a actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:3

The next flow is used to capture any remaining packets in table 20 and forward them to flow table 21 (FLOOD_TO_TUN) - basically a unicast packet whose destination MAC hasn’t been learned yet is unknown, and unknown unicast is treated like broadcast/multicast traffic and handed to the flood table.

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=0,table=20,actions=resubmit(,21)"

Finally, drop any of the packets that get to flow table 21 (FLOOD_TO_TUN).

ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=0,table=21,actions=drop

Next the Neutron OVS agent retrieves the list of existing bridges, filters out the integration and tunnel bridges, and then searches for any remaining bridges to determine whether any are externally linked and should be managed.

ovs-vsctl --timeout=2 list-br

Up to now the integration bridge (br-int) and tunnel bridge (br-tun) have been configured with their default specifications. The GRE tunnel ports have not been created or configured so the agent does that now.

The remote GRE endpoints are retrieved from the topology schema overseen/maintained by the neutron-server daemon. Notice that the GRE ports below are named using an IP prefixed by ‘gre’; this is one thing that has changed (IMHO for the better) in the latest iteration of the Neutron OVS agent. The IP address in the name is the remote host’s IP; in this case 172.31.254.29 is a remote nova-compute node’s IP address.

For brevity’s sake I’m only including the configuration steps for one remote GRE endpoint; this environment has many but the configuration is the same.

First a port is created on the tunnel bridge with the name gre-172.31.254.29…

ovs-vsctl --timeout=2 -- --may-exist add-port br-tun gre-172.31.254.29

…the port is configured as type gre…

ovs-vsctl --timeout=2 set Interface gre-172.31.254.29 type=gre

…a remote GRE endpoint is added…

ovs-vsctl --timeout=2 set Interface gre-172.31.254.29 options:remote_ip=172.31.254.29

…the local GRE endpoint is added…

ovs-vsctl --timeout=2 set Interface gre-172.31.254.29 options:local_ip=172.31.254.65

…the tunnel keys are set to flow so the tunnel IDs will be controlled by OpenFlow flows…

ovs-vsctl --timeout=2 set Interface gre-172.31.254.29 options:in_key=flow
ovs-vsctl --timeout=2 set Interface gre-172.31.254.29 options:out_key=flow

…and then a check is done to determine whether the port was created and configured successfully. In the background a comparison is done between the returned ofport value and -1. As long as the ofport value is greater than -1 the port was created/configured successfully. If -1 is returned the port configuration failed but the agent’s initialization sequence doesn’t stop.

ovs-vsctl --timeout=2 get Interface gre-172.31.254.29 ofport

The integer returned from the last command is also used to populate the in_port argument value in the next OpenFlow flow configuration command. Any network packets that ingress this new GRE endpoint port will be resubmitted to, and manipulated by, the flow entries in the GRE_TUN_TO_LV table created earlier.

ovs-ofctl add-flow br-tun "hard_timeout=0,idle_timeout=0,priority=1,in_port=4,actions=resubmit(,2)"

The next two commands tally the existing, added, and/or removed ports by comparing them against the ports that existed during the last poll of the Neutron OVS agent. Both of these commands are initiated by an if statement found on lines 1081 and 1082 in the ovs_neutron_plugin.py file:

if polling_manager.is_polling_required:
    port_info = self.update_ports(ports)

First the update_ports method in the same file is called via self.update_ports(ports). update_ports in turn calls the get_vif_port_set method in the ovs_lib.py file, which retrieves the list of existing integration bridge ports and assigns the result to a variable called port_names.

ovs-vsctl --timeout=2 list-ports br-int

Second, the list Interface command retrieves the ports with their names and external IDs (the attached MAC address, interface ID, interface status, and VM ID) and assigns the results to a second variable.

ovs-vsctl --timeout=2 --format=json -- --columns=name,external_ids list Interface

The two variables are compared; if a difference is found the divergence is calculated as one of two states, added or removed, and the network ports are created or deleted (in this case there was no divergence found so no logs…I’ll try to dig some up if I can).
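
If it helps to visualize the comparison, it is conceptually similar to diffing two snapshots of the port list from the shell. This is only a rough illustration of the set arithmetic, not what the Python code literally runs.

ovs-vsctl --timeout=2 list-ports br-int | sort > /tmp/ports.previous
# ...the agent polls again on the next loop...
ovs-vsctl --timeout=2 list-ports br-int | sort > /tmp/ports.current
comm -13 /tmp/ports.previous /tmp/ports.current   # ports added since the last poll
comm -23 /tmp/ports.previous /tmp/ports.current   # ports removed since the last poll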

Lastly the firewall needs to be refreshed for the new ports, both IPv4 and IPv6.

iptables-save -c
iptables-restore -c
ip6tables-save -c
ip6tables-restore -c
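
If you want to eyeball the rules the agent maintains for the security groups, grepping the saved ruleset for the neutron-prefixed chains is a quick way to do it; the exact chain names will vary with your setup.

iptables-save | grep -i neutron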

At this point the underlying structure has been built for the plugin to work and the next steps update the OVS bridges and OpenFlow flow tables to support the existing ports. Once those (if any) ports have been created and configured the neutron-plugin-openvswitch-agent will poll periodically.

Monday, November 18, 2013

Part 3 - How to install OpenStack Havana on Ubuntu - KVM and the hypervisor services

This is the third post of a multi-part installation guide describing how I installed OpenStack Havana. This post specifically talks about installing KVM and the nova-compute and neutron-plugin-openvswitch-agent services. The first post discussed how to deploy the prerequisites, OpenStack Dashboard (Horizon), Keystone, and the Nova services. The second post talks about installing and configuring the Neutron services using the ML2 plugin, Open vSwitch agent, and GRE tunneling. At a minimum I recommend reading the first post as the PoC design is explained there.

On to the steps...

Install KVM


I'm going to assume that you have enabled virtualization support in your BIOS, if not, go do that now.

Install Ubuntu 12.04.2

Personally I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots log back in via SSH or the console and elevate to superuser.

Install the primary and supporting OpenStack Ubuntu packages

Once apt-get is configured to use the Havana repository we can retrieve the KVM and OpenStack packages and install them. I choose to install as much as possible at once but if you aren't comfortable with this please feel free to break the package installation up into chunks.
# apt-get install -y kvm libvirt-bin pm-utils openvswitch-datapath-dkms nova-compute-kvm neutron-plugin-openvswitch-agent python-mysqldb

Configure the supporting services

NTP


The NTP client on the compute node(s) should be pointed to the controller, which in my case is called havana-wfe and uses an IP address of 192.168.1.110.
# sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf && sed -i 's/server ntp.ubuntu.com/server 192.168.1.110/g' /etc/ntp.conf && service ntp restart
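A quick way to confirm the compute node is actually syncing against the controller after the restart is to query the NTP peers; the 192.168.1.110 entry should show up in the list.
# ntpq -p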

Disable packet filtering


Packet destination filtering needs to be disabled here too, so let's update the /etc/sysctl.conf file and run sysctl to apply the changes immediately.
# sed -i 's/#net.ipv4.conf.default.rp_filter=1/net.ipv4.conf.default.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.default.rp_filter=0 && sed -i 's/#net.ipv4.conf.all.rp_filter=1/net.ipv4.conf.all.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.all.rp_filter=0

Update the guestfs permissions


On Ubuntu, libguestfs access is limited to root; we need to change that.
# chmod 0644 /boot/vmlinuz*

Remove the SQLite DB


There is a SQLite DB created by the nova-common package; we don't need it, so we can remove it.
# rm /var/lib/nova/nova.sqlite

Configure the nova-compute service


Edit the configuration files

Earlier we installed the nova-compute-kvm package; now we need to configure it. There are a few methods to do this:
  • Copy the existing nova.conf and Nova api-paste.ini files from the nova-api server and update where necessary
  • Update the default nova.conf and Nova api-paste.ini files via copy and paste from the existing files on the nova-api server
  • Manually enter the information

In almost all cases I recommend copying the files from the nova-api server and updating them in place; this ensures consistency across the OpenStack infrastructure nodes. The root account is disabled on Ubuntu so we need to do some pre-work.

SSH into the server running nova-api, elevate (or just use sudo), make a new directory, copy the files, and then update the permissions to the non-privileged user account you are going to use to SFTP the files.
# mkdir nova_cfg_files && cd nova_cfg_files && cp /etc/nova/nova.conf /etc/nova/api-paste.ini . && chown richard *.* && exit
$ exit
We are now back at the nova-compute shell and we need to SFTP the files from havana-wfe back to nova-compute, overwrite the existing files, then restart the nova-compute service.
# mkdir nova_cfg_files && cd nova_cfg_files
# sftp richard@192.168.1.110
sftp> cd nova_cfg_files/
sftp> get *.*
sftp> quit
# chown nova:nova nova.conf && cp nova.conf /etc/nova/nova.conf && chown nova:nova /etc/nova/api-paste.ini && cp api-paste.ini /etc/nova/api-paste.ini
Open /etc/nova/nova.conf in a text editor, search for the vncserver_proxyclient_address property, change the value to 192.168.1.113, and save the file.
vncserver_proxyclient_address = 192.168.1.113
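If you'd rather stay consistent with the sed approach used elsewhere in this guide, the same change can be made in one line; this assumes the property already exists uncommented in the copied nova.conf, so adjust the pattern if yours differs.
# sed -i 's/^vncserver_proxyclient_address.*/vncserver_proxyclient_address = 192.168.1.113/' /etc/nova/nova.conf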
Restart the nova-compute service to consume the changes.
# restart nova-compute

Nova service validation

Validating the nova services is pretty easy, it's a single command using the nova-manage utility. Look for the :-) under the State column; xxx in the State column indicates a failed service. You should now see the nova-compute host listed with the nova-compute service.
# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:59
nova-conductor   havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:53
nova-consoleauth havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:51
nova-scheduler   havana-wfe                           internal         enabled    :-)   2013-10-29 17:30:50
nova-compute     nova-compute                         nova             enabled    :-)   2013-10-29 17:30:51

Configure Open vSwitch


Check the Open vSwitch service status

I normally do these steps to make sure Open vSwitch is running with the correct openvswitch module loaded. You don't have to do this but I find that it cuts down on any troubleshooting that I do later on.
# lsmod | grep openvswitch
openvswitch            66857  0
If it isn't there then we need to insert it into the running kernel.
# modprobe openvswitch
Next I verify that the correct openvswitch kernel module has been compiled and installed.
# modinfo openvswitch
filename:       /lib/modules/3.2.0-55-generic/updates/dkms/openvswitch.ko
version:        1.10.2
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     EBF7178BF66BA8C40E397CB
depends:
vermagic:       3.2.0-55-generic SMP mod_unload modversions
If the version returned isn't 1.10.2 then you need to install the openvswitch-datapath-dkms Ubuntu package.
# apt-get install openvswitch-datapath-dkms
Once the datapath module has been compiled and installed re-run the modinfo command to verify that the correct version has been installed.

Verify that the openvswitch-switch service is running.
# service openvswitch-switch status
openvswitch-switch start/running
Finally I query the openvswitch-switch service via the ovsdb protocol to make sure it responds.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    ovs_version: "1.10.2"

Create the integration bridge

The virtual machines require off-VM network connectivity and it is provided via two interconnected Open vSwitch bridges. One OVS bridge is used to connect to the VM vifs and is called the integration bridge; the second OVS bridge hosts the GRE endpoints and is called the tunnel bridge. The integration bridge must be manually created and will be managed by the neutron-plugin-openvswitch-agent. Normally the integration bridge is named br-int so we will stick with that name.
# ovs-vsctl add-br br-int
You can verify that the br-int OVS bridge has been successfully created by running ovs-vsctl show again.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"
We need to now configure the Neutron agent to work with Open vSwitch.

Configure the Neutron agent


The OVS bridge we just created will be managed by the OpenStack neutron-plugin-openvswitch-agent, while the agent itself is managed by the neutron-server using the ML2 plugin. The neutron-plugin-openvswitch-agent uses four config files:
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/plugins/ml2/ml2_conf.ini
  • /etc/init/neutron-plugin-openvswitch-agent.conf

We need to update or replace all four and instead of building the files from scratch I'm going to copy them from the havana-network node.

Update/replace the neutron-plugin-openvswitch-agent configuration files

Remember that the root account is disabled on Ubuntu so we need to do some pre-work. SSH into the network node, elevate (or just use sudo), make a new directory, copy the files into this directory, and update the permissions to the non-privileged user account you are going to use to SFTP the files back to the nova-compute server.
# mkdir neutron_files && cd neutron_files && cp /etc/neutron/api-paste.ini /etc/neutron/neutron.conf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/init/neutron-plugin-openvswitch-agent.conf . && sudo chown richard *.* && exit
$ exit
We are now back in the nova-compute shell and need to make a new directory, SFTP the files from havana-network back to nova-compute, overwrite the existing files, update the files, then restart the neutron-plugin-openvswitch-agent service.
# mkdir neutron_files && cd neutron_files && sftp richard@192.168.1.111
sftp> cd neutron_files/
sftp> get *.*
sftp> quit
# chgrp neutron *.* && cp api-paste.ini /etc/neutron/api-paste.ini && cp neutron.conf /etc/neutron/neutron.conf && mkdir /etc/neutron/plugins/ml2 && cp ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini && cp neutron-plugin-openvswitch-agent.conf /etc/init/neutron-plugin-openvswitch-agent.conf
We don't need to make any updates to the /etc/neutron/api-paste.ini, /etc/neutron/neutron.conf, or /etc/init/neutron-plugin-openvswitch-agent.conf files so we will leave those alone. We do need to make updates to the ml2_conf.ini file so open the /etc/neutron/plugins/ml2/ml2_conf.ini file in a text editor and update the local_ip property to point to the nova-compute server's designated GRE endpoint IP.
# sed -i 's/local_ip = 172.16.0.10/local_ip = 172.16.0.11/g' /etc/neutron/plugins/ml2/ml2_conf.ini

Cleanup and verification

Restart the neutron-plugin-openvswitch-agent service

# restart neutron-plugin-openvswitch-agent

Verify that neutron-plugin-openvswitch-agent is working

To verify that the agent is working we can re-run the ovs-vsctl show command.
# ovs-vsctl show
2a0dd496-cdcf-4bd5-9870-839e1bae4d5d
    Bridge br-tun
        Port "gre-172.16.0.10"
            Interface "gre-172.16.0.10"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.10"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.10.2"
The output should look similar to the above. You will notice that a few things have changed:
  • The OVS tunnel bridge has been generated automatically (called br-tun)
  • An OVS port of the type gre has been constructed automatically on the br-tun bridge, thereby establishing a GRE tunnel with the havana-network server.

If your output looks like this you are good to go. You can now log into Horizon and start to build your cloud.

Friday, November 15, 2013

Part 2 - How to install OpenStack Havana on Ubuntu - Neutron using the ML2 plugin + Open vSwitch agent configured to use GRE

This post is the second in a multi-part series describing how I installed and configured OpenStack Havana within a test environment. The first post detailed the environment setup and how to install/configure the prerequisites (database and MQ), Keystone, Nova, Glance, and Horizon services. This post will describe how to install and configure Neutron with the ML2 plugin and the Open vSwitch agent. If you haven't read the first post I recommend looking over it quickly, there's some good stuff that's relevant to this post.

Prerequisites


Here's the list of things I did before the OpenStack install:

Install Ubuntu 12.04.2

Personally I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots log back in via SSH or the console and elevate to superuser.

Configure the local networking

The local networking configuration for the havana-wfe controller was simple; the havana-network configuration is more complex.

At a minimum the network node requires two network interfaces (eth0 and eth1) and, if possible, use three network interfaces (eth0, eth1, eth2). Static IP addressing is used in the examples below and is recommended but not required. I'm going to walk you through both scenarios, pick the one that suits you best.

Two-interface scenario

The two-interface scenario uses the first network interface to provide connectivity for OpenStack management and to host VM traffic (specifically the GRE tunnels). The second network interface is used to provide external connectivity to and from remote networks, such as the Internet.

# OpenStack management and VM intra-OpenStack cloud traffic  
auto eth0  
iface eth0 inet static  
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1

# VM external access via the L3 agent
auto eth1  
iface eth1 inet manual  
up ifconfig $IFACE 0.0.0.0 up  
up ip link set $IFACE promisc on  
down ip link set $IFACE promisc off  
down ifconfig $IFACE down

Three-interface scenario

The three-interface scenario uses the first network interface to provide connectivity for OpenStack management, the second hosts VM traffic (specifically the GRE tunnels), and the third provides external connectivity to and from remote networks, such as the Internet.
# OpenStack management  
auto eth0  
iface eth0 inet static  
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1

# VM intra-OpenStack cloud traffic  
auto eth1  
iface eth1 inet static  
address 172.16.0.10
netmask 255.255.255.0

# VM external access via the L3 agent
auto eth2  
iface eth2 inet manual  
up ifconfig $IFACE 0.0.0.0 up  
up ip link set $IFACE promisc on  
down ip link set $IFACE promisc off  
down ifconfig $IFACE down

I chose the three-interface scenario. Here's what my /etc/network/interfaces configuration file looks like:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1 

# Hosts the VM GRE tunnels
auto eth1
iface eth1 inet static
address 172.16.0.10
netmask 255.255.255.0

# Provides external cloud connectivity via L3 agent
auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

There are a couple of options to sync the changes: you can restart networking...

# /etc/init.d/networking restart

...or restart the server.

# reboot

Install the primary and supporting OpenStack Ubuntu packages

# apt-get install -y neutron-server neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch openvswitch-datapath-dkms ntp python-mysqldb

Configure the supporting services


NTP configuration

All the OpenStack infrastructure VMs should point to the same NTP server. Update the /etc/ntp.conf file to point to the IP address or DNS A record of the primary NTP source and save the file. In my case I'm running NTP on the havana-wfe VM which is using the IP address 192.168.1.110. My /etc/ntp.conf file looks like this; note that I commented the NTP pool servers out and changed the fallback server to the IP of havana-wfe.

...
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board  
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for  
# more information.  
#server 0.ubuntu.pool.ntp.org  
#server 1.ubuntu.pool.ntp.org  
#server 2.ubuntu.pool.ntp.org  
#server 3.ubuntu.pool.ntp.org  

# Use Ubuntu's ntp server as a fallback.  
server 192.168.1.110
...

Save the file and restart the NTP service to sync the changes.

# service ntp restart

Enable IP forwarding and disable packet destination filtering

We need to enable IP forwarding (if you want more information on Linux IP forwarding go here). To do that we need to do two things: update the /etc/sysctl.conf file and, so we don't have to reboot, immediately configure the running kernel with sysctl.

# sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf && sysctl net.ipv4.ip_forward=1

We also need to disable packet destination filtering so let's update the /etc/sysctl.conf file again and run sysctl to sync the changes immediately.

# sed -i 's/#net.ipv4.conf.default.rp_filter=1/net.ipv4.conf.default.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.default.rp_filter=0 && sed -i 's/#net.ipv4.conf.all.rp_filter=1/net.ipv4.conf.all.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.all.rp_filter=0
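
To double-check that both settings took effect immediately, query the running kernel; each key should report the value we just set.

# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter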

Create an OpenStack credentials file

Same as we did in the first post, you will need an OpenStack credentials file. The easiest way is to just copy/SFTP the file from your controller over to this one; in any case, here are the directions if you don't want to. We already created the OpenStack admin user, so make sure the correct credentials are used.

export OS_AUTH_URL=http://192.168.1.110:5000/v2.0  
export OS_TENANT_NAME=admin  
export OS_USERNAME=admin  
export OS_PASSWORD=password

Let's add the values to your profile's environment.

# source creds_file_name
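
A quick sanity check that the credentials are good before moving on: if the values are correct, keystone should hand back a token rather than an authentication error.

# keystone token-get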

Create the Neutron MySQL database (if you haven't already done so)

If you followed the first post you have already created the database and you can skip down to the Neutron configuration section. If you didn't follow the first post (no biggie), hey, you need to do this. SSH into whatever node is running MySQL and log into MySQL as the root user or another privileged MySQL user.

# mysql -u root -p

Create the database, then create the new user "neutron", set the password, and assign privileges for the new user to the neutron database.

mysql> CREATE DATABASE neutron;  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';  
mysql> FLUSH PRIVILEGES;  
mysql> QUIT;
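
Before moving on it's worth confirming the neutron user can actually reach the new database over the network; this assumes MySQL is listening on 192.168.1.110 and that you kept 'password' as the password.

# mysql -u neutron -ppassword -h 192.168.1.110 -e 'SHOW DATABASES;'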

Configure Keystone (if you haven't already done so)

If you haven't yet configured Keystone to support Neutron then read on; otherwise head down to the Neutron configuration section. First we need to list the tenant IDs and role IDs; they are needed to create the new neutron user. We care about the service tenant ID and the admin role ID.

# keystone tenant-list && keystone role-list  
+----------------------------------+---------+---------+  
|        id                        |  name   | enabled |  
+----------------------------------+---------+---------+  
| 62178df3e23040d286a86059216cbfb6 | admin   |  True   |  
| 5b4d9fae6e5d4776b8400d6bb1af17a1 | service |  True   |  
+----------------------------------+---------+---------+  
+----------------------------------+----------------------+  
|        id                        |         name         |  
+----------------------------------+----------------------+  
| 3b545aa30a4d4965b76777fb0def3b8d |    KeystoneAdmin     |  
| 31f1ddb2dbad4a1b9560fe5bbce2fe5e | KeystoneServiceAdmin |  
| ffe5c15294cb4be99dfd2d41055603f3 |        Member        |  
| 9fe2ff9ee4384b1894a90878d3e92bab |       _member_       |  
| 24b384d1a7164a8dbb60747b7fb42d68 |        admin         |  
+----------------------------------+----------------------+

Now that we have the service tenant ID we can create the neutron user.

# keystone user-create --name=neutron --pass=password --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --email=neutron@revolutionlabs.net  
+----------+----------------------------------+  
| Property |              Value               |  
+----------+----------------------------------+  
|  email   |  neutron@revolutionlabs.net      |  
| enabled  |               True               |  
|    id    | e1e5326833684ab3bfefdf5f805cf22a |  
|   name   |              neutron             |  
| tenantId | 5b4d9fae6e5d4776b8400d6bb1af17a1 |  
+----------+----------------------------------+

Copy the new neutron user ID from the last step and add the admin role to the neutron user. We also verify that the user was created.

# keystone user-role-add --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --user-id=e1e5326833684ab3bfefdf5f805cf22a --role-id=24b384d1a7164a8dbb60747b7fb42d68  
# keystone user-role-list --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --user-id=e1e5326833684ab3bfefdf5f805cf22a  
+----------------------------------+----------+----------------------------------+----------------------------------+  
|        id                        |    name  |             user_id              |             tenant_id            |  
+----------------------------------+----------+----------------------------------+----------------------------------+  
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | e1e5326833684ab3bfefdf5f805cf22a | 5b4d9fae6e5d4776b8400d6bb1af17a1 |  
| 24b384d1a7164a8dbb60747b7fb42d68 |   admin  | e1e5326833684ab3bfefdf5f805cf22a | 5b4d9fae6e5d4776b8400d6bb1af17a1 |  
+----------------------------------+----------+----------------------------------+----------------------------------+

The neutron user has now been created and the admin role has been assigned. Two more things to do, one, create the Neutron service, and two, use the new service ID to assign a set of endpoints to the service. Since this is a lab I'm only using the default region. Replace the IP address with the Keystone Service API IP address or the DNS A record of the same host.

# keystone service-create --name=neutron --type=network  
+-------------+----------------------------------+  
|  Property   |               Value              |  
+-------------+----------------------------------+  
| description |                                  |  
|   id        | 8f9256fb735c4fd8a7584ea0cbbcaa84 |  
|   name      |              neutron             |  
|   type      |              network             |  
+-------------+----------------------------------+

# keystone endpoint-create --region=RegionOne --service-id=8f9256fb735c4fd8a7584ea0cbbcaa84 --publicurl=http://192.168.1.110:9696 --internalurl=http://192.168.1.110:9696 --adminurl=http://192.168.1.110:9696  
+-------------+----------------------------------+  
|  Property   |              Value               |  
+-------------+----------------------------------+  
|  adminurl   |    http://192.168.1.110:9696     |  
|     id      | 858b43c66ed84b409d0149dea3994d71 |  
| internalurl |    http://192.168.1.110:9696     |  
|  publicurl  |    http://192.168.1.110:9696     |  
|    region   |            RegionOne             |  
| service_id  | 8f9256fb735c4fd8a7584ea0cbbcaa84 |  
+-------------+----------------------------------+

Neutron configuration


Neutron uses several configuration files to run all of the services (neutron-server, neutron-plugin-openvswitch-agent, neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent). Here is a breakdown of which files are used by which service; note that we aren't changing the default command-line arguments for the DHCP, L3, or Metadata agents.

neutron-server
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/plugins/ml2/ml2_conf.ini
  • /etc/default/neutron-server
neutron-plugin-openvswitch-agent
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/plugins/ml2/ml2_conf.ini
  • /etc/init/neutron-plugin-openvswitch-agent.conf
neutron-dhcp-agent
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/dhcp_agent.ini
neutron-l3-agent
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/l3_agent.ini
neutron-metadata-agent
  • /etc/neutron/neutron.conf
  • /etc/neutron/api-paste.ini
  • /etc/neutron/metadata_agent.ini

We will need to make updates to all of the files for the Neutron services to work and to enable the ML2 plugin with the Open vSwitch agent.

neutron.conf

Open up the /etc/neutron/neutron.conf file with your favorite text editor and add or update the following items. Don't save the file until you complete all of them.

Core plugin


Even though the OVSNeutronPlugin is deprecated it is still listed as the default core plugin. We need to modify the core_plugin property value.

...
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
...

Advanced service modules


The advanced services (router, load balancers, FWaaS, or VPNaaS) need to be explicitly called out for their modules to be enabled. There are two places where they are defined but I'm only going to use the router (l3-agent) service for now. Find the service_plugins property and add the following text.

...
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
...

At the very end of the file there should be a property called service_provider under the service_providers section, comment it out.

...
[service_providers]
...
# service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

api_paste_config location


Verify that the api_paste_config property is pointing to the correct file location. If the property is uncommented, make sure the full path is present.

...
# Paste configuration file
api_paste_config = /etc/neutron/api-paste.ini
...

Allow overlapping IPs


Locate the allow_overlapping_ips property, uncomment it and change the value to True.
...
allow_overlapping_ips = True
...

RabbitMQ


Find the rabbit_host property, uncomment, then change the value to the IP of your RabbitMQ server.

...
rabbit_host = 192.168.1.110
...

Keystone authentication


Next find the [keystone_authtoken] section and update the properties to point to your Keystone server.

...
[keystone_authtoken]  
auth_host = 192.168.1.110 
auth_port = 35357  
auth_protocol = http  
admin_tenant_name = service  
admin_user = neutron  
admin_password = password 
signing_dir = $state_path/keystone-signing
...

Disable the database connection string


The ML2 plugin file takes care of the database connection string now so either remove the existing one or comment the connection string out.

...
[database]
...
# connection = sqlite:////var/lib/neutron/neutron.sqlite
...

Save the /etc/neutron/neutron.conf file and move on to the /etc/neutron/api-paste.ini file.

api-paste.ini

Open the /etc/neutron/api-paste.ini file with your favorite text editor. We need to update the [filter:authtoken] section with the Keystone info and then save the file.

...
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.110
admin_tenant_name = service  
admin_user = neutron  
admin_password = password
...

ML2 configuration

This is where things can get dicey. The ML2 plugin is installed with the neutron-server installation package and installs the files at /usr/share/pyshared/neutron/plugins/ml2. What I wasn't able to find though was a configuration file for ML2 so I used devstack to reverse-engineer it. We will have to create the directory structure and the ml2_conf.ini from scratch, then populate the file with entries.

First create the directory to hold the file.

# mkdir /etc/neutron/plugins/ml2

Next create the configuration file using your favorite text editor and paste the following text. Make sure you update the local_ip property to the correct IP address that will host the GRE tunnels and update the sql_connection property to point to your MySQL server. We will also change the owner of the file once it is saved.

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = gre

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = gre

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[database]
sql_connection = mysql://neutron:password@192.168.1.110/neutron

[ovs]
enable_tunneling = True
local_ip = 172.16.0.10

[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Now save the file and let's update the new directory and the new file's group owner.

# chgrp -R neutron /etc/neutron/plugins

The ML2 plugin has been configured, we now need to update the services to use the plugin.

neutron-server startup configuration

Update the /etc/default/neutron-server file to point to the ML2 plugin ini file and save the file.

# defaults for neutron-server
# path to config file corresponding to the core_plugin specified in
# neutron.conf
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"

Neutron Open vSwitch agent


The neutron-plugin-openvswitch-agent service also uses the configuration provided by the ML2 plugin ini file. Open the /etc/init/neutron-plugin-openvswitch-agent.conf file, update the second --config-file argument value to use the /etc/neutron/plugins/ml2/ml2_conf.ini file, and save the file.

...
exec start-stop-daemon --start --chuid neutron --exec /usr/bin/neutron-openvswitch-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/openvswitch-agent.log

Both the neutron-server and neutron-plugin-openvswitch-agent services are configured. We now need to configure the unique files for each of the remaining services.

DHCP agent


Open the /etc/neutron/dhcp_agent.ini file with your favorite text editor, verify that the following properties are set correctly, update the file where required, and save it.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
...
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
...
use_namespaces = True
...

L3 agent


Open the /etc/neutron/l3_agent.ini file with your favorite editor, verify that the following properties are set correctly, update the file where required, and save it.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
...
use_namespaces = True
...

Metadata agent


Open the /etc/neutron/metadata_agent.ini file with your favorite editor, verify that the following properties are set correctly, update the file where required, and save it. Make sure that the metadata_proxy_shared_secret value is the same as the neutron_metadata_proxy_shared_secret property's value in the /etc/nova/nova.conf file; the metadata agent uses the value as a shared secret to authenticate with the Nova metadata service.

[DEFAULT]
...
auth_url = http://192.168.1.110:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = password
...
nova_metadata_ip = 192.168.1.110
...
nova_metadata_port = 8775
...
metadata_proxy_shared_secret = helloOpenStack
...
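
For reference, the matching entries live on the Nova side in /etc/nova/nova.conf. The property names below are what I believe Havana expects, so treat this as a reminder and double-check them against your own nova.conf.

service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack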

Create the Open vSwitch bridges


The neutron-plugin-openvswitch-agent expects the integration bridge to exist, and the L3 agent needs an external bridge (br-ex); the tunnel bridge (br-tun) is created automatically by the agent. Make sure you add the correct network interface to the br-ex bridge; you want whichever interface is configured for promiscuous mode (eth2 in the three-interface layout I chose).

# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth2
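Before touching the services, a quick ovs-vsctl show should list both new bridges: br-int with only its internal port and br-ex with the external interface attached. The tunnel bridge (br-tun) will not appear until the agent has been restarted with the new configuration.
# ovs-vsctl show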

Restart the services and validate that everything works


At this point all of the Neutron services have been configured and we need to restart the services.

# restart neutron-server && restart neutron-plugin-openvswitch-agent && restart neutron-dhcp-agent && restart neutron-l3-agent && restart neutron-metadata-agent

To validate that all of the Neutron services are running correctly we can use the neutron python client. As with nova-manage service list, look for the :-) faces.

# neutron agent-list
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| id                                   | agent_type         | host                            | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| 3e14909e-6efd-4047-8685-2926b78d8a58 | DHCP agent         | havana-network                  | :-)   | True           |
| 42504987-f47f-491d-bbb9-e9e9ec9026b8 | Open vSwitch agent | havana-network                  | :-)   | True           |
| 66bc6757-5f42-446e-a158-511e6c271b0c | Open vSwitch agent | nova-compute                    | :-)   | True           |
| c6e5f3b5-8a16-409d-a005-eafa1cbb6bf8 | L3 agent           | havana-network                  | :-)   | True           |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+

Once the Neutron services have been restarted you should be able to log into Horizon. Open a web browser and point it to http://192.168.1.110/horizon, replacing the IP address with the havana-wfe IP or hostname. Hopefully Horizon will log in and present you with the dashboard in all of its glory.

Monday, November 11, 2013

Part 1 - How to install OpenStack Havana on Ubuntu - Prerequisites, Horizon, Keystone, and Nova

This post is the first of a multi-part series describing how I installed and configured OpenStack Havana within a test environment. This first post will discuss the method I used to install and configure the base OS, the OpenStack prerequisites, and Keystone, Nova, and Horizon. There are additional posts coming that will describe the install and configuration of Neutron, Cinder, Glance, Heat, and Ceilometer.

This effort is basically so I can get acquainted with the overall installation and determine whether there are any differences between the Grizzly install and the Havana install (spoiler alert - there are). This configuration screams PoC, so please keep that in mind.

Here's my test environment:
  • Ubuntu 12.04.3 LTS as the OS on all nodes
  • KVM as the hypervisor
  • MySQL hosts the databases
  • Neutron is used for networking
  • Keystone is used for authentication with MySQL as the backend
  • ML2 is the core L2 plugin
  • Open vSwitch is the L2 agent
  • UTC is the configured time zone on all of the nodes

Also, I want to give credit where credit is due, this post includes steps from multiple places plus my own additions/findings.

Note, I ran all of the commands as the superuser. If you aren't comfortable running as root just prefix sudo to all of the commands.

The "design"


As I mentioned this design is really for a lab or PoC, I do not recommend mimicking it for production use. I am hosting the lab on a dedicated (read: non-OpenStack-controlled) KVM server and will use a second KVM server to act as the nova-compute node.
  • A compute "controller" VM called havana-wfe will run the majority of the services (listed below). It has been configured with a single network device and assigned the IP address 192.168.1.110.
    • apache2+django
    • cinder-api, cinder-scheduler
    • glance-api, glance-registry
    • horizon
    • keystone
    • memcached
    • mysql
    • nova-api, nova-cert, nova-conductor, nova-consoleauth, nova-novncproxy, nova-scheduler
    • rabbitmq
  • A network "controller" VM called havana-network will run neutron-server, neutron-dhcp-agent, neutron-l3-agent, neutron-plugin-openvswitch-agent, and neutron-metadata-agent. It has been configured with three network devices:
    • eth0 provides management traffic communication to and from the other OpenStack agents and is assigned 192.168.1.111
    • eth1 hosts the GRE tunnels and is assigned 172.16.0.10
    • eth2 will be used by the l3-agent for external connectivity in/out of the OpenStack cloud and therefore is configured to live in promiscuous mode
  • The second KVM server is called nova-compute and will run the nova-compute and neutron-plugin-openvswitch-agent services. It is configured with two network devices:
    • eth0 is used for management and assigned the IP address 192.168.1.113
    • eth1 hosts the GRE tunnels and is assigned the IP address 172.16.0.11

Prerequisites


Here's the list of things I did before the OpenStack install:

Install Ubuntu 12.04.2

Personally I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots log back in via SSH or the console and elevate to superuser.

Install the primary and supporting OpenStack Ubuntu packages

Once apt-get is configured to retrieve the correct OpenStack packages we can go ahead and start installing them. I chose to install as much as possible at once, but if you aren't comfortable with this please feel free to break the package installation up into chunks.
# apt-get -y install mysql-server python-mysqldb rabbitmq-server ntp keystone python-keystone python-keystoneclient glance nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-ajax-console-proxy python-novaclient cinder-api cinder-scheduler openstack-dashboard memcached libapache2-mod-wsgi

Configure the supporting services

MySQL, NTP, and RabbitMQ were just installed. We only need to configure MySQL; I'm leaving RabbitMQ and NTP as-is for now.

Update the MySQL bind address


MySQL maintains most of its configuration in a file located at /etc/mysql/my.cnf. By default MySQL is configured to only allow local connections, so we need to update the bind-address value to allow remote connections. Edit the my.cnf file and replace the existing value (more than likely it is 127.0.0.1) with the primary IP address of the MySQL server.
bind-address = 192.168.1.110
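If you'd rather not open an editor, a one-line sed can make the same change; this is just a sketch that assumes my.cnf still carries its stock bind-address line.
# sed -i 's/^bind-address.*/bind-address = 192.168.1.110/' /etc/mysql/my.cnf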
Once the config file has been updated restart MySQL.
# service mysql restart

Secure MySQL


It's good practice to remove the basic security flaw...oops, I mean "features". The mysql_secure_installation script asks you to change the root password, removes the default anonymous accounts, disables the ability for the root account to log in remotely to MySQL, and drops the test databases.
# mysql_secure_installation

Create the databases


We need to log into MySQL now to create the databases for some of the OpenStack services. Ceilometer uses MongoDB and Swift doesn't use a database.
# mysql -u root -p
Create each database, then create the unique user, set the password, and assign privileges for the new user to the corresponding database. If you want, you can replace the username and password values with whatever you want the service's username and password to be; just make sure that you update the service's configuration file later in the install.

Cinder

mysql> CREATE DATABASE cinder;  
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'password';

Glance

mysql> CREATE DATABASE glance;  
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';

Keystone

mysql> CREATE DATABASE keystone;  
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'password';

Neutron

mysql> CREATE DATABASE neutron;  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';

Nova

mysql> CREATE DATABASE nova;  
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';

Cleanup

mysql> FLUSH PRIVILEGES;
mysql> QUIT;
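If you prefer to script the whole thing instead of typing each statement, a rough shell loop like the one below creates the same five databases and users. It assumes you are keeping the literal password 'password' everywhere and it will prompt for the MySQL root password on every pass.
# for svc in cinder glance keystone neutron nova; do mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS $svc; GRANT ALL PRIVILEGES ON $svc.* TO '$svc'@'localhost' IDENTIFIED BY 'password'; GRANT ALL PRIVILEGES ON $svc.* TO '$svc'@'%' IDENTIFIED BY 'password';"; done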

Create two OpenStack credential files


Yeah, I know it isn't secure but it's a lab and this way I don't have to type as much.

Create a random seed


First let's create a custom character string to act as the token password. The returned value is the seed we will use as a "password" to access Keystone prior to creating any accounts.
# openssl rand -hex 10
4fcadbe846130de04cfc

Update the Keystone admin_token value


We need to replace the existing admin_token value (it's probably ADMIN) under the [DEFAULT] section in the /etc/keystone/keystone.conf file with the OpenSSL seed. It should be the first property you see.
admin_token = 4fcadbe846130de04cfc
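A sed one-liner can handle this as well; the sketch below assumes the admin_token line is either commented out or still set to its default, and you should obviously substitute your own OpenSSL seed.
# sed -i 's/^#\? *admin_token.*/admin_token = 4fcadbe846130de04cfc/' /etc/keystone/keystone.conf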

Create the files


Next we create two new text files somewhere on the Keystone server and add the following entries. Using these values allows one to bypass the username/password credential requirement for Keystone. Make sure that the OS_SERVICE_TOKEN value matches the value in the /etc/keystone/keystone.conf file. I named this file admin.token.creds, where admin refers to the user, token refers to the auth method, and creds indicates that it is a credentials file.
export OS_SERVICE_TOKEN=4fcadbe846130de04cfc
export OS_SERVICE_ENDPOINT=http://192.168.1.110:35357/v2.0
I named this file admin.user.creds, where admin refers to both the OS_TENANT_NAME and the OS_USERNAME, user refers to the auth method, and creds indicates that it is a credentials file.
export OS_AUTH_URL=http://192.168.1.110:5000/v2.0
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password

Add the authentication values to your profile


Let's add the OS_SERVICE_TOKEN values to your profile's environment. We have to retrieve a token to interact with the OpenStack services, so by using these two values we are effectively bypassing the password requirement for a user. Note that we are also authenticating against the Keystone admin URL, not the internal URL.
# source admin.token.creds
We should now be able to interact with Keystone once it is configured.

Keystone


Update the configuration file

Open /etc/keystone/keystone.conf with a text editor and update the connection value in the [sql] section to the correct string.
connection = mysql://keystone:password@192.168.1.110/keystone

Import the schema

We need to create all of the tables and configure them; the Keystone developers created a specific command to do this.
# keystone-manage db_sync

Restart Keystone

Restarting the Keystone service needs to occur prior to the next steps so that the updated OS_SERVICE_TOKEN value can be used. We could probably do this earlier in the process but I'd prefer to not have to deal with errors.
# restart keystone
You can also verify that Keystone has restarted correctly by tailing the log files.
# tail -f /var/log/keystone/keystone.log
--OR--
# tail -f /var/log/upstart/keystone.log
Look for two entries, one for starting the admin endpoint and a second referencing the public endpoint.
2013-11-12 18:28:48.240 3724 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2013-11-12 18:28:48.241 3724 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000
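Another quick sanity check is to confirm that keystone-all is actually listening on both ports:
# netstat -lnt | grep -E ':(5000|35357)'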

Create the objects

We need to populate Keystone with tenants, users, roles, and services and then configure each of them. Fyi, there's an easier way than the manual method: mseknibilel very graciously created two Keystone population scripts that are referred to in his OpenStack Grizzly Install Guide. If you aren't interested in the manual way you can go that route, but I haven't tried them yet with Havana so YMMV.

Create the tenants

# keystone tenant-create --name=admin
# keystone tenant-create --name=service
Note the service tenant ID, you will need it below when you create the individual service users.
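If you want to avoid copying the ID around by hand, you can stash it in a shell variable; this is just a sketch that scrapes the tenant-list table output, so adjust it if your output differs. You can then use $SVC_TENANT_ID wherever ADD_SVC_TENANT_ID appears below.
# SVC_TENANT_ID=$(keystone tenant-list | awk '/ service / {print $2}')
# echo $SVC_TENANT_ID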

Create the admin user

# keystone user-create --name=admin --pass=password --email=admin@revolutionlabs.net

Create the base roles


(I need to figure out why we are creating two separate roles for keystone "administration". I think it's for separation of duties using the policy.json files between and within the admin tenant and service tenant but I very well could be wrong. Those two roles aren't really used anywhere that I can see.)
# keystone role-create --name=admin
# keystone role-create --name=KeystoneAdmin
# keystone role-create --name=KeystoneServiceAdmin
# keystone role-create --name=Member

Assign the roles to the admin user


(Hey python-keystoneclient developers, it would be nice to get a confirmation or something stating that the role assignment worked...)
# keystone user-role-add --tenant=admin --user=admin --role=admin
# keystone user-role-add --tenant=admin --user=admin --role=KeystoneAdmin
# keystone user-role-add --tenant=admin --user=admin --role=KeystoneServiceAdmin

Create the individual service users


Replace the ADD_SVC_TENANT_ID text with the ID from the Create the tenants step above.
# keystone user-create --name=cinder --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=cinder@revolutionlabs.net
# keystone user-create --name=glance --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=glance@revolutionlabs.net
# keystone user-create --name=neutron --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=neutron@revolutionlabs.net
# keystone user-create --name=nova --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=nova@revolutionlabs.net

Assign the admin role to each of users

# keystone user-role-add --tenant=service --user=cinder --role=admin
# keystone user-role-add --tenant=service --user=glance --role=admin
# keystone user-role-add --tenant=service --user=neutron --role=admin
# keystone user-role-add --tenant=service --user=nova --role=admin
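To verify that the assignments took, the client can list the roles a user holds within a tenant. The Havana-era python-keystoneclient had a user-role-list subcommand, but double-check the exact flags on your client version.
# keystone user-role-list --tenant=service --user=glance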

Create the individual services

# keystone service-create --name=cinder --type=volume
# keystone service-create --name=glance --type=image
# keystone service-create --name=keystone --type=identity
# keystone service-create --name=neutron --type=network
# keystone service-create --name=nova --type=compute
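The service IDs you need for the next step come straight from the service list; if you like, you can also scrape an ID into a variable (a sketch, assuming the usual table output format).
# keystone service-list
# NOVA_SVC_ID=$(keystone service-list | awk '/ nova / {print $2}')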

Create the individual service endpoints

Ok, here's where this procedure becomes a PITA in my opinion. Replace the ADD_*_SVC_ID text with the service IDs created in the previous step. Make sure that you get the URI strings correct or else you will have to delete the endpoints and add them again.
# keystone endpoint-create --region=RegionOne --service-id=ADD_CINDER_SVC_ID --adminurl='http://192.168.1.110:8776/v1/%(tenant_id)s' --internalurl='http://192.168.1.110:8776/v1/%(tenant_id)s' --publicurl='http://192.168.1.110:8776/v1/%(tenant_id)s'
# keystone endpoint-create --region=RegionOne --service-id=ADD_GLANCE_SVC_ID --adminurl=http://192.168.1.110:9292/ --internalurl=http://192.168.1.110:9292/ --publicurl=http://192.168.1.110:9292/
# keystone endpoint-create --region=RegionOne --service-id=ADD_KEYSTONE_SVC_ID --adminurl=http://192.168.1.110:35357/v2.0 --internalurl=http://192.168.1.110:5000/v2.0 --publicurl=http://192.168.1.110:5000/v2.0
# keystone endpoint-create --region=RegionOne --service-id=ADD_NEUTRON_SVC_ID --adminurl=http://192.168.1.111:9696/ --internalurl=http://192.168.1.111:9696/ --publicurl=http://192.168.1.111:9696/
# keystone endpoint-create --region=RegionOne --service-id=ADD_NOVA_SVC_ID --adminurl='http://192.168.1.110:8774/v2/%(tenant_id)s' --internalurl='http://192.168.1.110:8774/v2/%(tenant_id)s' --publicurl='http://192.168.1.110:8774/v2/%(tenant_id)s'
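keystone endpoint-list shows what you ended up with, and if a URL did come out wrong you can remove the bad entry and recreate it (assuming your client has endpoint-delete; ENDPOINT_ID is a placeholder for the id shown by endpoint-list).
# keystone endpoint-list
# keystone endpoint-delete ENDPOINT_ID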

Clear the cached OpenStack credentials

So Keystone should now be configured for the default OpenStack services. We need to unset the sourced OpenStack credentials and then source your own personal credentials file (which we created earlier in the Create the files step).
# unset OS_SERVICE_TOKEN
# unset OS_SERVICE_ENDPOINT
# source admin.user.creds
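A quick way to confirm the username/password path works is to ask Keystone for a token using the freshly sourced credentials; if this returns a token table you are in good shape.
# keystone token-get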

Glance


File configuration

Four Glance files require configuration with the following values; for this lab I didn't alter any of the other values. Note, for some strange reason the official version of the OpenStack Installation Guide for Ubuntu 12.04 (LTS) does not update the MySQL connection string in either glance-api.conf or glance-registry.conf, and a bug has been filed.

Glance API


Open the /etc/glance/glance-api.conf file with your favorite text editor. Under the [DEFAULT] section change the sql_connection string to point to the Glance database in MySQL.
sql_connection = mysql://glance:password@192.168.1.110/glance
Under the # ============ Notification System Options ===================== subsection replace the RabbitMQ server value (normally localhost) with the IP address of the RabbitMQ server, in my case 192.168.1.110.
rabbit_host = 192.168.1.110
Update the [keystone_authtoken] section to point to the Keystone internal API IP address and change the default admin_tenant_name, admin_user, and admin_password to the correct values for the Glance service account.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Finally enable the flavor property under the [paste_deploy] section and set the value to keystone. Save the file.
flavor = keystone
Open the /etc/glance/glance-api-paste.ini file with a text editor and add the [keystone_authtoken] values to the [filter:authtoken] section; these entries don't exist there by default. Save the file once completed.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Then move on to the Glance Registry configuration.

Glance Registry

Open the /etc/glance/glance-registry.conf file in your favorite text editor and add the same data with the exception of the rabbit_host property. Under the [DEFAULT] section change the sql_connection string to point to the Glance database in MySQL.
sql_connection = mysql://glance:password@192.168.1.110/glance
Add the [keystone_authtoken] section.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Finally enable the flavor property under the [paste_deploy] section, set the value to keystone then save the file.
flavor = keystone
Open the /etc/glance/glance-registry-paste.ini file with a text editor and add the [keystone_authtoken] values to the [filter:authtoken] section, save the file once completed.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
The Glance configuration file updates are done, let's move on to syncing the Glance database and testing the services.

Sync the Glance database

The Glance utility glance-manage has the same database sync option as the Keystone utility we used earlier: db_sync.
# glance-manage db_sync

Glance configuration cleanup and verification

Next we need to restart the glance-api and glance-registry services so that the updated config values will be used.
# restart glance-api && restart glance-registry
Once the services have been restarted successfully you should be able to import an image. The cirros Linux image is really small so let's use it.
# glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=false --location=https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
You should receive the following output indicating a successful import.
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | d972013792949d0d3ba628fbe8685bce     |
| container_format | bare                                 |
| created_at       | 2013-10-25T15:48:19.801268           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 88a35ffe-d00d-46b4-a726-4fcd3d0e52c7 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | None                                 |
| protected        | False                                |
| size             | 13147648                             |
| status           | active                               |
| updated_at       | 2013-10-25T15:48:20.326698           |
+------------------+--------------------------------------+
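You can also confirm the image landed and is active with a listing:
# glance image-list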

Troubleshooting Glance

Log files


If you didn't receive a similar output start by looking at the Glance logs files:
  • /var/log/glance/api.log
  • /var/log/glance/registry.log
The logs may not have any info in them. If not, you can enable verbose or debug logging by editing /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf, uncommenting and setting either of the logging values to True, and restarting the services. For verbose logging do this:
verbose = True
For debug logging do this:
debug = True
Then restart the services.
# restart glance-api
# restart glance-registry
If after the restart there are still no log entries in either log file check the /var/log/upstart/glance-api.log and /var/log/upstart/glance-registry.log log files, they contain the log entries for the Upstart service manager and should tell you why either of the services wouldn't start.

Nova


There are several services in the nova family and all of them use the same two configuration files: /etc/nova/nova.conf and /etc/nova/api-paste.ini. We need to update several values in each file, sync the database, and then restart the services.

Nova configuration

The default /etc/nova/nova.conf config file is, well, sparse. There are no comments and you have to add several property key/value pairs yourself. I'm not entirely sure why the Nova developers went this way, but oh well. Open the /etc/nova/nova.conf file in your editor of choice and update/add the following properties and values.

api-paste file location


Validate that there is a property called api_paste_config and that its value is set to /etc/nova/api-paste.ini. If it doesn't exist, append the location of Nova's api-paste.ini config file at the end of the [DEFAULT] section. Normally this file is located at /etc/nova/api-paste.ini on Ubuntu systems.
api_paste_config = /etc/nova/api-paste.ini

Keystone auth


Create a subsection called Auth and set the authentication strategy to keystone.
# Auth
auth_strategy = keystone

VNC configuration


Create a subsection called VNC and add entries to configure it.
# VNC
novncproxy_base_url=http://192.168.1.110:6080/vnc_auto.html
vnc_enabled = true
vnc_keymap = en_us
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.1.110

RabbitMQ


Create a subsection called Messaging, configure the Nova services to use RabbitMQ for messaging, and point Nova to the RabbitMQ server.
# Messaging
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 192.168.1.110

Neutron


Add a section for the Neutron configuration and copy and paste the text.
# Neutron
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.111:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.110:5000/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Nova metadata


The neutron-metadata-agent acts as a proxy and forwards any requests to the Nova-API. For this process to work properly we need to add a few lines. Note that the value of the neutron_metadata_proxy_shared_secret property must match the value of the metadata_proxy_shared_secret property in the /etc/neutron/metadata_agent.ini file on the host running the neutron-metadata-agent.
...
# Nova Metadata
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack
...

Add the database connection string

We are going to create a new section called [DATABASE] at the end of the file and add the following string; make sure that you replace the IP address with the IP of your environment's MySQL server.
[DATABASE]
connection = mysql://nova:password@192.168.1.110/nova
Save the file and open the /etc/nova/api-paste.ini file for editing. We only need to update the [filter:authtoken] values in this file.
...
[filter:authtoken]
auth_host = 192.168.1.110
...
auth_tenant_name = service
auth_user = nova
auth_password = password

Sync the database

Guess what? Nova has a utility called nova-manage that has a command to sync its database. But the Nova developers went their own way and the command is different from the others.
# nova-manage db sync
The output of the sync will look similar to (if not the same as) the output below:
2013-10-25 18:26:05.174 16766 INFO migrate.versioning.api [-] 132 -> 133...
2013-10-25 18:26:20.030 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:20.031 16766 INFO migrate.versioning.api [-] 133 -> 134...
2013-10-25 18:26:20.999 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.000 16766 INFO migrate.versioning.api [-] 134 -> 135...
2013-10-25 18:26:21.200 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.201 16766 INFO migrate.versioning.api [-] 135 -> 136...
2013-10-25 18:26:21.343 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.344 16766 INFO migrate.versioning.api [-] 136 -> 137...
2013-10-25 18:26:21.486 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.486 16766 INFO migrate.versioning.api [-] 137 -> 138...
2013-10-25 18:26:21.654 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.654 16766 INFO migrate.versioning.api [-] 138 -> 139...
2013-10-25 18:26:21.840 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.840 16766 INFO migrate.versioning.api [-] 139 -> 140...
2013-10-25 18:26:21.873 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.873 16766 INFO migrate.versioning.api [-] 140 -> 141...
2013-10-25 18:26:22.099 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.100 16766 INFO migrate.versioning.api [-] 141 -> 142...
2013-10-25 18:26:22.242 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.242 16766 INFO migrate.versioning.api [-] 142 -> 143...
2013-10-25 18:26:22.477 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.477 16766 INFO migrate.versioning.api [-] 143 -> 144...
2013-10-25 18:26:22.999 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.999 16766 INFO migrate.versioning.api [-] 144 -> 145...
2013-10-25 18:26:23.133 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.133 16766 INFO migrate.versioning.api [-] 145 -> 146...
2013-10-25 18:26:23.317 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.318 16766 INFO migrate.versioning.api [-] 146 -> 147...
2013-10-25 18:26:23.493 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.494 16766 INFO migrate.versioning.api [-] 147 -> 148...
2013-10-25 18:26:23.871 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.872 16766 INFO migrate.versioning.api [-] 148 -> 149...
2013-10-25 18:26:27.506 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:27.506 16766 INFO migrate.versioning.api [-] 149 -> 150...
2013-10-25 18:26:27.867 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:27.868 16766 INFO migrate.versioning.api [-] 150 -> 151...
2013-10-25 18:26:28.229 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:28.229 16766 INFO migrate.versioning.api [-] 151 -> 152...
2013-10-25 18:26:50.678 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:50.679 16766 INFO migrate.versioning.api [-] 152 -> 153...
2013-10-25 18:26:50.754 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:50.754 16766 INFO migrate.versioning.api [-] 153 -> 154...
2013-10-25 18:26:54.856 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:54.856 16766 INFO migrate.versioning.api [-] 154 -> 155...
2013-10-25 18:26:55.007 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.008 16766 INFO migrate.versioning.api [-] 155 -> 156...
2013-10-25 18:26:55.359 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.360 16766 INFO migrate.versioning.api [-] 156 -> 157...
2013-10-25 18:26:55.477 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.477 16766 INFO migrate.versioning.api [-] 157 -> 158...
2013-10-25 18:26:55.636 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.637 16766 INFO migrate.versioning.api [-] 158 -> 159...
2013-10-25 18:26:58.201 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.201 16766 INFO migrate.versioning.api [-] 159 -> 160...
2013-10-25 18:26:58.251 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.252 16766 INFO migrate.versioning.api [-] 160 -> 161...
2013-10-25 18:26:58.310 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.310 16766 INFO migrate.versioning.api [-] 161 -> 162...
2013-10-25 18:26:58.343 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.343 16766 INFO migrate.versioning.api [-] 162 -> 163...
2013-10-25 18:26:58.376 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.377 16766 INFO migrate.versioning.api [-] 163 -> 164...
2013-10-25 18:26:58.401 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.402 16766 INFO migrate.versioning.api [-] 164 -> 165...
2013-10-25 18:26:58.435 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.436 16766 INFO migrate.versioning.api [-] 165 -> 166...
2013-10-25 18:26:58.474 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.475 16766 INFO migrate.versioning.api [-] 166 -> 167...
2013-10-25 18:26:58.661 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.661 16766 INFO migrate.versioning.api [-] 167 -> 168...
2013-10-25 18:26:58.753 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.754 16766 INFO migrate.versioning.api [-] 168 -> 169...
2013-10-25 18:26:58.844 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.845 16766 INFO migrate.versioning.api [-] 169 -> 170...
2013-10-25 18:26:58.894 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.895 16766 INFO migrate.versioning.api [-] 170 -> 171...
2013-10-25 18:26:58.928 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.928 16766 INFO migrate.versioning.api [-] 171 -> 172...
2013-10-25 18:26:59.205 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.205 16766 INFO migrate.versioning.api [-] 172 -> 173...
2013-10-25 18:26:59.465 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.466 16766 INFO migrate.versioning.api [-] 173 -> 174...
2013-10-25 18:26:59.651 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.651 16766 INFO migrate.versioning.api [-] 174 -> 175...
2013-10-25 18:27:00.305 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.306 16766 INFO migrate.versioning.api [-] 175 -> 176...
2013-10-25 18:27:00.498 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.498 16766 INFO migrate.versioning.api [-] 176 -> 177...
2013-10-25 18:27:00.658 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.658 16766 INFO migrate.versioning.api [-] 177 -> 178...
2013-10-25 18:27:00.851 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.851 16766 INFO migrate.versioning.api [-] 178 -> 179...
2013-10-25 18:27:01.338 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:01.339 16766 INFO migrate.versioning.api [-] 179 -> 180...
2013-10-25 18:27:02.178 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.179 16766 INFO migrate.versioning.api [-] 180 -> 181...
2013-10-25 18:27:02.657 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.657 16766 INFO migrate.versioning.api [-] 181 -> 182...
2013-10-25 18:27:02.859 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.859 16766 INFO migrate.versioning.api [-] 182 -> 183...
2013-10-25 18:27:02.968 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.968 16766 INFO migrate.versioning.api [-] 183 -> 184...
2013-10-25 18:27:05.454 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:05.454 16766 INFO migrate.versioning.api [-] 184 -> 185...
2013-10-25 18:27:08.377 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:08.377 16766 INFO migrate.versioning.api [-] 185 -> 186...
2013-10-25 18:27:11.591 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:11.591 16766 INFO migrate.versioning.api [-] 186 -> 187...
2013-10-25 18:27:12.565 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:12.565 16766 INFO migrate.versioning.api [-] 187 -> 188...
2013-10-25 18:27:12.909 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:12.909 16766 INFO migrate.versioning.api [-] 188 -> 189...
2013-10-25 18:27:13.060 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.061 16766 INFO migrate.versioning.api [-] 189 -> 190...
2013-10-25 18:27:13.211 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.212 16766 INFO migrate.versioning.api [-] 190 -> 191...
2013-10-25 18:27:13.371 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.371 16766 INFO migrate.versioning.api [-] 191 -> 192...
2013-10-25 18:27:13.733 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.733 16766 INFO migrate.versioning.api [-] 192 -> 193...
2013-10-25 18:27:14.162 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:14.163 16766 INFO migrate.versioning.api [-] 193 -> 194...
2013-10-25 18:27:17.698 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:17.698 16766 INFO migrate.versioning.api [-] 194 -> 195...
2013-10-25 18:27:17.975 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:17.975 16766 INFO migrate.versioning.api [-] 195 -> 196...
2013-10-25 18:27:18.253 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.253 16766 INFO migrate.versioning.api [-] 196 -> 197...
2013-10-25 18:27:18.413 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.413 16766 INFO migrate.versioning.api [-] 197 -> 198...
2013-10-25 18:27:18.564 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.564 16766 INFO migrate.versioning.api [-] 198 -> 199...
2013-10-25 18:27:18.740 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.740 16766 INFO migrate.versioning.api [-] 199 -> 200...
2013-10-25 18:27:21.408 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.409 16766 INFO migrate.versioning.api [-] 200 -> 201...
2013-10-25 18:27:21.475 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.475 16766 INFO migrate.versioning.api [-] 201 -> 202...
2013-10-25 18:27:21.634 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.635 16766 INFO migrate.versioning.api [-] 202 -> 203...
2013-10-25 18:27:22.878 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:22.878 16766 INFO migrate.versioning.api [-] 203 -> 204...
2013-10-25 18:27:23.029 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.029 16766 INFO migrate.versioning.api [-] 204 -> 205...
2013-10-25 18:27:23.382 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.383 16766 INFO migrate.versioning.api [-] 205 -> 206...
2013-10-25 18:27:23.902 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.903 16766 INFO migrate.versioning.api [-] 206 -> 207...
2013-10-25 18:27:24.129 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:24.130 16766 INFO migrate.versioning.api [-] 207 -> 208...
2013-10-25 18:27:24.768 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:24.769 16766 INFO migrate.versioning.api [-] 208 -> 209...
2013-10-25 18:27:26.086 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.087 16766 INFO migrate.versioning.api [-] 209 -> 210...
2013-10-25 18:27:26.380 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.381 16766 INFO migrate.versioning.api [-] 210 -> 211...
2013-10-25 18:27:26.607 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.607 16766 INFO migrate.versioning.api [-] 211 -> 212...
2013-10-25 18:27:26.850 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.851 16766 INFO migrate.versioning.api [-] 212 -> 213...
2013-10-25 18:27:27.615 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:27.616 16766 INFO migrate.versioning.api [-] 213 -> 214...
2013-10-25 18:27:28.279 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:28.280 16766 INFO migrate.versioning.api [-] 214 -> 215...
2013-10-25 18:27:28.321 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:28.321 16766 INFO migrate.versioning.api [-] 215 -> 216...
2013-10-25 18:27:28.372 16766 INFO migrate.versioning.api [-] done

Cleanup and Nova validation

Restart services


We need to restart all of the Nova services so that they use the correct values. mseknibilel's OpenStack Grizzly Install Guide provided me with an easier method than calling out each service.
# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Just make sure that you recognize that you are now in the /etc/init.d directory, not your home directory. :)

Nova service validation


Validating the Nova services is pretty easy; it's a single command using the nova-manage utility. Look for the :-) under the State column; xxx in the State column indicates a failed service.
# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        havanafe                             internal         enabled    :-)   2013-10-25 18:34:02
nova-conductor   havanafe                             internal         enabled    :-)   2013-10-25 18:34:03
nova-consoleauth havanafe                             internal         enabled    :-)   2013-10-25 18:34:03
nova-scheduler   havanafe                             internal         enabled    :-)   2013-10-25 18:34:03

Nova troubleshooting

Log files


If you see xxx in the State column the best place to start is the log files. There are several, and they are all located in the /var/log/nova directory. I recommend starting with the log file of the failed service (or services if you are really unlucky). Just as with Glance you may want to enable verbose or debug logging. Fyi, verbose logging is enabled out of the box for Nova; if you want to enable debug logging you will need to add an entry to the /etc/nova/nova.conf file and then restart the services.
debug = True
Note, when Nova is set to debug logging the nova-manage service list output looks like this:
# nova-manage service list
2013-10-25 18:42:53.183 16911 DEBUG nova.servicegroup.api [-] ServiceGroup driver defined as an instance of db __new__ /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:62
2013-10-25 18:42:53.258 16911 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:84
2013-10-25 18:42:53.275 16911 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:84
2013-10-25 18:42:53.277 16911 DEBUG nova.openstack.common.lockutils [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Got semaphore "dbapi_backend" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:166
2013-10-25 18:42:53.277 16911 DEBUG nova.openstack.common.lockutils [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Got semaphore / lock "__get_backend" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:245
Binary           Host                                 Zone             Status     State Updated_At
2013-10-25 18:42:53.643 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-cert', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 12), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'cert', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 1L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.643 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.643761 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-cert        havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.644 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-conductor', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 12), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'conductor', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 2L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.645005 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-conductor   havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-consoleauth', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 13), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'consoleauth', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 3L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.645806 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-consoleauth havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.646 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-scheduler', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 13), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 43), 'report_count': 57L, 'topic': u'scheduler', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 4L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.646 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:43 elapsed = 10.646609 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-scheduler   havanafe                             internal         enabled    :-)   2013-10-25 18:42:43
Fun huh?

As with Glance if the log files aren't providing any information look through the /var/log/upstart/nova-*.log files for what the errors are.

Horizon (OpenStack Dashboard)


The dashboard is optional but IMHO it just makes life easier. In any case it just provides a GUI front-end to the APIs; there's nothing unique about how it calls things.

Update the Django config

OpenStack Horizon uses Django as the web framework, you can find out more information about it at their website. Open the /etc/openstack-dashboard/local_settings.py file in an editor then follow these steps.

Verify the time zone


In /etc/openstack-dashboard/local_settings.py, find the TIME_ZONE property and make sure it is set to UTC; if it isn't, configure it as such.
TIME_ZONE = "UTC"

Set the host IP


In the same file, look for the OPENSTACK_HOST property; we need to change the value to the IP address/DNS A record of the Keystone server, in my case 192.168.1.110.
OPENSTACK_HOST = "192.168.1.110"
Save the file.
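A quick grep is an easy way to confirm both values stuck before touching Apache; expect a few matching lines since OPENSTACK_HOST may be referenced elsewhere in the file.
# grep -E 'TIME_ZONE|OPENSTACK_HOST' /etc/openstack-dashboard/local_settings.py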

Remove the Ubuntu theme

It's a bigger PITA than it is worth IMHO and it blows up the nice network map display. Removing it will also restart the Apache service, so the config changes will be picked up.
# apt-get purge openstack-dashboard-ubuntu-theme
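If you decide to keep the theme, or you change local_settings.py again later, just bounce Apache yourself so the new settings get picked up.
# service apache2 restart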

Validate Horizon

Open up a web browser and connect to http://192.168.1.110/horizon. Expect to get an error message when you log in because we haven't configured Neutron yet. That's the next post.