Monday, November 11, 2013

Part 1 - How to install OpenStack Havana on Ubuntu - Prerequisites, Horizon, Keystone, and Nova

This post is the first of a multi-part series describing how I installed and configured OpenStack Havana within a test environment. This first post covers the method I used to install and configure the base OS, the OpenStack prerequisites, and Keystone, Glance, Nova, and Horizon. There are additional posts coming that will describe the install and configuration of Neutron, Cinder, Heat, and Ceilometer.

This effort is basically so I can get acquainted with the overall installation and determine whether there are any differences between the Grizzly install and the Havana install (spoiler alert - there are). This configuration screams PoC, so please keep that in mind.

Here's my test environment:
  • Ubuntu 12.04.3 LTS as the OS on all nodes
  • KVM as the hypervisor
  • MySQL hosts the databases
  • Neutron is used for networking
  • Keystone is used for authentication with MySQL as the backend
  • ML2 is the core L2 plugin
  • Open vSwitch is the L2 agent
  • UTC is the configured time zone on all of the nodes

Also, I want to give credit where credit is due, this post includes steps from multiple places plus my own additions/findings.

Note, I ran all of the commands as the superuser. If you aren't comfortable running as root, just prefix each command with sudo.

The "design"


As I mentioned, this design is really for a lab or PoC; I do not recommend mimicking it for production use. I am hosting the lab on a dedicated (read: non-OpenStack-controlled) KVM server and will use a second KVM server to act as the nova-compute node.
  • A compute "controller" VM called havana-wfe will run the majority of the services (listed below). It has been configured with a single network device and assigned the IP address 192.168.1.110.
    • apache2+django
    • cinder-api, cinder-scheduler
    • glance-api, glance-registry
    • horizon
    • keystone
    • memcached
    • mysql
    • nova-api, nova-cert, nova-conductor, nova-consoleauth, nova-novncproxy, nova-scheduler
    • rabbitmq
  • A network "controller" VM called havana-network will run neutron-server, neutron-dhcp-agent, neutron-l3-agent, neutron-plugin-openvswitch-agent, and neutron-metadata-agent. It has been configured with three network devices:
    • eth0 provides management traffic communication to and from the other OpenStack agents and is assigned 192.168.1.111
    • eth1 hosts the GRE tunnels and is assigned 172.16.0.10
    • eth2 will be used by the l3-agent for external connectivity in and out of the OpenStack cloud and is therefore configured to run in promiscuous mode
  • The second KVM server is called nova-compute and will run the nova-compute and neutron-plugin-openvswitch-agent services. It is configured with two network devices:
    • eth0 is used for management and assigned the IP address 192.168.1.113
    • eth1 hosts the GRE tunnels and is assigned the IP address 172.16.0.11
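Everything below is configured by IP address, but if you prefer hostnames you could add /etc/hosts entries along these lines on each node (a convenience only, using the names from the design above):
192.168.1.110   havana-wfe
192.168.1.111   havana-network
192.168.1.113   nova-compute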

Prerequisites


Here's the list of things I did before the OpenStack install:

Install Ubuntu 12.04.2

Personally I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation, follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots log back in via SSH or the console and elevate to superuser.

Install the primary and supporting OpenStack Ubuntu packages

Once apt-get is configured to retrieve the correct OpenStack packages we can go ahead and start installing them. I chose to install as much as possible at once, but if you aren't comfortable with this feel free to break the package installation up into chunks.
# apt-get -y install mysql-server python-mysqldb rabbitmq-server ntp keystone python-keystone python-keystoneclient glance nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-ajax-console-proxy python-novaclient cinder-api cinder-scheduler openstack-dashboard memcached libapache2-mod-wsgi

Configure the supporting services

MySQL, NTP, and RabbitMQ were just installed. Only MySQL needs configuration; I'm leaving RabbitMQ and NTP as-is for now.

Update the MySQL bind address


MySQL maintains most of its configuration in the file /etc/mysql/my.cnf. By default MySQL only allows local connections, so we need to update the bind-address value to allow remote connections. Edit the my.cnf file and replace the existing value (more than likely it is 127.0.0.1) with the primary IP address of the MySQL server.
bind-address = 192.168.1.110
Once the config file has been updated restart MySQL.
# service mysql restart
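To confirm that MySQL is now listening on the management address instead of loopback, something like this should show 192.168.1.110:3306 among the listening sockets:
# netstat -ntlp | grep 3306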

Secure MySQL


It's good practice to remove the basic security flaw...oops, I mean "features". The mysql_secure_installation script asks you to change the root password, removes the default anonymous accounts, disables the ability for the root account to log in remotely to MySQL, and drops the test databases.
# mysql_secure_installation

Create the databases


We need to log into MySQL now to create the databases for some of the OpenStack services. Ceilometer uses MongoDB and Swift doesn't use a database.
# mysql -u root -p
Create each database, then create the unique user, set the password, and assign privileges for the new user to the corresponding database. If you want, you can replace the username and password values with whatever you want the service's username and password to be; just make sure that you update the service's configuration file later in the install.

Cinder

mysql> CREATE DATABASE cinder;  
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'password';

Glance

mysql> CREATE DATABASE glance;  
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';

Keystone

mysql> CREATE DATABASE keystone;  
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'password';

Neutron

mysql> CREATE DATABASE neutron;  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';

Nova

mysql> CREATE DATABASE nova;  
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';  
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';

Cleanup

mysql> FLUSH PRIVILEGES;
mysql> QUIT;
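If you'd rather not type each statement interactively, the whole set above can be fed to MySQL in one shot with a small shell loop. This is just a convenience sketch and assumes the same 'password' value for every service:
# for db in cinder glance keystone neutron nova; do \
    echo "CREATE DATABASE $db;"; \
    echo "GRANT ALL PRIVILEGES ON $db.* TO '$db'@'localhost' IDENTIFIED BY 'password';"; \
    echo "GRANT ALL PRIVILEGES ON $db.* TO '$db'@'%' IDENTIFIED BY 'password';"; \
  done | mysql -u root -p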

Create two OpenStack credential files


Yeah, I know it isn't secure but it's a lab and this way I don't have to type as much.

Create a random seed


First let's create a random character string to act as the token password. The returned value is the seed we will use as a "password" to access Keystone prior to creating any accounts.
# openssl rand -hex 10
4fcadbe846130de04cfc

Update the Keystone admin_token value


We need to replace the existing admin_token value (it's probably ADMIN) under the [DEFAULT] section in the /etc/keystone/keystone.conf file with the OpenSSL seed. It should be the first property you see.
admin_token = 4fcadbe846130de04cfc
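If you want to script this instead of editing by hand, a minimal sketch (assuming the admin_token line is present and uncommented, as described above) would be:
# ADMIN_TOKEN=$(openssl rand -hex 10)
# sed -i "s/^admin_token.*/admin_token = $ADMIN_TOKEN/" /etc/keystone/keystone.conf
# echo $ADMIN_TOKEN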

Create the files


Next we create two new text files somewhere on the Keystone server and add the following entries. Using these values allows one to bypass the username/password credential requirement for Keystone. Make sure that the OS_SERVICE_TOKEN value matches the admin_token value in the /etc/keystone/keystone.conf file. I named this file admin.token.creds, where admin refers to the user, token refers to the auth method, and creds indicates that the file is a credentials file.
export OS_SERVICE_TOKEN=4fcadbe846130de04cfc
export OS_SERVICE_ENDPOINT=http://192.168.1.110:35357/v2.0
I named this file admin.user.creds, where the first admin refers to the OS_TENANT_NAME, the second admin refers to the OS_USERNAME, user refers to the auth method, and creds indicates that the file is a credentials file.
export OS_AUTH_URL=http://192.168.1.110:5000/v2.0
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password

Add the authentication values to your profile


Let's add the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT values to your shell environment. We have to present a token to interact with the OpenStack services, so by using these two values we are effectively bypassing the password requirement for a user. Note that we are also authenticating against the Keystone admin URL, not the internal URL.
# source admin.token.creds
We should now be able to interact with Keystone once it is configured.

Keystone


Update the configuration file

Open /etc/keystone/keystone.conf with a text editor and update the connection value in the [sql] section to the correct string.
connection = mysql://keystone:password@192.168.1.110/keystone

Import the schema

We need to create all of the tables and configure them; the Keystone developers created a specific command to do this.
# keystone-manage db_sync

Restart Keystone

Restarting the Keystone service needs to occur prior to the next steps so that the updated admin_token and database connection values are picked up. We could probably do this earlier in the process but I'd prefer to not have to deal with errors.
# restart keystone
You can also verify that Keystone has restarted correctly by tailing the log files.
# tail -f /var/log/keystone/keystone.log
--OR--
# tail -f /var/log/upstart/keystone.log
Look for two entries, one for starting the admin endpoint and a second referencing the public endpoint.
2013-11-12 18:28:48.240 3724 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2013-11-12 18:28:48.241 3724 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000

Create the objects

We need to populate Keystone with tenants, users, roles, and services and then configure each of them. FYI, there's an easier way than the manual method: mseknibilel very graciously created two Keystone population scripts that are referred to in his OpenStack Grizzly Install Guide. If you aren't interested in the manual way you can go that route, but I haven't tried them with Havana yet so YMMV.

Create the tenants

# keystone tenant-create --name=admin
# keystone tenant-create --name=service
Note the service tenant ID; you will need it below when you create the individual service users.
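If you'd rather not copy the ID by hand, here's a quick sketch that captures it from the CLI table output into a shell variable (you can then use $SVC_TENANT_ID wherever ADD_SVC_TENANT_ID appears below):
# SVC_TENANT_ID=$(keystone tenant-list | awk '/ service / {print $2}')
# echo $SVC_TENANT_ID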

Create the admin user

# keystone user-create --name=admin --pass=password --email=admin@revolutionlabs.net

Create the base roles


(I need to figure out why we are creating two separate roles for Keystone "administration". I think it's for separation of duties using the policy.json files between and within the admin tenant and service tenant, but I very well could be wrong. Those two roles aren't really used anywhere that I can see.)
# keystone role-create --name=admin
# keystone role-create --name=KeystoneAdmin
# keystone role-create --name=KeystoneServiceAdmin
# keystone role-create --name=Member

Assign the roles to the admin user


(Hey python-keystoneclient developers, it would be nice to get a confirmation or something stating that the role assignment worked...)
# keystone user-role-add --tenant=admin --user=admin --role=admin
# keystone user-role-add --tenant=admin --user=admin --role=KeystoneAdmin
# keystone user-role-add --tenant=admin --user=admin --role=KeystoneServiceAdmin
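Since the client stays silent, one way to double-check that the assignments took is to list the admin user's roles in the admin tenant; you should see admin, KeystoneAdmin, and KeystoneServiceAdmin in the output.
# keystone user-role-list --user admin --tenant admin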

Create the individual service users


Replace the ADD_SVC_TENANT_ID text with the service tenant ID from the Create the tenants step above.
# keystone user-create --name=cinder --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=cinder@revolutionlabs.net
# keystone user-create --name=glance --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=glance@revolutionlabs.net
# keystone user-create --name=neutron --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=neutron@revolutionlabs.net
# keystone user-create --name=nova --pass=password --tenant-id=ADD_SVC_TENANT_ID --email=nova@revolutionlabs.net

Assign the admin role to each of the service users

# keystone user-role-add --tenant=service --user=cinder --role=admin
# keystone user-role-add --tenant=service --user=glance --role=admin
# keystone user-role-add --tenant=service --user=neutron --role=admin
# keystone user-role-add --tenant=service --user=nova --role=admin

Create the individual services

# keystone service-create --name=cinder --type=volume
# keystone service-create --name=glance --type=image
# keystone service-create --name=keystone --type=identity
# keystone service-create --name=neutron --type=network
# keystone service-create --name=nova --type=compute
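The endpoint step below needs the ID of each service. Rather than copying them out of the tables, a sketch like this captures them into shell variables (same awk trick as before, assuming the default table output), so you can pass, for example, --service-id=$NOVA_SVC_ID below:
# CINDER_SVC_ID=$(keystone service-list | awk '/ cinder / {print $2}')
# GLANCE_SVC_ID=$(keystone service-list | awk '/ glance / {print $2}')
# KEYSTONE_SVC_ID=$(keystone service-list | awk '/ keystone / {print $2}')
# NEUTRON_SVC_ID=$(keystone service-list | awk '/ neutron / {print $2}')
# NOVA_SVC_ID=$(keystone service-list | awk '/ nova / {print $2}')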

Create the individual service endpoints

OK, here's where this procedure becomes a PITA in my opinion. Replace the ADD_*_SVC_ID text with the service IDs created in the previous step. Make sure that you get the URI strings correct, else you will have to delete the endpoints and add them again.
# keystone endpoint-create --region=RegionOne --service-id=ADD_CINDER_SVC_ID --adminurl='http://192.168.1.110:8776/v1/%(tenant_id)s' --internalurl='http://192.168.1.110:8776/v1/%(tenant_id)s' --publicurl='http://192.168.1.110:8776/v1/%(tenant_id)s'
# keystone endpoint-create --region=RegionOne --service-id=ADD_GLANCE_SVC_ID --adminurl=http://192.168.1.110:9292/ --internalurl=http://192.168.1.110:9292/ --publicurl=http://192.168.1.110:9292/
# keystone endpoint-create --region=RegionOne --service-id=ADD_KEYSTONE_SVC_ID --adminurl=http://192.168.1.110:35357/v2.0 --internalurl=http://192.168.1.110:5000/v2.0 --publicurl=http://192.168.1.110:5000/v2.0
# keystone endpoint-create --region=RegionOne --service-id=ADD_NEUTRON_SVC_ID --adminurl=http://192.168.1.111:9696/ --internalurl=http://192.168.1.111:9696/ --publicurl=http://192.168.1.111:9696/
# keystone endpoint-create --region=RegionOne --service-id=ADD_NOVA_SVC_ID --adminurl='http://192.168.1.110:8774/v2/%(tenant_id)s' --internalurl='http://192.168.1.110:8774/v2/%(tenant_id)s' --publicurl='http://192.168.1.110:8774/v2/%(tenant_id)s'
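If one of the URLs does come out wrong you don't have to start over; list the endpoints, delete the bad one by its endpoint ID (the first column, not the service ID), and re-run the matching endpoint-create command.
# keystone endpoint-list
# keystone endpoint-delete ENDPOINT_ID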

Clear the cached OpenStack credentials

Keystone should now be configured for the default OpenStack services. We need to unset the sourced OpenStack credentials and then source the personal credentials file we created earlier (admin.user.creds, from the Create the files section).
# unset OS_SERVICE_TOKEN
# unset OS_SERVICE_ENDPOINT
# source admin.user.creds
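As a quick sanity check that password-based authentication now works, list the users; this should return the admin and service users without any token-related errors.
# keystone user-list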

Glance


File configuration

Four Glance files require configuration with the following values; for this lab I didn't alter any of the other values. Note, for some strange reason the official version of the OpenStack Installation Guide for Ubuntu 12.04 (LTS) does not update the MySQL connection string in either glance-api.conf or glance-registry.conf, and a bug has been filed.

Glance API


Open the /etc/glance/glance-api.conf file with your favorite text editor. Under the [DEFAULT] section change the sql_connection string to point to the Glance database in MySQL.
sql_connection = mysql://glance:password@192.168.1.110/glance
Under the # ============ Notification System Options ===================== subsection, replace the RabbitMQ server value (normally localhost) with the IP address of the RabbitMQ server, in my case 192.168.1.110.
rabbit_host = 192.168.1.110
Update the [keystone_authtoken] section to point to the Keystone internal API IP address and change the default admin_tenant_name, admin_user, and admin_password to the correct values for the Glance service account.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Finally enable the flavor property under the [paste_deploy] section and set the value to keystone. Save the file.
flavor = keystone
Open the /etc/glance/glance-api-paste.ini file with a text editor and add the [keystone_authtoken] values to the [filter:authtoken] section; they aren't normally present by default.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Save the file and move on to the Glance Registry configuration.

Glance Registry

Open the /etc/glance/glance-registry.conf file in your favorite text editor and add the same data, with the exception of the rabbit_host property. Under the [DEFAULT] section change the sql_connection string to point to the Glance database in MySQL.
sql_connection = mysql://glance:password@192.168.1.110/glance
Add the [keystone_authtoken] section.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
Finally enable the flavor property under the [paste_deploy] section, set the value to keystone then save the file.
flavor = keystone
Open the /etc/glance/glance-registry-paste.ini file with a text editor and add the [keystone_authtoken] values to the [filter:authtoken] section. Save the file once completed.
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = glance
admin_password = password
The Glance configuration file updates are done, let's move on to syncing the Glance database and testing the services.

Sync the Glance database

The Glance utility glance-manage has the same database sync option as the Keystone utility we used earlier: db_sync.
# glance-manage db_sync

Glance configuration cleanup and verification

Next we need to restart the glance-api and glance-registry services so that the updated config values will be used.
# restart glance-api && restart glance-registry
Once the services have been restarted successfully you should be able to import an image. The cirros Linux image is really small so let's use it.
# glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=false --location=https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
You should receive the following output indicating a successful import.
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | d972013792949d0d3ba628fbe8685bce     |
| container_format | bare                                 |
| created_at       | 2013-10-25T15:48:19.801268           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 88a35ffe-d00d-46b4-a726-4fcd3d0e52c7 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | None                                 |
| protected        | False                                |
| size             | 13147648                             |
| status           | active                               |
| updated_at       | 2013-10-25T15:48:20.326698           |
+------------------+--------------------------------------+
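As a quick follow-up check, list the images; cirros should show up with an active status.
# glance image-list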

Troubleshooting Glance

Log files


If you didn't receive similar output, start by looking at the Glance log files:
  • /var/log/glance/api.log
  • /var/log/glance/registry.log
The logs may not have any info in them. If not, you can enable verbose or debug logging by editing /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf, uncommenting and setting either of the logging values to True, and restarting the services. For verbose logging do this:
verbose = True
For debug logging do this:
debug = True
Then restart the services.
# restart glance-api
# restart glance-registry
If after the restart there are still no log entries in either log file, check the /var/log/upstart/glance-api.log and /var/log/upstart/glance-registry.log files; they contain the log entries from the Upstart service manager and should tell you why either of the services wouldn't start.

Nova


There are several services in the nova family and all of them use the same two configuration files: /etc/nova/nova.conf and /etc/nova/api-paste.ini. We need to update several values in each file, sync the database, and then restart the services.

Nova configuration

The default /etc/nova/nova.conf config file is, well, sparse. There are no comments and you have to add several key/value pairs yourself. I'm not entirely sure why the Nova developers went this way, but oh well. Open the /etc/nova/nova.conf file in your editor of choice and update/add the following properties and values.

api-paste file location


Validate that there is a property called api_paste_config and that its value is set to /etc/nova/api-paste.ini. If it doesn't exist, append the location of Nova's api-paste.ini config file at the end of the [DEFAULT] section. Normally this file is located at /etc/nova/api-paste.ini on Ubuntu systems.
api_paste_config = /etc/nova/api-paste.ini

Keystone auth


Create a subsection called Auth and set the authentication strategy to keystone.
# Auth
auth_strategy = keystone

VNC configuration


Create a subsection called VNC and add entries to configure it.
# VNC
novncproxy_base_url=http://192.168.1.110:6080/vnc_auto.html
vnc_enabled = true
vnc_keymap = en_us
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.1.110

RabbitMQ


Create a subsection called Messaging and configure the Nova services to use RabbitMQ for messaging, pointing Nova at the RabbitMQ server.
# Messaging
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 192.168.1.110

Neutron


Add a section for the Neutron configuration and copy and paste the text.
# Neutron
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.111:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.110:5000/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Nova metadata


The neutron-metadata-agent acts as a proxy and forwards metadata requests to the Nova API. For this process to work properly we need to add a few lines. Note that the value of the neutron_metadata_proxy_shared_secret property must match the value of the metadata_proxy_shared_secret property in the /etc/neutron/metadata_agent.ini file on the host running the neutron-metadata-agent.
...
# Nova Metadata
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack
...
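For reference, the matching setting on the network node (configured in the upcoming Neutron post) lives in /etc/neutron/metadata_agent.ini and must carry the same secret:
metadata_proxy_shared_secret = helloOpenStack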

Add the database connection string

We are going to create a new section called [DATABASE] at the end of the file and add the following string. Make sure that you replace the IP address with that of your environment's MySQL server.
[DATABASE]
connection = mysql://nova:password@192.168.1.110/nova
Save the file and open the /etc/nova/api-paste.ini file for editing. We only need to update the [filter:authtoken] values in this file.
...
[filter:authtoken]
auth_host = 192.168.1.110
...
admin_tenant_name = service
admin_user = nova
admin_password = password

Sync the database

Guess what? Nova has a utility called nova-manage with a command that syncs its database. But the Nova developers went their own way and the syntax is slightly different from the others: db sync instead of db_sync.
# nova-manage db sync
The output of the sync will look similar (if not the same) as below:
2013-10-25 18:26:05.174 16766 INFO migrate.versioning.api [-] 132 -> 133...
2013-10-25 18:26:20.030 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:20.031 16766 INFO migrate.versioning.api [-] 133 -> 134...
2013-10-25 18:26:20.999 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.000 16766 INFO migrate.versioning.api [-] 134 -> 135...
2013-10-25 18:26:21.200 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.201 16766 INFO migrate.versioning.api [-] 135 -> 136...
2013-10-25 18:26:21.343 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.344 16766 INFO migrate.versioning.api [-] 136 -> 137...
2013-10-25 18:26:21.486 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.486 16766 INFO migrate.versioning.api [-] 137 -> 138...
2013-10-25 18:26:21.654 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.654 16766 INFO migrate.versioning.api [-] 138 -> 139...
2013-10-25 18:26:21.840 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.840 16766 INFO migrate.versioning.api [-] 139 -> 140...
2013-10-25 18:26:21.873 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:21.873 16766 INFO migrate.versioning.api [-] 140 -> 141...
2013-10-25 18:26:22.099 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.100 16766 INFO migrate.versioning.api [-] 141 -> 142...
2013-10-25 18:26:22.242 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.242 16766 INFO migrate.versioning.api [-] 142 -> 143...
2013-10-25 18:26:22.477 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.477 16766 INFO migrate.versioning.api [-] 143 -> 144...
2013-10-25 18:26:22.999 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:22.999 16766 INFO migrate.versioning.api [-] 144 -> 145...
2013-10-25 18:26:23.133 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.133 16766 INFO migrate.versioning.api [-] 145 -> 146...
2013-10-25 18:26:23.317 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.318 16766 INFO migrate.versioning.api [-] 146 -> 147...
2013-10-25 18:26:23.493 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.494 16766 INFO migrate.versioning.api [-] 147 -> 148...
2013-10-25 18:26:23.871 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:23.872 16766 INFO migrate.versioning.api [-] 148 -> 149...
2013-10-25 18:26:27.506 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:27.506 16766 INFO migrate.versioning.api [-] 149 -> 150...
2013-10-25 18:26:27.867 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:27.868 16766 INFO migrate.versioning.api [-] 150 -> 151...
2013-10-25 18:26:28.229 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:28.229 16766 INFO migrate.versioning.api [-] 151 -> 152...
2013-10-25 18:26:50.678 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:50.679 16766 INFO migrate.versioning.api [-] 152 -> 153...
2013-10-25 18:26:50.754 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:50.754 16766 INFO migrate.versioning.api [-] 153 -> 154...
2013-10-25 18:26:54.856 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:54.856 16766 INFO migrate.versioning.api [-] 154 -> 155...
2013-10-25 18:26:55.007 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.008 16766 INFO migrate.versioning.api [-] 155 -> 156...
2013-10-25 18:26:55.359 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.360 16766 INFO migrate.versioning.api [-] 156 -> 157...
2013-10-25 18:26:55.477 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.477 16766 INFO migrate.versioning.api [-] 157 -> 158...
2013-10-25 18:26:55.636 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:55.637 16766 INFO migrate.versioning.api [-] 158 -> 159...
2013-10-25 18:26:58.201 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.201 16766 INFO migrate.versioning.api [-] 159 -> 160...
2013-10-25 18:26:58.251 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.252 16766 INFO migrate.versioning.api [-] 160 -> 161...
2013-10-25 18:26:58.310 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.310 16766 INFO migrate.versioning.api [-] 161 -> 162...
2013-10-25 18:26:58.343 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.343 16766 INFO migrate.versioning.api [-] 162 -> 163...
2013-10-25 18:26:58.376 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.377 16766 INFO migrate.versioning.api [-] 163 -> 164...
2013-10-25 18:26:58.401 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.402 16766 INFO migrate.versioning.api [-] 164 -> 165...
2013-10-25 18:26:58.435 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.436 16766 INFO migrate.versioning.api [-] 165 -> 166...
2013-10-25 18:26:58.474 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.475 16766 INFO migrate.versioning.api [-] 166 -> 167...
2013-10-25 18:26:58.661 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.661 16766 INFO migrate.versioning.api [-] 167 -> 168...
2013-10-25 18:26:58.753 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.754 16766 INFO migrate.versioning.api [-] 168 -> 169...
2013-10-25 18:26:58.844 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.845 16766 INFO migrate.versioning.api [-] 169 -> 170...
2013-10-25 18:26:58.894 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.895 16766 INFO migrate.versioning.api [-] 170 -> 171...
2013-10-25 18:26:58.928 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:58.928 16766 INFO migrate.versioning.api [-] 171 -> 172...
2013-10-25 18:26:59.205 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.205 16766 INFO migrate.versioning.api [-] 172 -> 173...
2013-10-25 18:26:59.465 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.466 16766 INFO migrate.versioning.api [-] 173 -> 174...
2013-10-25 18:26:59.651 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:26:59.651 16766 INFO migrate.versioning.api [-] 174 -> 175...
2013-10-25 18:27:00.305 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.306 16766 INFO migrate.versioning.api [-] 175 -> 176...
2013-10-25 18:27:00.498 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.498 16766 INFO migrate.versioning.api [-] 176 -> 177...
2013-10-25 18:27:00.658 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.658 16766 INFO migrate.versioning.api [-] 177 -> 178...
2013-10-25 18:27:00.851 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:00.851 16766 INFO migrate.versioning.api [-] 178 -> 179...
2013-10-25 18:27:01.338 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:01.339 16766 INFO migrate.versioning.api [-] 179 -> 180...
2013-10-25 18:27:02.178 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.179 16766 INFO migrate.versioning.api [-] 180 -> 181...
2013-10-25 18:27:02.657 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.657 16766 INFO migrate.versioning.api [-] 181 -> 182...
2013-10-25 18:27:02.859 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.859 16766 INFO migrate.versioning.api [-] 182 -> 183...
2013-10-25 18:27:02.968 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:02.968 16766 INFO migrate.versioning.api [-] 183 -> 184...
2013-10-25 18:27:05.454 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:05.454 16766 INFO migrate.versioning.api [-] 184 -> 185...
2013-10-25 18:27:08.377 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:08.377 16766 INFO migrate.versioning.api [-] 185 -> 186...
2013-10-25 18:27:11.591 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:11.591 16766 INFO migrate.versioning.api [-] 186 -> 187...
2013-10-25 18:27:12.565 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:12.565 16766 INFO migrate.versioning.api [-] 187 -> 188...
2013-10-25 18:27:12.909 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:12.909 16766 INFO migrate.versioning.api [-] 188 -> 189...
2013-10-25 18:27:13.060 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.061 16766 INFO migrate.versioning.api [-] 189 -> 190...
2013-10-25 18:27:13.211 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.212 16766 INFO migrate.versioning.api [-] 190 -> 191...
2013-10-25 18:27:13.371 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.371 16766 INFO migrate.versioning.api [-] 191 -> 192...
2013-10-25 18:27:13.733 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:13.733 16766 INFO migrate.versioning.api [-] 192 -> 193...
2013-10-25 18:27:14.162 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:14.163 16766 INFO migrate.versioning.api [-] 193 -> 194...
2013-10-25 18:27:17.698 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:17.698 16766 INFO migrate.versioning.api [-] 194 -> 195...
2013-10-25 18:27:17.975 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:17.975 16766 INFO migrate.versioning.api [-] 195 -> 196...
2013-10-25 18:27:18.253 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.253 16766 INFO migrate.versioning.api [-] 196 -> 197...
2013-10-25 18:27:18.413 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.413 16766 INFO migrate.versioning.api [-] 197 -> 198...
2013-10-25 18:27:18.564 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.564 16766 INFO migrate.versioning.api [-] 198 -> 199...
2013-10-25 18:27:18.740 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:18.740 16766 INFO migrate.versioning.api [-] 199 -> 200...
2013-10-25 18:27:21.408 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.409 16766 INFO migrate.versioning.api [-] 200 -> 201...
2013-10-25 18:27:21.475 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.475 16766 INFO migrate.versioning.api [-] 201 -> 202...
2013-10-25 18:27:21.634 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:21.635 16766 INFO migrate.versioning.api [-] 202 -> 203...
2013-10-25 18:27:22.878 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:22.878 16766 INFO migrate.versioning.api [-] 203 -> 204...
2013-10-25 18:27:23.029 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.029 16766 INFO migrate.versioning.api [-] 204 -> 205...
2013-10-25 18:27:23.382 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.383 16766 INFO migrate.versioning.api [-] 205 -> 206...
2013-10-25 18:27:23.902 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:23.903 16766 INFO migrate.versioning.api [-] 206 -> 207...
2013-10-25 18:27:24.129 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:24.130 16766 INFO migrate.versioning.api [-] 207 -> 208...
2013-10-25 18:27:24.768 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:24.769 16766 INFO migrate.versioning.api [-] 208 -> 209...
2013-10-25 18:27:26.086 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.087 16766 INFO migrate.versioning.api [-] 209 -> 210...
2013-10-25 18:27:26.380 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.381 16766 INFO migrate.versioning.api [-] 210 -> 211...
2013-10-25 18:27:26.607 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.607 16766 INFO migrate.versioning.api [-] 211 -> 212...
2013-10-25 18:27:26.850 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:26.851 16766 INFO migrate.versioning.api [-] 212 -> 213...
2013-10-25 18:27:27.615 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:27.616 16766 INFO migrate.versioning.api [-] 213 -> 214...
2013-10-25 18:27:28.279 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:28.280 16766 INFO migrate.versioning.api [-] 214 -> 215...
2013-10-25 18:27:28.321 16766 INFO migrate.versioning.api [-] done
2013-10-25 18:27:28.321 16766 INFO migrate.versioning.api [-] 215 -> 216...
2013-10-25 18:27:28.372 16766 INFO migrate.versioning.api [-] done

Cleanup and Nova validation

Restart services


We need to restart all of the Nova services so that they use the correct values. mseknibilel's OpenStack Grizzly Install Guide provided me an easier method than calling out each service.
# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Just make sure that you recognize that you are now in the /etc/init.d directory, not your home directory. :)
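If you'd rather not change directories at all, an equivalent one-liner that stays put:
# for i in /etc/init.d/nova-*; do service $(basename $i) restart; done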

Nova service validation


Validating the Nova services is pretty easy; it's a single command using the nova-manage utility. Look for the :-) under the State column; xxx in the State column indicates a failed service.
# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        havanafe                             internal         enabled    :-)   2013-10-25 18:34:02
nova-conductor   havanafe                             internal         enabled    :-)   2013-10-25 18:34:03
nova-consoleauth havanafe                             internal         enabled    :-)   2013-10-25 18:34:03
nova-scheduler   havanafe                             internal         enabled    :-)   2013-10-25 18:34:03

Nova troubleshooting

Log files


If you see xxx in the State column, the best place to start is the log files. There are several, and they are all located in the /var/log/nova directory. I recommend starting with the log file of the failed service (or services if you are really unlucky). Just as with Glance you may want to enable verbose or debug logging. FYI, verbose logging is enabled out of the box for Nova; if you want to enable debug logging you will need to add an entry in the /etc/nova/nova.conf file and then restart the services.
debug = True
Note, when Nova is set to debug logging the nova-manage service list output looks like this:
# nova-manage service list
2013-10-25 18:42:53.183 16911 DEBUG nova.servicegroup.api [-] ServiceGroup driver defined as an instance of db __new__ /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:62
2013-10-25 18:42:53.258 16911 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:84
2013-10-25 18:42:53.275 16911 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:84
2013-10-25 18:42:53.277 16911 DEBUG nova.openstack.common.lockutils [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Got semaphore "dbapi_backend" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:166
2013-10-25 18:42:53.277 16911 DEBUG nova.openstack.common.lockutils [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Got semaphore / lock "__get_backend" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:245
Binary           Host                                 Zone             Status     State Updated_At
2013-10-25 18:42:53.643 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-cert', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 12), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'cert', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 1L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.643 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.643761 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-cert        havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.644 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-conductor', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 12), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'conductor', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 2L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.645005 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-conductor   havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-consoleauth', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 13), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 53), 'report_count': 58L, 'topic': u'consoleauth', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 3L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.645 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:53 elapsed = 0.645806 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-consoleauth havanafe                             internal         enabled    :-)   2013-10-25 18:42:53
2013-10-25 18:42:53.646 16911 DEBUG nova.servicegroup.api [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] Check if the given member [{'binary': u'nova-scheduler', 'availability_zone': 'internal', 'deleted': 0L, 'created_at': datetime.datetime(2013, 10, 25, 18, 33, 13), 'updated_at': datetime.datetime(2013, 10, 25, 18, 42, 43), 'report_count': 57L, 'topic': u'scheduler', 'host': u'havanafe', 'disabled': False, 'deleted_at': None, 'disabled_reason': None, 'id': 4L}] is part of the ServiceGroup, is up service_is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:94
2013-10-25 18:42:53.646 16911 DEBUG nova.servicegroup.drivers.db [req-b353eb5e-e4b3-4170-897f-e2a55946fee8 None None] DB_Driver.is_up last_heartbeat = 2013-10-25 18:42:43 elapsed = 10.646609 is_up /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:71
nova-scheduler   havanafe                             internal         enabled    :-)   2013-10-25 18:42:43
Fun huh?

As with Glance, if the log files aren't providing any information, look through the /var/log/upstart/nova-*.log files to find the errors.

Horizon (OpenStack Dashboard)


The dashboard is optional but IMHO it just makes life easier. In any case, it simply provides a GUI front-end to the APIs; there's nothing unique about how it calls things.

Update the Django config

OpenStack Horizon uses Django as its web framework; you can find more information about Django at its website. Open the /etc/openstack-dashboard/local_settings.py file in an editor and then follow these steps.

Verify the time zone


Find the TIME_ZONE property and make sure it is set to UTC; if it isn't, configure it as such.
TIME_ZONE = "UTC"

Set the host IP


In the same file, look for the OPENSTACK_HOST property. We need to change the value to the IP address (or DNS A record) of the Keystone server, in my case 192.168.1.110.
OPENSTACK_HOST = "192.168.1.110"
Save the file.

Remove the Ubuntu theme

It's a bigger PITA than it is worth IMHO, and it blows up the nice network map display. Removing the package will also restart the Apache service so the config changes will be picked up.
# apt-get purge openstack-dashboard-ubuntu-theme
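If you keep the theme, or just want to be sure the local_settings.py changes are picked up, you can restart Apache by hand instead:
# service apache2 restart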

Validate Horizon

Open up a web browser and connect to http://192.168.1.110/horizon. You should expect an error message when you log in because we haven't configured Neutron yet. That's the next post.