OpenStack lab – Part 2

In the previous post, we described the installation of an OpenStack lab on a single (physical) host. In this post, we will continue.

First, we will perform some checks to see if our OpenStack environment works as expected. I will also show some useful commands for troubleshooting, and how to start and stop your environment.

We will also have a look at the GUI components and finally we will create our first stack.

Does it work as expected?

To check the result of our work, log on from our “normal” user account to the oslcontroller:

$ ssh -A root@oslcontroller

From there run the following command to check the status of OpenStack:

# openstack-status

You will get a very detailed overview as OpenStack admin:

# source keystonerc_admin

# openstack-status

Fig. 1 – status of the OpenStack services

Note that in the first part of the output, most OpenStack services should show as “active”. Some services show as “inactive (disabled on boot)”. There should be no services with the status “failed”.

Fig. 2 – command openstack-status. Two running nova instances were added later…
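If the list of services is long, you can filter for problem services with a quick one-liner (a sketch; the exact status strings may vary per release):

# openstack-status | grep -Ei 'failed|inactive'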

What if the installation didn’t go well?

First check the status as described.

You can check the install log install.out in the folder /root on the oslcontroller.

If a single service has failed, you can check the log files on the oslcontroller under /var/log for more information. Each service has its own subdirectory.
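For example, to search the Keystone logs for errors (a sketch; the exact log file names can differ per release):

# grep -ri error /var/log/keystone/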

In my case, I encountered some authentication errors related to the keystone service. These are the steps to redo the OpenStack installation:

$ ssh -A root@oslcontroller

Empty the MySQL database:

# mysql

> use keystone;

> delete from token;

> delete from user;

> quit

Stop OpenStack and rerun the installation:

# openstack-service stop

# packstack --answer-file=packstack-answers-multinode.txt 2>&1|tee -a install.out

Back on the physical host, the folder /root/bin contains two scripts for starting and stopping the five nodes: oslstart.sh and oslstop.sh. The oslstop.sh script stops the five containers and also removes the virtual network interfaces.
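For reference, a minimal sketch of what such a stop script could look like. The node names besides oslcontroller and oslnode1, and the veth interface handling, are assumptions based on this lab setup; the actual oslstop.sh may differ:

#!/bin/bash
# stop the five containers (node names partly assumed)
for node in oslcontroller oslnode1 oslnode2 oslnode3 oslnode4; do
    vzctl stop "$node"
done

# remove the leftover virtual (veth) network interfaces
for iface in $(ip -o link show | awk -F': ' '/veth/ {print $2}' | cut -d@ -f1); do
    ip link delete "$iface"
done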

You can access the five nodes with SSH, as mentioned before. Alternatively, as user root on the physical host, you can enter the nodes with:

# vzctl enter <node>, e.g. # vzctl enter oslnode1

You can also stop and restart a single node with these commands:

# vzctl stop <node>

# vzctl start <node>

GUI

If you choose to work directly on the physical host, you can start a browser from the graphical desktop.

However, I wanted to access the GUI components from my workstation. The virtual switch virbr0 works in the (default) NAT mode. By default, computers external to the physical host cannot communicate with the guests behind a virtual network switch operating in NAT mode. So, to access the oslcontroller from the workstation, I must add a few firewall rules.

Script 5_patchfw.sh must be placed on the physical host in the folder /root/bin. Edit the following line in the script to match your situation:

Host_ipaddr=192.168.2.42

Host_ipaddr must match the IP address of your physical host. You can also add a second IP address to the NIC of the host and use that for this purpose. As a result, traffic to ports 80, 443 and 6080 (we will come to that later) will be forwarded to the oslcontroller on IP 192.168.124.110.
Run the script as user root on the physical host.
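For reference, the forwarding boils down to something like the following. This is a sketch of what 5_patchfw.sh roughly does, assuming iptables and the NAT setup described above; the variable Ctrl_ipaddr is mine, and the actual script may look different:

#!/bin/bash
Host_ipaddr=192.168.2.42      # IP address of the physical host
Ctrl_ipaddr=192.168.124.110   # the oslcontroller behind virbr0

for port in 80 443 6080; do
    # rewrite the destination of incoming traffic to the oslcontroller
    iptables -t nat -A PREROUTING -d "$Host_ipaddr" -p tcp --dport "$port" \
        -j DNAT --to-destination "$Ctrl_ipaddr:$port"
    # make sure the forwarded traffic is accepted
    iptables -I FORWARD -d "$Ctrl_ipaddr" -p tcp --dport "$port" -j ACCEPT
done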

Now on the workstation, start a browser and use the following URL: https://<IP address physical host>/dashboard

The OpenStack logon window should welcome you. Two accounts have been created during the installation: “admin” and “demo”; the passwords can be found on the oslcontroller, in the folder /root, in the files keystonerc_admin and keystonerc_demo.

User “demo” is a normal user, whereas “admin” is the admin user.
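For reference, a keystonerc file generated by packstack typically looks like the sketch below; the password is a placeholder here, and the auth URL depends on your controller’s IP:

unset OS_SERVICE_TOKEN
export OS_USERNAME=demo
export OS_PASSWORD=<generated password>
export OS_AUTH_URL=http://192.168.124.110:5000/v2.0
export OS_TENANT_NAME=demo
export PS1='[\u@\h \W(keystone_demo)]\$ '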

Fig. 3 – the OpenStack logon window

When you log on with the “admin” account, under System, you will find an overview of your environment.

Fig. 4 – overview of the environment under System

The well-known Nagios is used for monitoring the cluster; use the following URL: http://<IP address physical host>/nagios

The credentials can be found at the bottom of the install.out file.

Fig. 5 – Nagios monitoring

Building your first stack

Now it is time to create our first stack.

On the oslcontroller, create a template file named demo-template.yml. It will create one instance with the cirros image and flavor m1.tiny.

heat_template_version: 2015-10-15
description: >
  Launch a basic instance with the CirrOS image using the
  ``m1.tiny`` flavor, ``demokey`` key, and one network.

parameters:
  NetID:
    type: string
    description: Network ID to use for the instance.

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: demokey
      networks:
      - network: { get_param: NetID }

outputs:
  instance_name:
    description: Name of the instance.
    value: { get_attr: [ server, name ] }
  instance_ip:
    description: IP address of the instance.
    value: { get_attr: [ server, first_address ] }

Second, create a keypair named demokey and save the private key:

# source keystonerc_demo

# nova keypair-add demokey > demokey.priv

# chmod 600 demokey.priv

We will create the instance in the “private” network, so we need the network ID:

# export NET_ID=$(openstack network list | awk '/ private / { print $2 }')

We can check the NET_ID:

# echo $NET_ID

Now we will create our stack and name it stack1:

# openstack stack create -t demo-template.yml --parameter \
"NetID=$NET_ID" stack1

The progress will be shown:

Fig. 6 – progress of the stack creation

You can view the status of all stacks with:

# openstack stack list
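Once the stack reaches the status CREATE_COMPLETE, you can also retrieve the outputs we defined in the template. A sketch, if your client version supports the stack output subcommand (otherwise, openstack stack show stack1 lists the outputs as well):

# openstack stack output show stack1 instance_ip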

See the OpenStack documentation for more info.

To connect to your instance:

Log on to the OpenStack GUI as user demo.

Under Compute, Instances, you will find the newly created instance. Clicking on the instance brings you to the Overview, Log and Console tabs.

Fig. 7 – the instance in the GUI

Go to the Console tab and click the link “Click here to show only console”. A new tab opens, pointing to the oslcontroller. As a side effect of the traffic redirection described earlier, I now need to adjust the IP address in the URL to the physical host’s IP.

Fig. 8 – console of the instance

Now I can log on to the instance. Note that this instance has Internet access.
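Alternatively, you can log on with SSH using the demokey created earlier. This assumes the instance IP is reachable from where you run the command, e.g. from the oslcontroller; the default CirrOS user is “cirros”:

# ssh -i demokey.priv cirros@<instance IP>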

At this point, you should be able to explore the OpenStack environment further. Enjoy!

I hope these posts were useful. As always, thank you for reading.
