OpenStack is a well-known open source cloud platform, most often deployed as IaaS. The development of OpenStack is governed by the OpenStack Foundation and is supported by dozens of well-known companies.
Although VMware has its own proprietary cloud platform, the company is also an active contributor to the OpenStack community and has developed VMware Integrated OpenStack. VMware Integrated OpenStack lets you rapidly deploy an OpenStack environment based on VMware’s own ESXi hypervisor and NSX. That is fine for a production environment, but not suitable for a simple lab: apart from the VMware licenses, the biggest drawback is that you would need an awful lot of hardware.
For some time I was searching for a way to build an OpenStack lab environment with some real-life features. Recently, the Dutch edition of the computer magazine c’t published an interesting article on how to build an all-in-one OpenStack environment. Some specifications:
- The whole environment runs on a single host with Fedora 24.
- The host runs 5 OpenVZ containers: 1 controller node, 3 compute nodes and 1 network node.
- The nodes are connected to each other and to the outside world by means of virtual networks based on libvirt; see this link for more information.
- The following OpenStack services are installed and configured on these nodes: Nova, Glance, Keystone, Horizon, Neutron, Swift, Cinder, Ceilometer and Heat.
- OpenStack components are installed and configured with help of Packstack, which uses Puppet modules controlled by an answer file to deploy the OpenStack Components.
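The libvirt-based virtual networks mentioned above can be inspected with standard virsh commands once they exist on the host. This is only a sketch: the network name "default" below is a placeholder, since the actual names depend on what the setup scripts create.

```shell
# List all libvirt virtual networks defined on the host:
virsh net-list --all

# Dump the XML definition of one network to see its bridge name and
# IP range (replace "default" with the actual network name):
virsh net-dumpxml default
```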
I started by installing an old HDD in an old ESXi host (HP ML110 G5) with only 8 GB of RAM. After finishing the installation and configuration of the host computer, the complete environment is created by running just 4 scripts!
Preparing the physical host
Download Fedora 24 from here.
The installation of Fedora is pretty straightforward, as there aren’t many configurable options during the installation.
It is good practice to run an OS update after finishing the installation, but watch out: the kernel will be updated from version 4.5.5 to 4.6.x, and unfortunately OpenVZ cannot handle this kernel version at this time. You can check your current kernel version with this command:
# uname -a
To reinstall the previous kernel, run this command:
# dnf install kernel-4.5.5
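To keep dnf from pulling in a 4.6.x kernel again on the next update, you can exclude kernel packages in the dnf configuration. This is a sketch using standard dnf options; remove the exclusion once OpenVZ supports newer kernels.

```shell
# Prevent future "dnf update" runs from replacing the 4.5.5 kernel:
echo "exclude=kernel*" >> /etc/dnf/dnf.conf

# Alternatively, exclude kernel packages for a single update run only:
dnf update --exclude='kernel*'
```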
In case you didn’t, it is a good practice to configure a static IP address.
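Fedora 24 uses NetworkManager, so a static address can be configured with nmcli. The connection name, addresses and gateway below are assumptions for illustration; substitute your own values.

```shell
# "ens192" and all addresses are examples -- adjust to your network:
nmcli connection modify ens192 \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
nmcli connection up ens192
```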
As I planned to work not on the console of the physical host but from my own notebook, access over SSH must be set up. To start the SSH daemon and make it start automatically on the next reboot, run these commands:
# systemctl start sshd
# systemctl enable sshd
ln -s '/usr/lib/systemd/system/sshd.service'
I also changed the runlevel to non-graphical.
To check the current runlevel:
# systemctl get-default
To change the runlevel:
# systemctl set-default multi-user.target
rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target'
The scripts for the installation of OpenStack can be downloaded from here.
Copy the archive with the scripts to the physical host.
During the run of the first script (as discussed in the next section) I encountered this error: “Failed to synchronize cache for repo ‘fedora’”.
The solution was to uncomment the baseurl line (printed in bold in the original article) in the file /etc/yum.repos.d/fedora.repo:
[fedora]
name=Fedora $releasever - $basearch
failovermethod=priority
baseurl=http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
If you haven’t already configured it, for a successful installation the physical host must have access to the Internet!
It is good practice to work from a “normal” account instead of the root account. In my case I created the account “paul”, home folder /home/paul.
Create folder /root/bin.
Unpack the archive and move the following files to the folder /root/bin: 1_hosts.add, 1_hostSetup.sh, 2_veSetup.sh, 3_nodeSetup.sh, changeSecrets.sh, oslstart.sh and oslstop.sh
Run the following command:
# cd /root/bin
# chmod 744 *.sh
The file packstack-answers-multinode.txt must be placed in the folder /root.
The file 4_patchOpenStack.sh must be placed under the “normal” user home folder, in my case: /home/paul.
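The file placement described above can be sketched as one sequence. The archive name below is an assumption (the download page determines the real name), and /home/paul is my "normal" user's home folder.

```shell
# Archive name is hypothetical -- use the file you actually downloaded:
tar xzf openstack-scripts.tar.gz

# Setup scripts and the hosts file go to /root/bin:
mv 1_hosts.add 1_hostSetup.sh 2_veSetup.sh 3_nodeSetup.sh \
   changeSecrets.sh oslstart.sh oslstop.sh /root/bin/

# The Packstack answer file goes to /root:
mv packstack-answers-multinode.txt /root/

# The patch script goes to the "normal" user's home folder:
mv 4_patchOpenStack.sh /home/paul/
```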
For SSH to function correctly, the key must be stored in the file /root/.ssh/authorized_keys.
As “normal” user and NOT as user root perform the following commands:
$ ssh-keygen
$ eval $(ssh-agent)
$ ssh-add
$ ssh-copy-id root@localhost
You can now test your work; with the following command you should gain root access without entering a password:
$ ssh -A root@localhost
We are now prepared for the installation of OpenStack.
For the first part of the installation, start as the “normal” user.
Become the root user on the physical host. The following command should not throw errors if you performed the previous steps correctly.
$ ssh -A root@localhost
cd to folder /root/bin
# cd bin
Start the first script, 1_hostSetup.sh:
# ./1_hostSetup.sh
After successful execution, the script should end with the following message:
inet 192.168.125.1/24 brd 192.168.125.255 scope global vzbr1
inet 192.168.126.1/24 brd 192.168.126.255 scope global vzbr2
inet 192.168.127.1/24 brd 192.168.127.255 scope global vzbr3
inet 192.168.124.1/24 brd 192.168.124.255 scope global vzbr0
The main function of this script is the creation of four virtual network bridges:
- 192.168.124.0 is the access network for the containers (eth0).
- 192.168.125.0 is the external network for the instances (eth1).
- 192.168.126.0 is the tunnel network for the containers (eth2).
- 192.168.127.0 is the storage network (eth3).
(eth0 to eth3 correspond to the four network interfaces of the 5 nodes.)
The file 1_hosts.add contains the network addresses for the 5 nodes. The script adds the content to the /etc/hosts file of the physical host to make life easier.
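As an illustration of what the script does with 1_hosts.add, entries like the following end up in /etc/hosts. The addresses below are assumptions on the 192.168.124.0/24 access network, not the real contents of the file, and a temporary file stands in for /etc/hosts in this sketch.

```shell
# Stand-in for /etc/hosts; the IP addresses are illustrative only:
HOSTS_FILE="$(mktemp)"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.124.10 oslcontroller
192.168.124.11 oslnode1
192.168.124.12 oslnode2
192.168.124.13 oslnode3
192.168.124.14 oslnet
EOF

# Count the node entries that were appended:
grep -c '^192\.168\.124\.' "$HOSTS_FILE"   # prints 5
```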
For the second part of the installation, stay root user and run the second script 2_veSetup.sh:
This script creates the 5 OpenVZ containers: one controller node “oslcontroller”, three Nova compute nodes “oslnode1”, “oslnode2” and “oslnode3”, and one Neutron network node “oslnet”.
The script also configures the four network interfaces for each node and copies the authorized_keys file from the local host to the new nodes. When this script has finished, the nodes should be running and you should be able to log in without providing credentials. Try to log in to the oslcontroller. If you’re still the root user, first return to your “normal” user account.
$ ssh -A root@oslcontroller
Do not forget to log out from the oslcontroller. The third script, 3_nodeSetup.sh, executes some additional steps for the installation of OpenStack and installs the Mitaka release by default. Alternatively, the script can also install the older Kilo or Liberty releases; the latest release, Newton, is not supported at this time. Again, you need to run the script as user root on the physical host.
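On the physical host you can check at any time that the five containers are up, using the standard OpenVZ tools. A sketch; the container IDs (CTIDs) depend on what 2_veSetup.sh assigned.

```shell
# List all OpenVZ containers with their state, CTID and IP addresses:
vzlist -a

# Enter a container directly from the host, bypassing SSH
# (substitute the actual CTID from the vzlist output):
# vzctl enter <CTID>
```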
The final part is the installation and configuration of OpenStack with help of Packstack. Packstack uses Puppet modules to deploy the various parts of OpenStack on multiple pre-installed servers, currently only on RHEL and CentOS. The most important part is the answer file packstack-answers-multinode.txt which defines the complete OpenStack Configuration.
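The answer file is an ordinary key/value text file. If you want to see which options it controls, you can compare it against a freshly generated default file; both commands below are standard Packstack usage.

```shell
# Generate a default answer file for comparison:
packstack --gen-answer-file=/root/defaults.txt

# Inspect a few settings in the supplied multinode answer file,
# e.g. which hosts the controller, compute and network roles land on:
grep -E '^CONFIG_(CONTROLLER|COMPUTE|NETWORK)' /root/packstack-answers-multinode.txt
```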
To run this script, start as “normal” user and execute:
$ ssh -A root@oslcontroller
On the oslcontroller run the following command as user root:
# packstack --answer-file=packstack-answers-multinode.txt 2>&1 | tee -a install.out
You can repeat the Packstack installation at any time; it should always end with a fully configured installation.
The packstack installer should end with the message
“**** Installation completed successfully ******”
and some additional information regarding user accounts and passwords.
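A quick way to verify the result: Packstack writes a keystonerc_admin file with the admin credentials to root's home folder on the controller, which can be sourced to query the freshly deployed services. A sketch, run on the oslcontroller.

```shell
# Load the admin credentials written by Packstack:
source /root/keystonerc_admin

# List the registered OpenStack services (Nova, Glance, Keystone, ...):
openstack service list

# Check that the three compute nodes report in:
nova service-list
```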
The final step is to run the script 4_patchOpenStack.sh as “normal” user.
In the next post, we will review the result of all our hard work.