OPENSTACK Deployment And
Administration
by
Ashish Sharma
Contents
1) Introduction
2) Installation Of OpenStack
3) Message Broker
4) Keystone
5) Swift
6) Glance
7) Cinder
8) Neutron
9) Nova
Introduction
Welcome to Class !
Thank you for attending this training session. Please let me know if you have any
special needs while attending this training.
About This Course
Openstack Administration
The OpenStack Administration course begins by explaining the OpenStack architecture
and the terms used throughout the course. The course shows how to install and configure
OpenStack, including basic concepts of cloud computing, the message broker
(RabbitMQ), identity service (keystone), object storage (swift), image service (glance),
block storage service (cinder), networking service (neutron), compute service (nova),
orchestration service (heat) and metering service (ceilometer). The course finishes with
a comprehensive review, implementing the services after a fresh installation of the
operating system.
Audience and prerequisites
This course is intended for people with knowledge of Linux system administration and
cloud administration who want to implement and manage their own private cloud. No
prior knowledge of OpenStack is required; using this course you can quickly set up your
own OpenStack environment and see whether it fits your requirements.
Cloud Computing
Cloud Computing is a technology that uses the internet and central remote servers to maintain data and
applications. Cloud computing allows consumers and businesses to use applications without
installation and access their personal files at any computer with internet access. This technology allows
for much more efficient computing by centralizing data storage, processing and bandwidth.
A simple example of cloud computing is web-based email such as Yahoo Mail, Gmail, or Hotmail. All you need is an
internet connection and you can start sending emails. The server and email management software is all
on the cloud (internet) and is totally managed by the cloud service provider (Yahoo, Google, etc.). The
consumer gets to use the software alone and enjoys the benefits.
Fig 1.1: Cloud Computing Environment
Cloud computing relies on sharing of resources to achieve coherence and economies of
scale, similar to a utility (like the electricity grid) over a network. At the foundation of
cloud computing is the broader concept of converged infrastructure and shared service.
It basically has three service models:
1) IaaS -- Infrastructure as a Service
2) PaaS -- Platform as a Service
3) SaaS -- Software as a Service
Fig 1.2: Cloud Computing Layer
And these can be deployed in three models:
1) Private Cloud
2) Public Cloud
3) Hybrid Cloud
Fig 1.3: Cloud Computing Types
Openstack Basics
This section gives an introduction to the components of OpenStack.
What Is OpenStack?
OpenStack is open source virtualization management software that allows users to
connect various technologies and components from different vendors and expose a
unified API, regardless of the underlying technology. With OpenStack, users can
manage different types of hypervisors, network devices and services, storage
components, and more using a single API that creates a unified data center fabric.
OpenStack is, therefore, a pluggable framework that allows vendors to write plug-ins
that implement a solution using their own technology, and which allows users to
integrate their technology of choice.
OpenStack Services
To achieve this agility, OpenStack is built as a set of distributed services. These services
communicate with each other and are responsible for the various functions expected
from virtualization/cloud management software. The following are some of the key
services of OpenStack:
1) Nova: A compute service responsible for creating instances and managing their
lifecycle, as well as managing the hypervisor of choice. The hypervisors are
pluggable to Nova, while the Nova API remains the same, regardless of the
underlying hypervisor.
2) Neutron: A network service responsible for creating network connectivity and
network services. It is capable of connecting with vendor network hardware
through plug-ins. Neutron comes with a set of default services implemented by
common tools. Network vendors can create plug-ins to replace any one of the
services with their own implementation, adding value to their users.
3) Cinder: A storage service responsible for creating and managing external storage,
including block devices and NFS. It is capable of connecting to vendor storage
hardware through plug-ins. Cinder has several generic plug-ins, which can
connect to NFS and iSCSI, for example. Vendors add value by creating dedicated
plug-ins for their storage devices.
4) Keystone: An identity management system responsible for user and service
authentication. Keystone is capable of integrating with third-party directory
services and LDAP.
5) Glance: An image service responsible for managing images uploaded by users.
Glance is not a storage service, but it is responsible for saving image attributes,
making a virtual catalog of the images.
6) Horizon: A dashboard that creates a GUI for users to control the OpenStack
deployment. This is an extensible framework that allows vendors to add features
to it. Horizon uses the same APIs exposed to users.
7) Swift: Stores and retrieves arbitrary unstructured data objects via a RESTful,
HTTP based API. It is highly fault tolerant with its data replication and scale out
architecture. Its implementation is not like a file server with mountable
directories.
8) Heat: Orchestrates multiple composite cloud applications by using either the
native HOT template format or the AWS CloudFormation template format,
through both an OpenStack-native REST API and a CloudFormation-compatible
Query API.
9) Ceilometer: Monitors and meters the OpenStack cloud for billing, benchmarking,
scalability, and statistical purposes.
Fig 1.4: Openstack Services
Installation of Openstack
Considerations to make before installing OpenStack on Linux (Red Hat or Oracle Linux).
Hardware requirements
Control Node Requirements
Processor: 64-bit x86 processor with support for the Intel 64 or AMD64 CPU
extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled.
Memory: 2GB RAM.
Disk Space: 50GB. Add additional disk space to this requirement based on the amount
of space that you intend to make available to virtual machine instances. This size varies
based on both the size of each disk image you intend to create and whether you intend
to share one or more disk images between multiple instances. 1TB of disk space is
recommended for a realistic environment capable of hosting multiple instances of
varying sizes.
Network: 2x 1Gbps network interface cards (NICs) for an all-in-one setup, or 4x 1Gbps
NICs for a multi-node setup.
Compute Node Requirements
Processor: 64-bit x86 processor with support for the Intel 64 or AMD64 CPU
extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled.
Memory: 2GB RAM minimum. For the compute node, 2GB RAM is the minimum
necessary to deploy one m1.small instance or three m1.tiny instances on a node without
memory swapping, so size this requirement based on the amount of memory that you
intend to make available to virtual machine instances.
Disk Space: 50GB. Add additional disk space to this requirement based on the amount
of space that you intend to make available to virtual machine instances. This size varies
based on both the size of each disk image you intend to create and whether you intend
to share one or more disk images between multiple instances. 1TB of disk space is
recommended for a realistic environment capable of hosting multiple instances of
varying sizes.
Network: 2x 1Gbps network interface cards (NICs) for an all-in-one setup, or 3x 1Gbps
NICs for a multi-node setup.
Software Requirements
To deploy OpenStack on Linux systems (either Oracle Linux or Red Hat Linux), you need
to have at least two machines with Red Hat Enterprise Linux 64-bit or Oracle Linux 64-bit,
version 6.5 or newer. If you are using Oracle Linux 64-bit, version 6.5 or higher, you can
also choose to run the Unbreakable Enterprise Kernel version 3, commonly known as
UEK3. One machine can act as a dedicated cloud controller node and the second machine
can act as a Nova compute node. For a production environment a multi-node setup is
recommended, but for testing and training purposes we can make use of an all-in-one
environment.
For the compute node, one can also choose the Xen hypervisor; make sure you go with
the latest release. To try it out, one can use Oracle VM Server version 3.3.1, which is Xen
based and uses the UEK3 kernel.
Note: Make sure that your machines have their clock synced via Network Time Protocol
(NTP).
Deployment Options
OpenStack supports various flexible deployment models, where each service can be
deployed separately on a different node or where services can be installed together with
other services. A user can set up any number of compute and control nodes to test the
OpenStack environment.
OpenStack supports the following deployment models:
1) All-in-one node: A complete installation of all the OpenStack services on an
Oracle/Redhat Linux node. This deployment model is commonly used to get started with
OpenStack or for development purposes. In this model, the user has
fewer options to configure, and the deployment does not require more than one node.
This deployment model is not supported for production use.
2) One control node and one or more compute nodes: This is a common deployment
across multiple servers. In this case, all the control services are installed on
Oracle/Redhat Linux, while separate compute nodes are set up to run
Oracle VM Server or Oracle/Redhat Linux for the sole purpose of running virtual
machines.
3) One control node, one network node, and one or more compute nodes:
Another common deployment configuration is when the network node is required
to be separate from the rest of the services. This can be due to compliance or
performance requirements. In this scenario, the network node is installed on
Oracle /Redhat Linux, and the rest of the management services are installed on a
separate controller node. Compute nodes can be installed as required, as in all
other cases.
4) Multiple control nodes with different services and one or more compute
nodes: As mentioned, OpenStack is very flexible, and there is no technical
limitation that stops users from experimenting with more sophisticated
deployment models. However, using one of the supported configurations
reduces complexity.
To get started, we recommend using either the all-in-one model or the model with one
control node and one or more compute nodes.
Installing Openstack with packstack
This installation is called an all-in-one installation, as we will be deploying the control
and compute roles of OpenStack on one server and configuring all services on that same
server. Remember that this is only for testing and training purposes and is not supported
for a production setup.
As said earlier, we need a server installed with Red Hat or Oracle Linux version 6.5 or
higher.
For an all-in-one deployment, two physical network interface cards are required. The
first network interface card must be configured with an IP address for managing the
server and accessing the API and dashboard. The second card is used to allow instances
to access the public network. The second network card will not have an IP address
configured. If there are no plans to allow instances external connectivity, there is no
need to have the second network interface card:
Ethernet port / IP address / Purpose:
eth0 (IP address: yes) -- Connected to the management or public network to allow
access to the OpenStack API and dashboard.
eth1 (IP address: no) -- Connected to the public network and used by OpenStack to
connect instances to the public network.
1) The openstack-packstack package includes the packstack utility to quickly deploy
OpenStack either interactively, or non-interactively by creating and using an
answer file that can be tuned. Install the openstack-packstack package on your
server using yum.
# yum install -y openstack-packstack
2) Before we start the OpenStack deployment via packstack, SSH keys are generated
for easy access to the Nova compute nodes from the cloud controller node. We do
not include a passphrase because the installation would otherwise prompt for it
hundreds of times during the process.
# ssh-keygen
3) Explore some of the options of the packstack command.
# packstack -h | less
4) The recommended way to do an installation is non-interactive, because this way
the installation settings are documented. An answer file with default settings can
be generated with the packstack command.
# packstack --gen-answer-file /root/answer.txt
5) Before we start the actual installation, edit the /root/answer.txt file and ensure the
following items are configured:
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_NTP_SERVERS=<IP address of your NTP Server>
CONFIG_SWIFT_INSTALL=y
CONFIG_HORIZON_SSL=y
6) Now, we are ready to start our actual deployment of openstack cloud controller.
# packstack --answer-file /root/answer.txt
welcome to Installer setup utility
Installing:
clean up ... [Done]
setting up ssh keys...root@<your server IP>'s password: ashish
...
7) Verify that the services are running:
# for i in /etc/init.d/open* /etc/init.d/neutron* ; do $i status; done
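As an additional sanity check, the openstack-status utility (from the openstack-utils
package, which packstack typically pulls in) summarizes the state of all installed
OpenStack services, and the keystonerc_admin file that packstack generates under /root
can be sourced to run a first authenticated command. A minimal sketch, assuming a
default packstack run:
# source /root/keystonerc_admin
# openstack-status
# keystone service-list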
Fig. 2.1 : All-In-One Openstack Setup
Message Broker
AMQP (Advanced Message Queuing Protocol) is an open standard application
layer protocol for message-oriented middleware. The defining features of AMQP are
message orientation, queuing, routing, reliability and security.
Messaging Server:
OpenStack uses a message broker to coordinate operations and status information
among services. The message broker service typically runs on the controller node.
OpenStack supports the following message brokers:
• RabbitMQ
• Qpid
• ZeroMQ
Until the Havana release, Qpid was used as the default message broker, but from
Icehouse onwards RabbitMQ has become the default.
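Whichever broker is used, each OpenStack service is pointed at it through its own
configuration file. As an illustration only (the option names below assume an
Icehouse-era RabbitMQ setup; adjust them to your release), the Nova configuration
could be updated with openstack-config as follows:
#openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
#openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host <controller IP or hostname>
#openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_userid guest
#openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password <your RabbitMQ password>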
Qpid Message Broker:
OpenStack services use the Qpid messaging system to communicate. There are two ways
to secure the Qpid communication:
1) Requiring a username and password before services can communicate with other
OpenStack services.
2) Using SSL to encrypt communication, which helps to prevent snooping and injection
of rogue commands in the communication channels.
Qpid Installation:
1) Install required packages:
#yum install -y qpid-cpp-server qpid-cpp-server-ssl cyrus-sasl-md5
2) Create a new SASL user and password for use with Qpid (qpidauth:ashish). Note
that SASL uses the QPID realm by default.
#saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID qpidauth
password: ashish
Again : ashish
3) Verify user:
#sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
qpidauth@QPID: userPassword
4) Provide authorization for the qpidauth user.
#echo 'acl allow qpidauth@QPID all all' > /etc/qpid/qpidauth.acl
#echo "QPIDD_OPTIONS='--acl-file /etc/qpid/qpidauth.acl'" >>
/etc/sysconfig/qpidd
# chown qpidd /etc/qpid/qpidauth.acl
# chmod 600 /etc/qpid/qpidauth.acl
5) Disable anonymous connections in /etc/qpidd.conf (remove ANONYMOUS).
The /etc/qpidd.conf file should contain:
cluster-mechanism=DIGEST-MD5
auth=yes
6) Now that the username and password file has been configured, let's work with
SSL.
#mkdir /etc/pki/tls/qpid
# chmod 700 /etc/pki/tls/qpid/
# chown qpidd /etc/pki/tls/qpid/
7) Create password file for certificate.
# echo ashish > /etc/qpid/qpid.pass
# chmod 600 /etc/qpid/qpid.pass
# chown qpidd /etc/qpid/qpid.pass
8) Generate the certificate database and make sure that you enter the correct
HOSTNAME.
#echo $HOSTNAME
# certutil -N -d /etc/pki/tls/qpid/ -f /etc/qpid/qpid.pass
# certutil -S -d /etc/pki/tls/qpid/ -n $HOSTNAME -s "CN=$HOSTNAME" -t
"CT,," -x -f /etc/qpid/qpid.pass -z /usr/bin/certutil
Generating key. This may take a few moments...
9) Make sure that the certificate directory is readable by qpidd user:
#chown -R qpidd /etc/pki/tls/qpid/
10) Add the following lines to /etc/qpidd.conf:
ssl-cert-db=/etc/pki/tls/qpid/
ssl-cert-name=<enter your server hostname>
ssl-cert-password-file=/etc/qpid/qpid.pass
require-encryption=yes
11) Start the qpid service, check for errors and make sure it is persistent.
#service qpidd start
#tail /var/log/messages
#chkconfig qpidd on
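To confirm that the broker is accepting SSL connections (the SSL listener defaults to
port 5671, the same qpid_port used later in the Cinder configuration), check the
listening sockets, for example:
#netstat -tlnp | grep qpidd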
Manual RabbitMQ Setup for OpenStack Platform
If you are deploying a full OpenStack cloud service, you will need to set up a working
message broker for the following OpenStack components:
• Block Storage
• Compute
• Openstack Networking
• Orchestration
• Image Service
• Telemetry
From the Icehouse release onwards, the default message broker is RabbitMQ.
Migration Prerequisites
If you are migrating to RabbitMQ from Qpid, you will first have to shut down the
OpenStack services along with Qpid:
# openstack-service stop
# service qpidd stop
Prevent QPid from starting at boot:
# chkconfig qpidd off
CONFIGURE THE FIREWALL FOR MESSAGE BROKER TRAFFIC
Before installing and configuring the message broker, you must allow incoming
connections on the port it will use. The default port for message broker (AMQP) traffic
is 5672.
To allow this the firewall must be altered to allow network traffic on the required port.
All steps must be run while logged in to the server as the root user.
Configuring the firewall for message broker traffic
1) Open the /etc/sysconfig/iptables file in a text editor.
2) Add an INPUT rule allowing incoming connections on port 5672. The new rule must
appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
3) Save the changes to the /etc/sysconfig/iptables file.
4) Restart the iptables service for the firewall changes to take effect.
#service iptables restart
The firewall is now configured to allow incoming connections to the message broker
on port 5672.
INSTALL AND CONFIGURE THE RABBITMQ MESSAGE BROKER
RabbitMQ replaces QPid as the default (and recommended) message broker. The
RabbitMQ messaging service is provided by the rabbitmq-server package.
To install RabbitMQ, run:
# yum install rabbitmq-server
Important
When installing the rabbitmq-server package, a guest user with a default guest password
will automatically be created for the RabbitMQ service. We strongly advise that you
change this default password, especially if you have IPv6 available. With IPv6,
RabbitMQ may be accessible from outside the network.
You can change the default guest password after launching the rabbitmq-server
service.
Manually Create RabbitMQ Configuration Files
When manually installing the RabbitMQ packages, the required RabbitMQ
configuration files will not be created. This is a known issue, and will be addressed in an
upcoming update.
To work around this, manually create the two required RabbitMQ configuration files.
These files, along with their required default contents, are as follows:
/etc/rabbitmq/rabbitmq.config
% This file managed by Puppet
% Template Path: rabbitmq/templates/rabbitmq.config
[
{rabbit, [
{default_user, <<"guest">>},
{default_pass, <<"guest">>}
]},
{kernel, [
]}
].
% EOF
/etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_PORT=5672
LAUNCH THE RABBITMQ MESSAGE BROKER
After installing the RabbitMQ message broker and configuring the firewall to accept
message broker traffic, launch the rabbitmq-server service and configure it to launch on
boot:
# service rabbitmq-server start
# chkconfig rabbitmq-server on
Important
When installing the rabbitmq-server package, a guest user with a default guest password
will automatically be created for the RabbitMQ service. Red Hat strongly advises that
you change this default password, especially if you have IPv6 available. With IPv6,
RabbitMQ may be accessible from outside the network.
To change the default guest password of RabbitMQ:
# rabbitmqctl change_password guest NEW_RABBITMQ_PASS
Replace NEW_RABBITMQ_PASS with a more secure password.
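To verify the broker afterwards, rabbitmqctl can list the configured users and report the
node status, for example:
# rabbitmqctl list_users
# rabbitmqctl status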
KEYSTONE
Keystone Identity Service:
It is a project which provides identity, token, catalog, and policy services for use with
Openstack. Keystone provides token and password based authentication (authN) and
high level authorization (authZ), and a central directory of users mapped to the
services they can access.
Introduction to Keystone service:
• Keystone is an OpenStack project that provides Identity, Token, Catalog and
Policy services for use specifically by projects in the OpenStack family. It
implements OpenStack’s Identity API.
• The OpenStack Identity API is implemented using a RESTful web service
interface. All requests to authenticate and operate against the OpenStack Identity
API should be performed using SSL over HTTP (HTTPS) on TCP port 443.
• Keystone is organized as a group of internal services exposed on one or many
endpoints. Many of these services are used in a combined fashion by the frontend,
for example an authenticate call will validate user/project credentials with the
Identity service and, upon success, create and return a token with the Token
service.
Add services to the Keystone service catalog and register their endpoints
The keystone service-create command needs three options to register a service:
# keystone service-create --name=SERVICENAME --type=SERVICETYPE
--description="DESCRIPTION OF SERVICE"
While --name and --description can be user-selected strings, the argument passed with
the --type switch must be one of identity, compute, network, image, or object-store.
After a service is registered in the service catalog, the end point of the service can be
defined:
#keystone endpoint-create --service-id SERVICEID --publicurl 'URL' --adminurl
'URL' --internalurl 'URL'
The --service-id can be obtained from the output of the service-create command shown
previously or by getting the information from the keystone service catalog with the
command keystone service-list.
Remove service and end points
Of course, it is possible to remove service end points and services from the Keystone
catalog.
To delete an end point, figure out its id with:
#keystone endpoint-list
Next, delete the endpoint of choice with:
#keystone endpoint-delete ENDPOINTID
Deleting a service is quite similar.
#keystone service-list
#keystone service-delete SERVICEID
OpenStack configuration files use the INI format, where there can be separate sections.
Each section uses a name enclosed in square brackets ([]). All Openstack configuration
files include a DEFAULT section; some Openstack configuration files include other
sections. The Openstack configuration files can be managed with the openstack-config
command. The example that follows explains the options and arguments used with
openstack-config.
#openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token
abcdefgh12345
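The same tool can also read a value back with --get, which is a quick way to confirm a
change, for example:
#openstack-config --get /etc/keystone/keystone.conf DEFAULT admin_token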
Deploying the Keystone identity service
We are going to install keystone without packstack command.
1) Install the required packages.
#yum install openstack-keystone openstack-selinux
2) Next, it's time to get the database backend installed. The openstack-db command
will take care of installing and initializing MySQL for the keystone service.
#yum install -y openstack-utils
#openstack-db --init --service keystone
3) Setup PKI infrastructure for keystone.
#keystone-manage pki_setup --keystone-user keystone --keystone-group
keystone
4) To be able to administrate the Keystone identity service, specify the
SERVICE_TOKEN and SERVICE_ENDPOINT environment variables. Save
the value of the generated SERVICE_TOKEN to a file for later use.
#export SERVICE_TOKEN=$(openssl rand -hex 10)
#export SERVICE_ENDPOINT=http://<IP or server hostname>:35357/v2.0
#echo $SERVICE_TOKEN > /root/ks_admin_token
#cat /root/ks_admin_token
5) The generated SERVICE_TOKEN must correspond to the admin_token
setting in the /etc/keystone/keystone.conf file.
#openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token
$SERVICE_TOKEN
6) Start the openstack-keystone service and make sure it is persistent.
#service openstack-keystone start
#chkconfig openstack-keystone on
7) To verify success, check if the keystone-all process is running
#ps -ef | grep keystone-all
#grep ERROR /var/log/keystone/keystone.log
8) Add keystone as an endpoint in the registry of end points in keystone, which is
required for the Horizon web dashboard.
#keystone service-create --name=keystone --type=identity
--description="Keystone Identity Service"
#keystone endpoint-create --service-id <enter the service id here> --publicurl
'http://<IP or server hostname>:5000/v2.0' --adminurl 'http://<IP or server
hostname>:35357/v2.0' --internalurl 'http://<IP or server hostname>:5000/v2.0'
Check the output carefully for mistakes. If needed, delete the end point (keystone
endpoint-delete ID), then recreate it.
Managing users with the keystone command
The keystone command can be used to create, delete and modify users.
Before starting to use the command, it is important to source our environment variables
to have administrative permissions.
#source ~/keystonerc_admin
Adding a new user:
#keystone user-create --name USERNAME --pass PASSWORD
To list existing users:
#keystone user-list
To delete a user:
#keystone user-delete USERID
Managing tenants with the keystone command
To create a tenant:
#keystone tenant-create --name TENANTNAME
To list existing tenants:
#keystone tenant-list
To delete:
#keystone tenant-delete TENANTID
Roles In Keystone
By default there are two standard roles defined in keystone:
– admin : a role with administrative privileges
– member : a role for a project member
Even though the definitions for the roles are present, they still have to be added
manually to the keystone catalog if keystone is manually deployed.
For example, to add a role member to the keystone catalog, use:
# keystone role-create --name Member
Associate a user from a specific tenant with a role
Of course, we also have to be able to add one or more roles to a user.
To accomplish this, it is necessary to have the USERID, the TENANTID, and the
ROLEID we want to attach to the user, then connect them with:
#keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id
TENANTID
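The assignment can then be verified with the user-role-list subcommand, for example:
#keystone user-role-list --user USERID --tenant TENANTID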
Creating the keystone admin user
1) #keystone user-create --name admin --pass <your passphrase>
2) #keystone role-create --name admin
3) #keystone tenant-create --name admin
4) #keystone user-role-add --user admin --role admin --tenant admin
5) # cat >> /root/keystonerc_admin << EOF
> export OS_USERNAME=admin
> export OS_TENANT_NAME=admin
> export OS_PASSWORD=<your passphrase>
> export OS_AUTH_URL=http://<server hostname or IP>:35357/v2.0/
> export PS1='[\u@\h \W(keystone_admin)]\$ '
> EOF
6) To test:
#unset SERVICE_TOKEN
#unset SERVICE_ENDPOINT
#source /root/keystonerc_admin
(keystonerc_admin)#keystone user-list
SWIFT
What is the Swift object storage service?
The object storage service provides object storage in virtual containers, which allows
users to store and retrieve files. The service's distributed architecture supports horizontal
scaling; redundancy as failure-proofing is provided through software-based data
replication. Because it supports asynchronous eventual-consistency replication, it is well
suited to multiple data center deployments.
Object storage uses the concept of:
• Storage Replicas: used to maintain the state of objects in the case of outage. A
minimum of three replicas is recommended.
• Storage Zones: used to host replicas. Zones ensure that each replica of a given
object can be stored separately. A zone might represent an individual disk drive or
array, a server, all the servers in a rack, or even an entire data center.
• Storage Regions: essentially a group of zones sharing a location. Regions can be,
for example, groups of servers or server farms, usually located in the same
geographical area. Regions have a separate API end point per object storage
service installation, which allows for a discrete separation of services.
Architecture of the object storage service
The object storage service is a modular service with the following components:
• openstack-swift-proxy: The proxy service uses the object ring to decide where to
direct newly uploaded objects. It updates the relevant container database to reflect
the presence of a new object. If a newly uploaded object goes to a new container,
the proxy service also updates the relevant account database to reflect the new
container. The proxy service also directs get requests to one of the nodes where a
replica of the requested object is stored, either randomly or based on response
time from the node. It exposes the public API, and is responsible for handling
requests and routing them accordingly. Objects are streamed through the proxy
server to the user (not spooled). Objects can also be served out via HTTP.
• Openstack-swift-object: The object service is responsible for storing data objects
in partitions on disk devices. Each partition is a directory, and each object is held
in a subdirectory of its partition directory. An MD5 hash of the path to the object is
used to identify the object itself. The service stores, retrieves, and deletes objects.
• Openstack-swift-container: The container service maintains databases of objects
in containers. There is one database file for each container, and they are replicated
across the cluster. Containers are defined when objects are put in them. Containers
make finding objects faster by limiting object listings to specific container
namespaces. The container service is responsible for listings of containers using
the account database.
• Openstack-swift-account: The account service maintains databases of all of the
containers accessible by any given account. There is one database file for each
account, and they are replicated across the cluster. Any account has access to a
particular group of containers. An account maps to a tenant in the identity service.
The account service handles listings of objects (what objects are in a specific
container) using the container database.
All of the services can be installed on each node or alternatively on dedicated machines.
In addition, the following components are in place for proper operation:
• Ring Files: contain details of all the storage devices, and are used to deduce
where a particular piece of data is stored (maps the names of stored entities to
their physical location). One file is created for each object, account and container
server.
• Object Storage: With either ext4 (recommended) or XFS file systems. The
mount point is expected to be /srv/node.
• Housekeeping processes: for example replication and auditors.
Installing the Swift Object Storage Service
We are going to prepare the keystone identity service to be used with the Swift object
storage service.
1) Install the necessary components for the swift object storage service:
#yum install -y openstack-swift-proxy openstack-swift-object openstack-swift-container \
openstack-swift-account memcached
2) Make sure that the keystone environment variables with the authentication
information are loaded.
#source /root/keystonerc_admin
3) Create a Swift user and assign it a password.
#keystone user-create --name swift --pass <password>
4) Make sure the admin role exists before proceeding.
#keystone role-list | grep admin
If there is no admin role, create one.
#keystone role-create --name admin
5) Make sure the services tenant exists before proceeding.
#keystone tenant-list | grep services
If there is no services tenant, create one.
#keystone tenant-create --name services
6) Add the Swift user to the services tenant with the admin role.
#keystone user-role-add --role admin --tenant services --user swift
7) Check if the object store service already exists in Keystone.
#keystone service-list
If it does not exist, create it.
#keystone service-create --name swift --type object-store --description "swift
storage service"
8) Create the end point for the swift object storage service.
#keystone endpoint-create --service-id <xxxxxxxxxxx> --publicurl
"http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s" --adminurl
"http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s" --internalurl
"http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s"
Deploying a SWIFT storage node
The object storage service stores objects on the file system, usually on a number of
connected physical storage devices. All of the devices which will be used for object
storage must be formatted with either ext4 or XFS, and mounted under the /srv/node/
directory.
Any dedicated storage node needs to have the following packages installed:
• openstack-swift-object
• openstack-swift-container
• openstack-swift-account
Deploying a Swift Storage Node:
1) Create a single partition on the new disk presented to the server, as we usually do
in Linux.
#fdisk /dev/sdb
2) Create ext4 filesystem on /dev/sdb1
#mkfs.ext4 /dev/sdb1
3) Create a mount point and mount the devices persistently to the appropriate zone.
#mkdir -p /srv/node/z1d1
#cp /etc/fstab /etc/fstab.bak
#echo "/dev/sdb1 /srv/node/z1d1 ext4 acl,user_xattr 0 0" >> /etc/fstab
4) Mount the new Swift storage:
#mount -a
5) Change the ownership of the contents of /srv/node to swift:swift
#chown -R swift:swift /srv/node/
6) Restore SELinux context of /srv
#restorecon -R /srv
7) Make backups of the files that will be changed:
#cp /etc/swift/swift.conf /etc/swift/swift.conf.orig
#cp /etc/swift/account-server.conf /etc/swift/account-server.conf.orig
#cp /etc/swift/container-server.conf /etc/swift/container-server.conf.orig
#cp /etc/swift/object-server.conf /etc/swift/object-server.conf.orig
8) Use openstack-config command to add hash prefix and suffix to
/etc/swift/swift.conf.
#openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
$(openssl rand -hex 10)
#openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
$(openssl rand -hex 10)
9) The account, container and object swift service need to bind to the same IP used
for mapping the rings later on. Localhost only works for a single storage node
configuration.
#openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip
<mention your IP>
#openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip
<mention your IP>
#openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip <mention
your IP>
10) Start up the services now:
# service openstack-swift-account start
#service openstack-swift-container start
#service openstack-swift-object start
11) Tail /var/log/messages and check that everything looks fine.
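The account, container and object services listen on ports 6002, 6001 and 6000
respectively (the same ports used when the rings are built in the next section), so a
quick look at the listening sockets confirms they came up:
#netstat -tlnp | egrep '600[0-2]'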
Configuring Swift Object Storage service rings
Rings determine where data is stored among the cluster nodes. Ring files are generated
using the swift-ring-builder tool.
Three ring files are required:
• Object
• Container
• Account services
Each storage device in a cluster is divided into partitions, with a recommended
minimum of 100 partitions per device. Each partition is physically a directory on a disk.
A configurable number of bits from the MD5 hash of the file system path to the partition
directory, known as the partition power, is used as a partition index for the device. The
partition count of a cluster with 1,000 devices and 100 partitions on each device is
100,000. The partition count is used to calculate the partition power, where 2 to the
partition power is the partition count. When the computed partition power is a fraction,
it is rounded up. If the partition count is 100,000, the partition power is 17 (16.61
rounded up).
Expressed mathematically: 2 ^ (partition power) = partition count.
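For example, the arithmetic for the 100,000-partition case above can be checked with a
quick one-liner (any calculator works; python is used here purely for illustration):
# python -c 'import math; print math.ceil(math.log(100000, 2))'
17.0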
Ring files are generated using three parameters:
• Partition power: The value is calculated as shown previously and rounded up after
calculation.
• Replica count: This represents the number of times the data gets replicated in the
cluster.
• min_part_hours: This is the minimum number of hours before a partition can be
moved. It ensures availability by not moving more than one copy of a given data
item within the min_part_hours time period.
A fourth parameter, zone, is used when adding devices to rings. Zones are a flexible
abstraction, where each zone should be separated from other zones. You can use a zone
to represent sites, cabinets, nodes, or even devices.
Configuring Swift Object Storage Service rings:
1) Source the keystonerc_admin file first.
#source /root/keystonerc_admin
2) Use the swift-ring-builder command to build one ring for each service.
#swift-ring-builder /etc/swift/account.builder create 12 2 1
#swift-ring-builder /etc/swift/container.builder create 12 2 1
#swift-ring-builder /etc/swift/object.builder create 12 2 1
In the above, 12 is the partition power, 2 is the replica count and 1 is min_part_hours.
3) Add the device to the account service.
#for i in 1 2; do
> swift-ring-builder /etc/swift/account.builder add z${i}-<your server IP>:6002/z${i}d1 100
> done
Above, 100 is the weight assigned to the device.
4) Add the device to the container service.
#for i in 1 2; do
> swift-ring-builder /etc/swift/container.builder add z${i}-<your server IP>:6001/z${i}d1 100
> done
5) Add the device to the object service.
#for i in 1 2; do
> swift-ring-builder /etc/swift/object.builder add z${i}-<your server IP>:6000/z${i}d1 100
> done
6) After successfully adding the devices, rebalance the rings.
#swift-ring-builder /etc/swift/account.builder rebalance
#swift-ring-builder /etc/swift/container.builder rebalance
#swift-ring-builder /etc/swift/object.builder rebalance
7) Verify the ring files have been successfully created.
#ls /etc/swift/*gz
8) Make sure /etc/swift directory is owned by root:swift.
#chown -R root:swift /etc/swift
Deploying the Swift Object Storage Proxy service
The object storage proxy service determines to which node GET and PUT requests are directed.
While it can be installed alongside the account, container, and object services, it will
usually end up on a separate system in production deployments.
1) Make a backup of the original config file:
#cp /etc/swift/proxy-server.conf /etc/swift/proxy-server.conf.orig
2) Update the configuration file for the swift proxy server with the correct
authentication details for the appropriate keystone user.
#openstack-config --set /etc/swift/proxy-server.conf filter:authtoken
admin_tenant_name services
#openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host
<your server IP>
#openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user
swift
#openstack-config --set /etc/swift/proxy-server.conf filter:authtoken
admin_password <your password>
3) Start the memcached and openstack-swift-proxy services and enable them permanently.
#service memcached start
#service openstack-swift-proxy start
#chkconfig memcached on
#chkconfig openstack-swift-proxy on
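Once the proxy is running, a quick smoke test can be done with the swift command-line
client (assuming the python-swiftclient package is installed and the keystonerc_admin
variables are sourced): upload a file into a test container and list it back.
#source /root/keystonerc_admin
#swift upload testcontainer /etc/hosts
#swift list testcontainer
#swift stat testcontainer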
GLANCE
The Glance project provides a service where users can upload and discover data assets that are meant to
be used with other services. This currently includes images and metadata definitions.
Glance image services include discovering, registering, and retrieving virtual machine images. Glance has
a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.
VM images made available through Glance can be stored in a variety of locations from simple filesystems
to object-storage systems like the OpenStack Swift project.
Glance, as with all OpenStack projects, is written with the following design guidelines in mind:
• Component based architecture: Quickly add new behaviors
• Highly available: Scale to very serious workloads
• Fault tolerant: Isolated processes avoid cascading failures
• Recoverable: Failures should be easy to diagnose, debug, and rectify
• Open standards: Be a reference implementation for a community-driven API
Basic Architecture
OpenStack Glance has a client-server architecture and provides a user REST API through which requests
to the server are performed.
Internal server operations are managed by a Glance Domain Controller divided into layers. Each layer
implements its own task.
All the file operations are performed using the glance_store library, which is responsible for interaction with
external storage back ends or the local filesystem, and provides a uniform interface for access.
Glance uses a SQL-based central database (Glance DB) that is shared with all the components in the
system.
Fig. 6.1: OpenStack Glance Architecture
The Glance architecture consists of several components:
• A client: any application that uses the Glance server.
• REST API: exposes Glance functionality via REST.
• Database Abstraction Layer (DAL): an application programming interface which unifies the
communication between Glance and databases.
• Glance Domain Controller: middleware that implements the main Glance functionalities:
authorization, notifications, policies, database connections.
• Glance Store: organizes interactions between Glance and various data stores.
• Registry Layer: an optional layer organizing secure communication between the domain and the
DAL by using a separate service.
Deploying the Glance Image Service:
The Glance image service requires keystone to be in place for identity management
and authorization. It uses MySQL database to store the metadata information for
the images. Glance supports a variety of disk formats, such as:
• raw
• vhd
• vmdk
• vdi
• iso
• qcow2
• aki, ari and ami
And a variety of container formats:
• bare
• ovf
• aki, ari and ami
Installation of Glance Service Manually
#yum install -y openstack-glance
#cp /usr/share/glance/glance-registry-dist.conf /etc/glance/glance-registry.conf
Initialize the database:
#openstack-db --init --service glance --password <your_password>
Once the above is done, then you need to update the glance configuration to make use of
keystone as the identity service:
#openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken
admin_tenant_name admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken
admin_user admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken
admin_password <your password>
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor
keystone
#openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken
admin_tenant_name admin
#openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken
admin_user admin
#openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken
admin_password <your password>
Once the above changes have been made, we need to start and enable the services:
#service openstack-glance-registry start
#chkconfig openstack-glance-registry on
#service openstack-glance-api start
#chkconfig openstack-glance-api on
Now, finally, we will add the service to the Keystone catalog:
#source /root/keystonerc_admin
#keystone service-create --name glance --type image --description "Glance Image
Service"
And, once the service is created then we need to create the endpoint:
#keystone endpoint-create --service-id <specify the service id generated from above
command output> --publicurl http://<hostname or IP of your server>:9292
--adminurl http://<hostname or IP of your server>:9292 --internalurl
http://<hostname or IP of your server>:9292
If you now want to add another node to provide redundant service availability for
Glance, this is even simpler. You just have to install the openstack-glance packages
shown earlier, copy the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
files from the previous node to the new node, and then start and enable the services.
Once the services have started, either create new end points for the new Glance server or
place a load balancer in front of the two Glance servers to balance the load. The load
balancer can either be hardware (such as F5) or software (such as HAproxy). If you are
using a load balancer, use a single set of endpoints for the Glance service using the front-
end IP address of the load balancer.
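As a rough sketch of the software option (the hostnames glance1 and glance2 and all
addresses are hypothetical, and the exact directives depend on your HAproxy version),
the load balancer configuration could look like this:
frontend glance-api
    bind <load balancer IP>:9292
    mode http
    default_backend glance-api-nodes
backend glance-api-nodes
    mode http
    balance roundrobin
    server glance1 <glance1 IP>:9292 check
    server glance2 <glance2 IP>:9292 check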
Some basic command line operations using Glance Service:
1) Sourcing the admin file:
[root@control-node ~]# source /root/keystonerc_admin
2) How to list available images:
[root@control-node ~(keystone_admin)]# glance image-list
+--------------------------------------+----------+-------------+------------------+------------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size       | Status |
+--------------------------------------+----------+-------------+------------------+------------+--------+
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | hvm      | raw         | bare             | 6442450944 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | system   | raw         | bare             | 3145728000 | active |
| 3e5176e4-da36-4122-9220-5a45116b9559 | test_iso | iso         | bare             | 254947328  | active |
+--------------------------------------+----------+-------------+------------------+------------+--------+
3) How to upload an image:
[root@control-node ~(keystone_admin)]# glance image-create --name system
--disk-format=raw --container-format=bare < /mnt/system.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 87fdf8731d7af378f688c1fb93709bb6 |
| container_format | bare |
| created_at | 2014-10-01T21:43:34 |
| deleted | False |
| disk_format | raw |
| id | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | system |
| owner | ffdefd08b3c842a28967f646036253f8 |
| protected | False |
| size | 3145728000 |
| status | active |
| updated_at | 2014-10-01T21:44:58 |
+------------------+--------------------------------------+
4) How to see details about specific image:
[root@control-node ~(keystone_admin)]# glance image-show system
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 87fdf8731d7af378f688c1fb93709bb6 |
| container_format | bare |
| created_at | 2014-10-01T21:43:34 |
| deleted | False |
| disk_format | raw |
| id | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | system |
| owner | ffdefd08b3c842a28967f646036253f8 |
| protected | False |
| size | 3145728000 |
| status | active |
| updated_at | 2014-10-01T21:44:58 |
+------------------+--------------------------------------+
5) To view more options or actions associated with the glance command, check its
help output; it will list all available options:
#glance --help
6) You can also get into the MySQL DB and verify things there. Below are some
useful tips:
[root@control-node ~(keystone_admin)]# mysql -u root -p <<< This is how you connect
to the MySQL DB
Enter password: <enter the password for your MySQL DB; it is available in your
packstack answer file>
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9211
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> <<<<< This says that you are now connected to DB.
To list databases:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| heat |
| keystone |
| mysql |
| nova |
| ovs_neutron |
| test |
+--------------------+
9 rows in set (0.00 sec)
To list DB specific Users and hosts:
mysql> SELECT User,host,password FROM mysql.user;
+----------------+-----------+-------------------------------------------+
| User | host | password |
+----------------+-----------+-------------------------------------------+
| root | localhost | *2DB9A8025FC3AB6656D9FA0C0516A35D0DDCAB50 |
| cinder | % | *C40F824DE4E1AC3850660914EB44B35F2C69B819 |
| keystone_admin | 127.0.0.1 | *8D80B63D60A8F84BAA940BFF836DFBA382895F90 |
| nova | % | *E01B1E5009C2ACD39DB27AF619313E824C451134 |
| glance | % | *4510B28F08B897398FB03B9AC43027D0815E86FB |
| nova | 127.0.0.1 | *E01B1E5009C2ACD39DB27AF619313E824C451134 |
| neutron | % | *0B4051CB7295D26D48A9D20EE1664F0660F0C3E7 |
| neutron | 127.0.0.1 | *0B4051CB7295D26D48A9D20EE1664F0660F0C3E7 |
| cinder | 127.0.0.1 | *C40F824DE4E1AC3850660914EB44B35F2C69B819 |
| keystone_admin | % | *8D80B63D60A8F84BAA940BFF836DFBA382895F90 |
| glance | 127.0.0.1 | *4510B28F08B897398FB03B9AC43027D0815E86FB |
| heat | % | *69355EB9C45A11DA21E6F6A52E7E4E105714D062 |
| heat | localhost | *69355EB9C45A11DA21E6F6A52E7E4E105714D062 |
+----------------+-----------+-------------------------------------------+
13 rows in set (0.00 sec)
To use any specific DB, like here we are using Glance DB:
mysql> use glance;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
To view tables under glance DB:
mysql> show tables;
+------------------+
| Tables_in_glance |
+------------------+
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| migrate_version |
| task_info |
| tasks |
+------------------+
8 rows in set (0.00 sec)
To view id and status of images:
mysql> select id,status from images;
+--------------------------------------+--------+
| id | status |
+--------------------------------------+--------+
| 3e5176e4-da36-4122-9220-5a45116b9559 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | active |
| acac429b-f353-4ef0-a7f0-2d7d74c81530 | killed |
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | active |
| e82b82b8-6905-4561-b030-893edc79128b | killed |
+--------------------------------------+--------+
5 rows in set (0.01 sec)
To come out:
mysql>quit
CINDER
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a
Nova component called nova-volume, but has become an independent project since the
Folsom release.
Block Storage allows devices to be exposed and connected to compute instances for
expanded storage, better performance and integration with enterprise storage platforms.
OpenStack provides persistent block-level storage devices for use with OpenStack compute
instances. These can be exposed to applications as well.
The Block Storage system manages the creation, attaching and detaching of block
devices to servers.
Cinder Components:
Block storage functionality is provided in OpenStack by three separate services, collectively
referred to as the block storage service or Cinder. The three services are:
• Openstack-cinder-api: The API service provides an HTTP end point for block storage
requests. When an incoming request is received, the API verifies that identity requirements
are met and translates the request into a message denoting the required block storage
actions. The message is then sent to the message broker for processing by the other
block storage services.
• Openstack-cinder-scheduler: The scheduler service reads requests from the
message queue and determines on which block storage host the request must be
performed. The scheduler then communicates with the volume service on the selected
host to process the request.
• Openstack-cinder-volume: The volume service manages the interaction with the
block storage devices. As requests come in from the scheduler, the volume service
creates, modifies, and removes volumes as required.
Fig. 7.1: Cinder Components
Fig. 7.2: Cinder Flow
Installing Cinder Service:
1) Install the required package:
#yum install -y openstack-cinder
2) Copy the /usr/share/cinder/cinder-dist.conf file to /etc/cinder/cinder.conf to set
some default values:
#cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
#cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
3) To be able to authenticate with administrate privileges, source the keystonerc_admin
file:
#source /root/keystonerc_admin
4) Initialize the database for use with cinder with a password of <your choice>. For a
production deployment, be sure to pick a more difficult password.
#openstack-db --init --service cinder --password <your_password> --rootpw
<your_password>
5) Create a cinder user, then link the cinder user and the admin role within the services
tenant.
#keystone user-create --name cinder --pass <your_password>
#keystone user-role-add --user cinder --role admin --tenant services
6) Add the service to the keystone catalog.
#keystone service-create --name=cinder --type=volume
--description="OpenStack Block Storage Service"
7) Create the end point for the service.
#keystone endpoint-create --service-id <service id generated from step 6 above>
--publicurl 'http://<IP or Hostname>:8776/v1/%(tenant_id)s' --adminurl 'http://<IP or
Hostname>:8776/v1/%(tenant_id)s' --internalurl 'http://<IP or hostname>:8776/v1/%
(tenant_id)s'
8) Update the cinder configuration to use keystone as an identity service.
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken
admin_tenant_name services
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user
cinder
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken
admin_password <your_password>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_username
qpidauth
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_password
<your_password>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_protocol ssl
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_port 5671
9) Start and enable the services. Check for any errors.
#service openstack-cinder-scheduler start
#service openstack-cinder-api start
#service openstack-cinder-volume start
#tail /var/log/cinder/*
#chkconfig openstack-cinder-scheduler on
#chkconfig openstack-cinder-api on
#chkconfig openstack-cinder-volume on
10) Edit the /etc/tgt/targets.conf file to include the line include /etc/cinder/volumes/*
in order to configure iSCSI to include Cinder volumes.
#echo 'include /etc/cinder/volumes/*' >> /etc/tgt/targets.conf
11) Start and enable the tgtd service.
#service tgtd start
#tail /var/log/messages
#chkconfig tgtd on
12) Check the status of all the OpenStack services:
#openstack-status
13) Creating a cinder volume:
#cinder create --display-name vol1 2
14) To list a volume:
#cinder list
15) To delete a volume:
#cinder delete <volume_name>
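Once the Nova compute service is in place, a volume created this way can be attached
to a running instance. A hedged example using the nova client (the instance name
myinstance and the device path are purely illustrative):
#nova volume-attach myinstance <volume_id> /dev/vdb
#nova volume-detach myinstance <volume_id>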
Configuring External Storage
We can configure and add a vendor-specific driver and make use of external storage.
OpenStack provides a wide range of flexibility in choosing the external storage; a few
of the options are listed here:
• NetApp
• EMC
• IBM
• ZFS, etc
We can choose any of these external storage systems and configure it for use with Cinder, but to
do this we will require a vendor-specific driver (which needs to be provided by the storage vendor).
Here I will show you how we can configure the ZFSSA storage:
1. The first step is to enable REST access at the ZFS SA. Otherwise Cinder will not be
able to communicate with the appliance:
From the web-based management interface of the ZFS SA, go to "Configuration->Services"; at
the end of the page there is a 'REST' button, enable it.
Now, to verify:
OracleZFS:configuration services rest> show
Properties:
<status> = online
For advanced users, additional features in the appliance could be configured but for the purpose of this
demonstration this is sufficient.
2. At the ZFS SA, create a pool - go to Configuration -> Storage and add a pool as shown below;
the pool is named "default":
OracleZFS:configuration storage> show
Properties:
pool = Default
status = online
errors = 0
profile = mirror
log_profile = -
cache_profile = -
scrub = none requested
3. The final step at the ZFSSA is to download and run the workflow file "cinder.akwf".
Download the file "cinder.akwf" from
"https://java.net/projects/solaris-userland/sources/gate/show/components/openstack/cinder/files/zfssa".
Run the workflow on your Oracle ZFS Storage Appliance.
The workflow:
a) Creates the user if the user does not exist
b) Sets Role authorizations for performing Cinder driver operations
c) Enables the RESTful service if currently disabled
The workflow can be run from the Command Line Interface (CLI) Or from the Browser User
Interface (BUI) of the appliance.
* From the CLI:
zfssa:maintenance workflows> download
zfssa:maintenance workflows download (uncommitted)> show
Properties:
url = (unset)
user = (unset)
password = (unset)
zfssa:maintenance workflows download (uncommitted)> set url="url to the cinder.akwf file"
url = "url to the cinder.akwf file"
zfssa:maintenance workflows download (uncommitted)> commit
Transferred 2.64K of 2.64K (100%) ... done
zfssa:maintenance workflows> ls
Properties:
showhidden = false
Workflows:
WORKFLOW NAME OWNER SETID ORIGIN VERSION
workflow-000 Clear locks root false Oracle Corporation 1.0.0
workflow-001 Configuration for OpenStack Cinder Driver root false Oracle Corporation 1.0.0
zfssa:maintenance workflows> select workflow-001
zfssa:maintenance workflow-001 execute (uncommitted)> set name=openstack
name = openstack
zfssa:maintenance workflow-001 execute (uncommitted)> set password=devstack
password = ********
zfssa:maintenance workflow-001 execute (uncommitted)> commit
User openstack created.
For information on downloading and running the workflow over BUI, refer:
https://openstack.java.net/ZFSSACinderDriver.README
Configuring Cinder:
1. Copy the files from the URL below to the directory /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa:
"https://java.net/projects/solaris-userland/sources/gate/show/components/openstack/cinder/files/zfssa"
[root@control-node drivers]# mkdir /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa
Copy the files to this directory
[root@control-node ~]# cd /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa
[root@control-node zfssa]# ls
__init__.py restclient.py zfssaiscsi.py zfssarest.py
2. Configure the Cinder plugin by editing /etc/cinder/cinder.conf or by using the "openstack-config" command (which is section-aware and thus recommended):
Note: Please take a backup of /etc/cinder/cinder.conf before editing this file.
#openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver
cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_host <ZFS HOST IP>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_auth_user openstack
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_auth_password <PASSWORD>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_pool <default>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_target_portal <HOST IP>:3260
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_project test
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_initiator_group default
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_target_interfaces e1000g0
After executing above commands the /etc/cinder/cinder.conf will look something like below:
--snip from /etc/cinder/cinder.conf----
# Driver to use for volume creation (string value)
#volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
zfssa_host = <HOST IP>
zfssa_auth_user = openstack
zfssa_auth_password = <Password>
zfssa_pool = default
zfssa_target_portal = <HOST IP>:3260
zfssa_project = test
zfssa_initiator_group = default
zfssa_target_interfaces = e1000g0
----end-----
3. Restart the cinder-volume service:
#service openstack-cinder-volume restart
4. Check the log files /var/log/cinder/scheduler.log and /var/log/cinder/volume.log for any errors. If errors are found in the logs, fix them before continuing.
5. Install the iscsi-initiator-utils package on the control node and the compute nodes. This is important because the storage plugin uses iSCSI commands from this package:
# yum install -y iscsi-initiator-utils
The installation and configuration are very simple: we do not need to have a "project" on the ZFSSA, but we do need to define a pool.
Creating and Using Volumes in OpenStack:
Now we are ready to create a cinder volume.
[root@control-node drivers(keystone_admin)]# cinder create 2 --display-name my-vol-1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-08-12T04:24:37.806752 |
| display_description | None |
| display_name | my-vol-1 |
| encrypted | False |
| id | 768fbc56-1d27-46d8-a1e0-772bf23c7797 |
| metadata | {} |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
Extending the volume to 5G
[root@control-node drivers(keystone_admin)]# cinder extend 768fbc56-1d27-46d8-a1e0-
772bf23c7797 5
[root@control-node drivers(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Creating templates using Cinder Volumes
By default OpenStack supports ephemeral storage where an image is copied into the run area during
instance launch and deleted when the instance is terminated. With Cinder we can create persistent
storage and launch instances from a Cinder volume. Booting from a volume has several advantages; one of the main ones is speed. No matter how large the volume is, the launch operation is immediate: there is no copying of an image into a run area, an operation which can take a long time with ephemeral storage (depending on image size).
[root@control-node drivers(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------+-------------+------------------+------------+--------+
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | hvm | raw | bare | 6442450944 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | system | raw | bare | 3145728000 | active |
+--------------------------------------+--------+-------------+------------------+------------+--------+
[root@control-node drivers(keystone_admin)]# cinder create --image-id a58fea4c-e2f2-4d8e-8388-
80b96569781f --display-name system 5
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-29T11:05:42.824849 |
| display_description | None |
| display_name | system |
| encrypted | False |
| id | a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b |
| image_id | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@control-node drivers(keystone_admin)]# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
| a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b | downloading | system | 5 | None | false |
|
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
After the download is complete we will see that the volume status changed to “available” and that the
bootable state is “true”.
[root@control-node ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
| a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b | available | system | 5 | None | true | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
The new volume can be used to boot an instance from, or it can be used as a template.
Cinder can create a volume from another volume, and the ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model where users can spawn an instance from a "template" instantly, even if the template is very large.
Let's try replicating the bootable volume (with Oracle Linux 6.5 on it) to create one additional bootable volume:
[root@control-node ~(keystone_admin)]# cinder create 5 --source-volid a4b6f0ab-b897-4bd4-8ef4-
330e0eb2d92b --display-name system-bootable-1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-29T11:48:30.468041 |
| display_description | None |
| display_name | system-bootable-1 |
| encrypted | False |
| id | 9e75e706-c752-44c6-b767-6e835113964a |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@control-node ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------------+
| 3f4eabe4-d457-41b3-90ef-2e530bcf5be8 | available | system-bootable-1 | 5 | None | true |
|
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false |
|
| d934c458-202d-46c3-8c69-211fdd473686 | available | system-bootable | 5 | None | true |
|
| fddd3794-e5ff-485e-a0bf-5c69218ebc66 | available | my-test-vol | 2 | None | false |
|
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------
Note that the creation of the last volume was almost immediate: there is no download or copy on the host, as the ZFSSA takes care of the volume copy for us.
Now let's try to boot an instance using our bootable volume:
[root@control-node ~(keystone_admin)]# nova boot --boot-volume 9e75e706-c752-44c6-b767-
6e835113964a --flavor 4659934e-7b26-4e68-a649-2ccd9d47cc9e system-instance-1 --nic net-
id=87f96ebf-4847-4f72-98b5-41ddcf743486
+--------------------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | pT8WhCoEubew |
| config_drive | |
| created | 2014-10-29T11:56:32Z |
| flavor | m2.tiny (4659934e-7b26-4e68-a649-2ccd9d47cc9e) |
| hostId | |
| id | fc909461-fc79-4054-a354-fe492b82db49 |
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| metadata | {} |
| name | system-instance-1 |
| os-extended-volumes:volumes_attached | [{"id": "9e75e706-c752-44c6-b767-6e835113964a"}] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | ffdefd08b3c842a28967f646036253f8 |
| updated | 2014-10-29T11:56:32Z |
| user_id | 9df6d9a317ec4f55a839168e5568cda2 |
+--------------------------------------+--------------------------------------------------+
In the above command a custom-created flavor was used; the list of all available flavors can be retrieved with the command below:
[root@control-node ~(keystone_admin)]# nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------
+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor
| Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------
+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 4659934e-7b26-4e68-a649-2ccd9d47cc9e | m2.tiny | 512 | 20 | 5 | 512 | 1 | 1.0 |
True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
| 7e4202a7-0fa4-4d5c-b7e6-af29de71509d | m2.small | 1024 | 20 | 5 | 1024 | 1 | 1.0 |
True |
| 89db9029-8071-4b93-9305-e486dfedc40d | m3.small | 1024 | 8 | 2 | 512 | 1 | 1.0 |
True |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------
+-----------+
And the list of available networks can be viewed using:
[root@control-node ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------+-----------------------------------------------------+
| 87f96ebf-4847-4f72-98b5-41ddcf743486 | net1 | 67ead5de-6a2d-4bba-bd5b-cd1d670f18cb
10.10.10.0/24 |
| b603d4ec-da2b-4a5c-9d9b-9d36d8f3331f | net2 | d553c00d-ef4d-42ff-ab87-6d2413f462a0
20.20.20.0/24 |
| d9723f00-c322-4de0-b237-40b0c1210fea | public | 2a4a9731-a40a-493a-a04c-23cc5facb98f
192.168.1.0/24 |
+--------------------------------------+--------+-----------------------------------------------------+
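After the boot command returns, the instance status can be tracked with the standard Nova commands until it reaches ACTIVE:
#nova list
#nova show system-instance-1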
Once the volumes are created through the OpenStack Cinder plugin, the corresponding ZFS volumes (LUNs) can also be checked by logging in to the ZFSSA CLI:
OracleZFS:shares test> show
Properties:
aclinherit = restricted
aclmode = discard
atime = true
checksum = fletcher4
compression = off
dedup = false
compressratio = 100
copies = 1
creation = Wed Oct 01 2014 17:38:58 GMT+0000 (UTC)
logbias = latency
mountpoint = /export
quota = 0
readonly = false
recordsize = 128K
reservation = 0
rstchown = true
secondarycache = all
nbmand = false
sharesmb = off
sharenfs = on
snapdir = hidden
vscan = false
snaplabel =
sharedav = off
shareftp = off
sharesftp = off
sharetftp = off
pool = Default
canonical_name = Default/local/test
default_group = other
default_permissions = 700
default_sparse = false
default_user = nobody
default_volblocksize = 8K
default_volsize = 0
exported = true
nodestroy = false
maxblocksize = 1M
space_data = 11.9G
space_unused_res = 0
space_unused_res_shares = 0
space_snapshots = 0
space_available = 22.3G
space_total = 11.9G
origin =
Shares:
LUNs:
NAME VOLSIZE GUID
volume-768fbc56-1d27-46d8-a1e0-772bf23c7797 5G 600144F09C9F77890000542C3C940001
volume-484a117d-304c-4280-b127-688100fbdb98 2G 600144F09C9F77890000543D31330002
volume-a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b 5G 600144F09C9F778900005450CADC0004
volume-9e75e706-c752-44c6-b767-6e835113964a 5G 600144F09C9F778900005450D4E20005
Children:
groups => View per-group usage and manage group
quotas
replication => Manage remote replication
snapshots => Manage snapshots
users => View per-user usage and manage user quotas
NFS Based Cinder Volumes
Now I will show you how we can use NFS-based Cinder volumes.
To use NFS, use the standard NFS driver. This driver is not tied to a specific vendor and can be used with any NFS storage.
1) The first step is to configure Cinder to use NFS and tell it where the NFS shares are located.
Take a backup of /etc/cinder/cinder.conf.
Then edit this file; avoid using "vi" directly and instead run the commands below, since openstack-config is section-aware:
#openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver
cinder.volume.drivers.nfs.NfsDriver
#openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config
/etc/cinder/shares.conf
2) Add the NFS shares information to /etc/cinder/shares.conf
#echo <NFS_Server_IP>:/<nfs_exported_shares> > /etc/cinder/shares.conf
e.g: echo 192.168.104.120:/vmlocal/nfs_share > /etc/cinder/shares.conf
3) Restart the cinder service.
#service openstack-cinder-volume restart
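After the restart, the cinder-volume service mounts the shares listed in /etc/cinder/shares.conf. A quick sketch of how to verify this and create a test volume (assuming the driver's default mount base of /var/lib/cinder/mnt, which may differ in your configuration):
#mount | grep nfs
#cinder create --display-name nfs-vol-1 1
#cinder list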
NEUTRON
The Neutron service implements "Networking-as-a-Service" in the OpenStack project; it is used to create, configure, and manage software-defined networks.
Networking in OpenStack offers more powerful capabilities than the legacy Nova networking, but it is also more complex.
Neutron provides APIs to define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. It also provides APIs to configure and manage a variety of network services such as L3 forwarding, NAT, etc.
The Networking server uses the "neutron-server" daemon to expose the Networking APIs and enable
administration of the configured Networking plug-in.
A standard architectural design includes a cloud controller host, a network gateway host, and a number of hypervisors that run the nova-compute service for hosting virtual machines. The cloud controller and network gateway can be on the same host. However, when the VMs are expected to send or receive significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
Fig. 8.1: Neutron Flow
A deployment typically uses the following networks:
• Management network: Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
• Data network: Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Networking plug-in that is used.
• External network: Provides VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach IP addresses on this network.
• API network: Exposes all OpenStack APIs to tenants. The API network might be the same as the external network, because it is possible to create an external-network subnet with allocated IP ranges that use less than the full range of IP addresses in an IP block.
Plug-in architecture:
Networking introduces support for vendor plug-ins, which offer a custom back-end implementation of
the Networking API. A plug-in can use a variety of technologies to implement the logical API requests.
Some Networking plug-ins might use basic Linux VLANs and IP tables while others might use more
advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. For example, Networking plug-ins are available for ML2 (covering the Open vSwitch and Linux Bridge plug-ins), Cisco, VMware NSX, etc.
Network Plug-ins typically have requirements for particular software that must be run on each node
that handles data packets. This includes any node that runs "nova-compute" and nodes that run
dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent or
neutron-metering-agent. Depending on the configuration, Networking can also include the following
agents:
• Plug-in agent (neutron-*-agent): Runs on each hypervisor to perform local vSwitch configuration. The agent that runs depends on the plug-in that you use. Certain plug-ins do not require an agent.
• DHCP agent (neutron-dhcp-agent): Provides DHCP services to tenant networks. Required by certain plug-ins.
• L3 agent (neutron-l3-agent): Provides L3/NAT forwarding to give external network access to VMs on tenant networks. Required by certain plug-ins.
• Metering agent (neutron-metering-agent): Provides L3 traffic metering for tenant networks.
These agents interact with the neutron server process through RPC (for example, RabbitMQ or Qpid)
or through the standard Networking API.
In addition, Networking integrates with OpenStack components in a number of ways:
• Networking relies on the Identity service (keystone) for the authentication and authorization
of all API requests.
• Compute (nova) interacts with Networking through calls to its standard API. As part of
creating a VM, the nova-compute service communicates with the Networking API to plug
each virtual NIC on the VM into a particular network.
• The dashboard (horizon) integrates with the Networking API, enabling administrators and
tenant users to create and manage network services through a web-based GUI.
Services related to OpenStack Neutron:
Common Neutron services that run on the network node:
1) neutron-server: Provides APIs to request and configure virtual networks
2) neutron-openvswitch-agent: Supports virtual networks using Open vSwitch
3) neutron-metadata-agent: Provides metadata services to instances
4) neutron-l3-agent: OpenStack Neutron Layer 3 agent
5) neutron-dhcp-agent: OpenStack Neutron DHCP agent
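On a running network node, the state of these services can be checked with the tools already used in this guide (a quick sketch):
#openstack-status
#for svc in neutron-server neutron-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent; do service $svc status; done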
Open vSwitch
Open vSwitch is an open source, OpenFlow-capable virtual switch that is typically used with hypervisors to interconnect VMs within a host, or VMs on different hosts across the network.
Below are some of the features of Open vSwitch:
• VLAN tagging and 802.1q trunking.
• Standard spanning tree protocol.
• LACP
• Port Mirroring (SPAN and RSPAN)
• Tunnelling (GRE, VXLAN and IPSEC)
Why Open vSwitch?
When it comes to virtualization, Open vSwitch is attractive because it provides the ability for a single controller to manage your virtual network across all your servers. It is also very useful in allowing live migration of virtual machines while maintaining network state such as firewall rules, addresses and open network connections.
https://github.com/openvswitch/ovs/blob/master/WHY-OVS.md
Fig. 8.2: Open vSwitch Components
Open vSwitch Tools
ovs-vsctl:
• ovs-vsctl show
• ovs-vsctl list-br
• ovs-vsctl list-ports <bridge>
ovs-ofctl:
• ovs-ofctl show <bridge>
• ovs-ofctl dump-ports <bridge>
• ovs-ofctl dump-flows <bridge>
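As a quick illustration of these tools, a throwaway bridge can be created, inspected and removed. This is only a sandbox sketch; the port name tap0 is hypothetical and only attaches cleanly if such an interface exists on the host:
#ovs-vsctl add-br br-test
#ovs-vsctl list-br
#ovs-vsctl add-port br-test tap0
#ovs-vsctl list-ports br-test
#ovs-ofctl show br-test
#ovs-vsctl del-br br-test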
Network Namespaces
• Namespaces enable multiple instances of a routing table to co-exist within the same Linux box.
• Network namespaces are isolated containers which can hold a network configuration and are not seen from outside of the namespace.
• Network namespaces make it possible to separate network domains (network interfaces, routing tables, iptables) into completely separate and independent domains.
• A namespace can be created or deleted with the commands below.
# ip netns add <my-ns>
To list namespaces:
# ip netns list
To delete a namespace:
# ip netns delete <my-ns>
A namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example:
# ip netns exec my-ns ifconfig
# ip netns exec my-ns ip route
# ip netns exec my-ns ping <IP>
OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces, and those interfaces can then be added to a namespace, as in the sketch below.
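A minimal sketch of moving an interface into a namespace by hand (the names test-ns, veth0 and veth1 are arbitrary examples, and 192.0.2.10/24 is a documentation address):
# ip netns add test-ns
# ip link add veth0 type veth peer name veth1
# ip link set veth1 netns test-ns
# ip netns exec test-ns ip addr add 192.0.2.10/24 dev veth1
# ip netns exec test-ns ip link set veth1 up
# ip netns exec test-ns ip addr show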
Advantages of Network Namespaces
Overlapping IPs: A big advantage of the namespace implementation in Neutron is that tenants can create overlapping IP addresses. Linux network namespaces are required on nodes running neutron-l3-agent or neutron-dhcp-agent if overlapping IPs are in use; hence, the hosts running these processes must support network namespaces.
Fig. 8.3 : A look inside Network Namespace
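On a node running the DHCP and L3 agents, Neutron creates a qdhcp-<network-id> namespace per network and a qrouter-<router-id> namespace per router; these can be inspected the same way (the IDs shown are placeholders):
# ip netns list
# ip netns exec qrouter-<router-id> ip addr
# ip netns exec qdhcp-<network-id> ip route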
Installing OpenStack Networking:
1) #source /root/keystonerc_admin
#keystone service-create --name neutron --type network --description 'OpenStack Networking Service'
2) Create an endpoint in keystone using the service id from the previous output:
#keystone endpoint-create --service-id <enter service id from previous output> --publicurl http://<IP or hostname>:9696 --adminurl http://<IP or hostname>:9696 --internalurl http://<IP or hostname>:9696
#keystone catalog
3) Create the OpenStack Networking service user named neutron using your own password:
#keystone user-create --name neutron --pass <your password>
4) Link the neutron user and the admin role within the services tenant:
#keystone user-role-add --user neutron --role admin --tenant services
5) Verify the user role:
#keystone --os-username neutron --os-password <your neutron password> --os-tenant-name services user-role-list
6) Install the OpenStack Networking service and the Open vSwitch plugin on the server.
#yum -y install openstack-neutron openstack-neutron-openvswitch
7) The OpenStack Networking service must be connected to the AMQP broker you are using (either Qpid or RabbitMQ), so ensure that one of them is running:
#service qpidd status
8) Update the OpenStack Networking service to use the keystone information.
#cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
#openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname <IP>
9) Configure OpenStack Networking to use Qpid and keystone using the values configured previously.
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username qpidauth
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password <your qpid password>
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_protocol ssl
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5671
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password <your password>
#openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
10) Create a /root/keystonerc_neutron file for the user neutron:
export OS_USERNAME=neutron
export OS_TENANT_NAME=services
export OS_PASSWORD=<your password>
export OS_AUTH_URL=http://<IP>:35357/v2.0/
export PS1='[\u@\h \W(keystone_neutron)]\$ '
11) Now source the keystonerc_neutron file:
#source /root/keystonerc_neutron
12) The OpenStack Networking setup scripts will make several changes to the /etc/nova/nova.conf file as well. However, we will not cover Nova until the next chapter, where we will focus on Nova (compute services) in more detail.
#yum install -y openstack-nova-compute
13) Run the OpenStack Networking setup script using the openvswitch plug-in.
#neutron-server-setup --yes --rootpw <your password> --plugin openvswitch
Neutron plugin: openvswitch
Plugin: openvswitch => Database: ovs_neutron
Verified connectivity to MySQL.
Configuration updates complete!
#neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
14) Start and enable the neutron-server service after checking for errors.
#service neutron-server start
#egrep 'ERROR|CRITICAL' /var/log/neutron/server.log
#chkconfig neutron-server on
#openstack-status
15) Configure Open vSwitch and start the necessary services.
#neutron-node-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Would you like to update the nova configuration files? (y/n): y
Configuration updates complete!
#service openvswitch start
# egrep 'ERROR|CRITICAL' /var/log/openvswitch/*
#chkconfig openvswitch on
16) Create an Open vSwitch bridge named br-int. This is the integration bridge that will be used as a patch panel to assign interface(s) to an instance.
#ovs-vsctl add-br br-int
#ovs-vsctl show
17) Configure the Open vSwitch plugin to use br-int as the integration bridge. Start and enable the
neutron-openvswitch-agent service.
#cp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.backup
#openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS integration_bridge br-int
#service neutron-openvswitch-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/openvswitch-agent.log
#chkconfig neutron-openvswitch-agent on
18) Enable the neutron-ovs-cleanup service. When started at boot time, this service ensures that
the OpenStack networking agents maintain full control over the creation and management of
tap devices.
#chkconfig neutron-ovs-cleanup on
19) Configure and enable the OpenStack networking DHCP agent (neutron-dhcp-agent) on your
server.
#neutron-dhcp-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Configuration updates complete!
#service neutron-dhcp-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/dhcp-agent.log
#chkconfig neutron-dhcp-agent on
20) Create the br-ex bridge that will be used for external network traffic in Open vSwitch.
#ovs-vsctl add-br br-ex
21) Before attaching eth0 to br-ex bridge, configure the br-ex network device configuration file.
#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/
#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
Note: If you are using HWADDR in the config files, make sure you set it appropriately; otherwise comment it out.
Below is the sample file:
DEVICE=br-ex
IPADDR=<your IP Address>
PREFIX=24
GATEWAY= <your gateway IP>
DNS1= <your DNS server IP> <<< if you don't have DNS then can comment out this line
SEARCH1=<domain name>
ONBOOT=yes
22) Once you have verified all the configuration files then you are ready to add eth0 to br-ex.
#ovs-vsctl add-port br-ex eth0 ; service network restart
#ovs-vsctl show
Note: If your network stops working, you can delete the port using the command below and revert the changes made to the network config files to bring your network back up.
#ovs-vsctl del-port br-ex eth0
#cp /etc/sysconfig/network-scripts/ifcfg-br-ex /etc/sysconfig/network-scripts/ifcfg-eth0
#service network restart
23) Run the neutron-l3-setup script to configure the OpenStack Networking L3 agent (neutron-l3-agent).
#neutron-l3-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Configuration updates complete!
24) Start and enable the neutron-l3-agent service.
#service neutron-l3-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/l3-agent.log
#chkconfig neutron-l3-agent on
25) The OpenStack Networking service is now running, and you can verify it using the command below:
#openstack-status
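Once the agents are up, their registration with the neutron server can also be verified (run as the admin user):
#source /root/keystonerc_admin
#neutron agent-list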
A few helpful commands
1) How to list the existing networks:
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+----------------------------------------------------+
| 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851
20.20.20.0/24 |
| 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69
10.10.10.0/24 |
+--------------------------------------+------+----------------------------------------------------+
2) How to create a new network:
[root@localhost ~(keystone_admin)]# neutron net-create private
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 |
| name | private |
| provider:network_type | local |
| provider:physical_network | |
| provider:segmentation_id | |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | fa55991d5f4449139db2d5de410b0c81 |
+---------------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 | private | |
| 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851
20.20.20.0/24 |
| 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69
10.10.10.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
3) How to create a new subnet and assign to the new network created as above:
[root@localhost ~(keystone_admin)]# neutron subnet-create --name subprivate private
172.24.1.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "172.24.1.2", "end": "172.24.1.254"} |
| cidr | 172.24.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 172.24.1.1 |
| host_routes | |
| id | 7ed1a971-88bb-4b1b-bd50-5c295f3510bd |
| ip_version | 4 |
| name | subprivate |
| network_id | 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 |
| tenant_id | fa55991d5f4449139db2d5de410b0c81 |
+------------------+------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 | private | 7ed1a971-88bb-4b1b-bd50-5c295f3510bd
172.24.1.0/24 |
| 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851
20.20.20.0/24 |
| 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69
10.10.10.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
4) How to create a router:
[root@localhost ~(keystone_admin)]# neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | c8f59b15-4ae6-4594-9eec-e85374189e05 |
| name | router |
| status | ACTIVE |
| tenant_id | fa55991d5f4449139db2d5de410b0c81 |
+-----------------------+--------------------------------------+
5) How to list existing routers:
[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+--------+-----------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------+-----------------------+
| c8f59b15-4ae6-4594-9eec-e85374189e05 | router | null |
+--------------------------------------+--------+-----------------------+
6) How to add an interface to the router:
[root@localhost ~(keystone_admin)]# neutron router-interface-add router subprivate
Added interface 84a1fad1-0748-45f5-829f-076d45345e08 to router router.
7) How to list ports:
[root@localhost ~(keystone_admin)]# neutron port-list
+--------------------------------------+------+-------------------
+-----------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips
|
+--------------------------------------+------+-------------------
+-----------------------------------------------------------------------------------+
| 1281f263-e251-4714-bf8b-2dc15f26b923 | | fa:16:3e:69:e3:c7 | {"subnet_id": "4f32c145-c9b5-
40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.2"} |
| 57fd9afd-fb53-4d4c-826e-fd8168e0aba8 | | fa:16:3e:97:64:ed | {"subnet_id": "4f32c145-c9b5-
40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.4"} |
| 7845c4c9-f022-4027-8c02-359854198c11 | | fa:16:3e:ae:90:2d | {"subnet_id": "58bdc8ed-
970b-44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.2"} |
| 7e8dd705-6ab9-4d22-b8c6-1b5b165e459b | | fa:16:3e:87:f9:94 | {"subnet_id": "58bdc8ed-
970b-44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.3"} |
| 84a1fad1-0748-45f5-829f-076d45345e08 | | fa:16:3e:1b:51:c2 | {"subnet_id": "7ed1a971-
88bb-4b1b-bd50-5c295f3510bd", "ip_address": "172.24.1.1"} |
+--------------------------------------+------+-------------------
+-----------------------------------------------------------------------------------+
8) How to create a public network and mark it as an external network:
[root@localhost ~(keystone_admin)]# neutron net-create --tenant-id services public
--router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 |
| name | public |
| provider:network_type | local |
| provider:physical_network | |
| provider:segmentation_id | |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | services |
+---------------------------+--------------------------------------+
9) How to allocate a floating IP range on the external network:
[root@localhost ~(keystone_admin)]# neutron subnet-create --tenant-id services --allocation-pool
start=10.176.246.1,end=10.176.246.10 --gateway 10.176.246.254 --disable-dhcp --name subpub
public 10.176.246.0/24
Created a new subnet:
+------------------+---------------------------------------------------+
| Field | Value |
+------------------+---------------------------------------------------+
| allocation_pools | {"start": "10.176.246.1", "end": "10.176.246.10"} |
| cidr | 10.176.246.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 10.176.246.254 |
| host_routes | |
| id | 43db4465-0da9-4f44-8942-5f5a5b4ba506 |
| ip_version | 4 |
| name | subpub |
| network_id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 |
| tenant_id | services |
+------------------+---------------------------------------------------+
10) How to set a gateway on the router:
[root@localhost ~(keystone_admin)]# neutron router-gateway-set router public
Set gateway for router router
[root@localhost ~(keystone_admin)]# neutron port-list
+--------------------------------------+------+-------------------
+-------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips
|
+--------------------------------------+------+-------------------
+-------------------------------------------------------------------------------------+
| 1281f263-e251-4714-bf8b-2dc15f26b923 | | fa:16:3e:69:e3:c7 | {"subnet_id": "4f32c145-c9b5-
40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.2"} |
| 4ba58e4e-c90f-4025-be3b-7d0385e451de | | fa:16:3e:08:6e:36 | {"subnet_id": "43db4465-
0da9-4f44-8942-5f5a5b4ba506", "ip_address": "10.176.246.1"} |
| 57fd9afd-fb53-4d4c-826e-fd8168e0aba8 | | fa:16:3e:97:64:ed | {"subnet_id": "4f32c145-c9b5-
40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.4"} |
| 7845c4c9-f022-4027-8c02-359854198c11 | | fa:16:3e:ae:90:2d | {"subnet_id": "58bdc8ed-970b-
44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.2"} |
| 7e8dd705-6ab9-4d22-b8c6-1b5b165e459b | | fa:16:3e:87:f9:94 | {"subnet_id": "58bdc8ed-970b-
44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.3"} |
| 84a1fad1-0748-45f5-829f-076d45345e08 | | fa:16:3e:1b:51:c2 | {"subnet_id": "7ed1a971-88bb-
4b1b-bd50-5c295f3510bd", "ip_address": "172.24.1.1"} |
+--------------------------------------+------+-------------------
+-------------------------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+--------
+-----------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------
+-----------------------------------------------------------------------------+
| c8f59b15-4ae6-4594-9eec-e85374189e05 | router | {"network_id": "01aa3874-ea24-4ba9-b02d-
e80acfab3173", "enable_snat": true} |
+--------------------------------------+--------
+-----------------------------------------------------------------------------+
11) How to create a floating IP:
[root@localhost ~(keystone_admin)]# neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.176.246.2 |
| floating_network_id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 |
| id | 578a5bfe-1794-4519-b32e-8a85c5827a41 |
| port_id | |
| router_id | |
| status | ACTIVE |
| tenant_id | fa55991d5f4449139db2d5de410b0c81 |
+---------------------+--------------------------------------+
12) How to list floating IPs:
[root@localhost ~(keystone_admin)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 578a5bfe-1794-4519-b32e-8a85c5827a41 | | 10.176.246.2 | |
+--------------------------------------+------------------+---------------------+---------+
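A floating IP only becomes reachable once it is associated with an instance's Neutron port. A minimal sketch, using the floating IP created above and a placeholder port ID (the port would be the one attached to the instance, as shown by neutron port-list):
#neutron floatingip-associate 578a5bfe-1794-4519-b32e-8a85c5827a41 <port_id>
#neutron floatingip-list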
NOTE: All of the above operations can also be performed from the Horizon dashboard (GUI), where they are quick and simple to do.
Flexible computeFlexible compute
Flexible compute
 
Build cloud native solution using open source
Build cloud native solution using open source Build cloud native solution using open source
Build cloud native solution using open source
 
Opensource tools for OpenStack IAAS
Opensource tools for OpenStack IAASOpensource tools for OpenStack IAAS
Opensource tools for OpenStack IAAS
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
Workshop - Openstack, Cloud Computing, Virtualization
Workshop - Openstack, Cloud Computing, VirtualizationWorkshop - Openstack, Cloud Computing, Virtualization
Workshop - Openstack, Cloud Computing, Virtualization
 

OpenStack Installation and Administration Guide

Key OpenStack Services

The OpenStack services communicate with each other and are responsible for the various functions expected from virtualization/cloud management software. The following are some of the key services of OpenStack:

1) Nova: A compute service responsible for creating instances and managing their lifecycle, as well as managing the hypervisor of choice. The hypervisors are pluggable to Nova, while the Nova API remains the same, regardless of the underlying hypervisor.
2) Neutron: A network service responsible for creating network connectivity and network services. It is capable of connecting with vendor network hardware through plug-ins. Neutron comes with a set of default services implemented by common tools. Network vendors can create plug-ins to replace any one of the services with their own implementation, adding value to their users.
3) Cinder: A storage service responsible for creating and managing external storage, including block devices and NFS. It is capable of connecting to vendor storage hardware through plug-ins. Cinder has several generic plug-ins, which can connect to NFS and iSCSI, for example. Vendors add value by creating dedicated plug-ins for their storage devices.
4) Keystone: An identity management system responsible for user and service authentication. Keystone is capable of integrating with third-party directory services and LDAP.
5) Glance: An image service responsible for managing images uploaded by users. Glance is not a storage service, but it is responsible for saving image attributes, making a virtual catalog of the images.
6) Horizon: A dashboard that provides a GUI for users to control the OpenStack deployment. This is an extensible framework that allows vendors to add features to it. Horizon uses the same APIs that are exposed to users.
7) Swift: Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant thanks to its data replication and scale-out architecture. Its implementation is not like a file server with mountable directories.
8) Heat: Orchestrates multiple composite cloud applications by using either the native HOT template format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
9) Ceilometer: Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

Fig 1.4: Openstack Services
Installation of Openstack

Considerations to make before installing OpenStack on Linux (Red Hat or Oracle Linux).

Hardware Requirements

Control Node Requirements:
• Processor: 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the Intel VT or AMD-V hardware virtualization extension enabled
• Memory: 2 GB RAM
• Disk Space: 50 GB. Add additional disk space to this requirement based on the amount of space that you intend to make available to virtual machine instances. This size varies based on both the size of each disk image you intend to create and whether you intend to share one or more disk images between multiple instances. 1 TB of disk space is recommended for a realistic environment capable of hosting multiple instances of varying sizes.
• Network: 2x 1 Gbps network interface cards (NICs) for an all-in-one setup, or 4x 1 Gbps NICs for a multi-node setup

Compute Node Requirements:
• Processor: 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the Intel VT or AMD-V hardware virtualization extension enabled
• Memory: 2 GB RAM minimum. For the compute node, 2 GB RAM is the minimum necessary to deploy one m1.small instance or three m1.tiny instances on a node without memory swapping, so size this requirement based on the amount of memory that you intend to make available to virtual machine instances.
• Disk Space: 50 GB. Add additional disk space to this requirement based on the amount of space that you intend to make available to virtual machine instances. This size varies based on both the size of each disk image you intend to create and whether you intend to share one or more disk images between multiple instances. 1 TB of disk space is recommended for a realistic environment capable of hosting multiple instances of varying sizes.
• Network: 2x 1 Gbps network interface cards (NICs) for an all-in-one setup, or 3x 1 Gbps NICs for a multi-node setup

Software Requirements

To deploy OpenStack on Linux systems (either Oracle Linux or Red Hat Linux), you need at least two machines with Red Hat Enterprise Linux 64-bit or Oracle Linux 64-bit, version 6.5 or newer. If you are using Oracle Linux 64-bit, version 6.5 or higher, you can also choose to go with the Unbreakable Enterprise Kernel (UEK), version 3, commonly known as UEK3. One machine can act as a dedicated cloud controller node and the second machine can act as a Nova compute node. For a production environment a multi-node setup is recommended, but for testing and training purposes we can make use of an all-in-one environment. For the compute node, one can also choose the Xen hypervisor; make sure you go with the latest version. To try it out, one can make use of Oracle VM Server, version 3.3.1, which is Xen based and uses the UEK3 kernel.

Note: Make sure that your machines have their clocks synced via Network Time Protocol (NTP).

Deployment Options

OpenStack supports various flexible deployment models, where each service can be deployed separately on a different node or where services can be installed together with other services. A user can set up any number of compute and control nodes to test the OpenStack environment. OpenStack supports the following deployment models:

1) All-in-one node: A complete installation of all the OpenStack services on an Oracle/Red Hat Linux node. This deployment model is commonly used to get started with OpenStack or for development purposes. In this model, the user has fewer options to configure, and the deployment does not require more than one node. This deployment model is not supported for production use.
2) One control node and one or more compute nodes: This is a common deployment across multiple servers. In this case, all the control services are installed on Oracle/Red Hat Linux, while separate compute nodes are set up to run Oracle VM Server or Oracle/Red Hat Linux for the sole purpose of running virtual machines.
3) One control node, one network node, and one or more compute nodes: Another common deployment configuration is one where the network node is required to be separate from the rest of the services. This can be due to compliance or performance requirements. In this scenario, the network node is installed on Oracle/Red Hat Linux, and the rest of the management services are installed on a separate controller node. Compute nodes can be installed as required, as in all other cases.
4) Multiple control nodes with different services and one or more compute nodes: As mentioned, OpenStack is very flexible, and there is no technical limitation that stops users from experimenting with more sophisticated deployment models. However, using one of the supported configurations reduces complexity.

To get started, we recommend using either the all-in-one model or the model with one control node and one or more compute nodes.

Installing Openstack with packstack

This installation is called an all-in-one installation because we will be deploying the control and compute roles of OpenStack on one server and will be configuring all services on that one server itself. Remember that this is only for testing and training purposes and is not supported for a production setup. As said earlier, we need a server installed with Red Hat or Oracle Linux version 6.5 or higher.

For an all-in-one deployment, two physical network interface cards are required. The first network interface card must be configured with an IP address for managing the server and accessing the API and dashboard. The second card is used to allow instances to access the public network and will not have an IP address configured. If there are no plans to allow instances external connectivity, there is no need for the second network interface card:

Ethernet port | IP address | Purpose
eth0          | yes        | Connected to the management or public network to allow access to the OpenStack API
eth1          | no         | Connected to the public network and used by OpenStack to connect instances to the public network
1) The openstack-packstack package includes the packstack utility to quickly deploy OpenStack either interactively, or non-interactively by creating and using an answer file that can be tuned. Install the packstack package on your server using yum.
# yum install -y openstack-packstack
2) Before we start the OpenStack deployment via packstack, SSH keys are generated for easy access to the Nova compute nodes from the cloud controller node. We will not include a passphrase because the installation would require this passphrase hundreds of times during the process.
# ssh-keygen
3) Explore some of the options of the packstack command.
# packstack -h | less
4) The recommended way to do an installation is non-interactive, because this way the installation settings are documented. An answer file with default settings can be generated with the packstack command.
# packstack --gen-answer-file /root/answer.txt
5) Before we start the actual installation, edit the /root/answer.txt file and ensure the following items are configured:
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_NTP_SERVERS=<IP address of your NTP server>
CONFIG_SWIFT_INSTALL=y
CONFIG_HORIZON_SSL=y
6) Now we are ready to start the actual deployment of the OpenStack cloud controller.
# packstack --answer-file /root/answer.txt
Welcome to Installer setup utility
Installing:
Clean up ... [Done]
Setting up ssh keys...root@<your server IP>'s password: ashish
...
7) Verify that the services are running:
# for i in /etc/init.d/open* /etc/init.d/neutron* ; do $i status; done
Fig 2.1: All-In-One Openstack Setup
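Once packstack completes, it is worth confirming that the core services registered themselves correctly before moving on. The following is a minimal sanity check, assuming packstack wrote the admin credentials to /root/keystonerc_admin (its default location) and that the openstack-utils package is installed:

# source /root/keystonerc_admin
# openstack-status          <- summarizes the state of every OpenStack service on this host
# keystone service-list     <- every installed service should appear in the catalog
# keystone endpoint-list    <- each service should have public, admin and internal URLs
# nova service-list         <- nova-compute and nova-scheduler should report state "up"

If any service is missing from the catalog or reports as down, re-check the corresponding section of the answer file; packstack can generally be re-run against the same host with the same --answer-file.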
Message Broker

AMQP (Advanced Message Queuing Protocol) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing, reliability and security.

Messaging Server: OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports the following message brokers:
• RabbitMQ
• Qpid
• ZeroMQ
Until the Havana release Qpid was used as the default message broker, but from Icehouse onwards RabbitMQ has become the default message broker.

Qpid Message Broker: OpenStack services use the Qpid messaging system to communicate. There are two ways to secure the Qpid communication:
1) Requiring a username and password to communicate with other OpenStack services.
2) Using SSL to encrypt communication, which helps to prevent snooping and the injection of rogue commands into the communication channels.

Qpid Installation:
1) Install the required packages:
# yum install -y qpid-cpp-server qpid-cpp-server-ssl cyrus-sasl-md5
2) Create a new SASL user and password for use with Qpid (qpidauth:ashish). Note that SASL uses the QPID realm by default.
# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID qpidauth
Password: ashish
Again: ashish
3) Verify the user:
# sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
qpidauth@QPID: userPassword
4) Provide authorization for the qpidauth user.
# echo 'acl allow qpidauth@QPID all all' > /etc/qpid/qpidauth.acl
# echo "QPIDD_OPTIONS='--acl-file /etc/qpid/qpidauth.acl'" >> /etc/sysconfig/qpidd
# chown qpidd /etc/qpid/qpidauth.acl
# chmod 600 /etc/qpid/qpidauth.acl
5) Disable anonymous connections in /etc/qpidd.conf (remove ANONYMOUS). The /etc/qpidd.conf file should contain:
cluster-mechanism=DIGEST-MD5
auth=yes
6) Now that the username and password have been configured, let's work on SSL.
# mkdir /etc/pki/tls/qpid
# chmod 700 /etc/pki/tls/qpid/
# chown qpidd /etc/pki/tls/qpid/
7) Create a password file for the certificate.
# echo ashish > /etc/qpid/qpid.pass
# chmod 600 /etc/qpid/qpid.pass
# chown qpidd /etc/qpid/qpid.pass
8) Generate the certificate database and make sure that you enter the correct HOSTNAME.
# echo $HOSTNAME
# certutil -N -d /etc/pki/tls/qpid/ -f /etc/qpid/qpid.pass
# certutil -S -d /etc/pki/tls/qpid/ -n $HOSTNAME -s "CN=$HOSTNAME" -t "CT,," -x -f /etc/qpid/qpid.pass -z /usr/bin/certutil
Generating key. This may take a few moments...
9) Make sure that the certificate directory is readable by the qpidd user:
# chown -R qpidd /etc/pki/tls/qpid/
10) Add the following lines to /etc/qpidd.conf:
ssl-cert-db=/etc/pki/tls/qpid/
ssl-cert-name=<your server hostname>
ssl-cert-password-file=/etc/qpid/qpid.pass
require-encryption=yes
11) Start the qpid service, check for errors and make sure it is persistent.
# service qpidd start
# tail /var/log/messages
# chkconfig qpidd on
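Before pointing the OpenStack services at this broker, it helps to confirm that qpidd is actually listening and that the SSL listener came up. A minimal check, assuming the usual defaults of port 5672 for plain AMQP and 5671 for SSL:

# netstat -tlnp | grep qpidd        <- expect listeners on 5672 (plain) and 5671 (SSL)
# openssl s_client -connect $HOSTNAME:5671 < /dev/null    <- should print the self-signed certificate generated above
# tail -n 20 /var/log/messages      <- watch for SASL or ACL errors logged by qpidd

If the SSL handshake fails, re-check that ssl-cert-name in /etc/qpidd.conf exactly matches the certificate nickname created with certutil.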
Manual RabbitMQ Setup for OpenStack Platform

If you are deploying a full OpenStack cloud service, you will need to set up a working message broker for the following OpenStack components:
• Block Storage
• Compute
• OpenStack Networking
• Orchestration
• Image Service
• Telemetry
From the Icehouse release the default message broker is RabbitMQ.

Migration Prerequisites

If you are migrating to RabbitMQ from Qpid, you will first have to shut down the OpenStack services along with Qpid:
# openstack-service stop
# service qpidd stop
Prevent Qpid from starting at boot:
# chkconfig qpidd off

CONFIGURE THE FIREWALL FOR MESSAGE BROKER TRAFFIC

Before installing and configuring the message broker, you must allow incoming connections on the port it will use. The default port for message broker (AMQP) traffic is 5672.
To allow this, the firewall must be altered to allow network traffic on the required port. All steps must be run while logged in to the server as the root user.

Configuring the firewall for message broker traffic:
1) Open the /etc/sysconfig/iptables file in a text editor.
2) Add an INPUT rule allowing incoming connections on port 5672 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
3) Save the changes to the /etc/sysconfig/iptables file.
4) Restart the iptables service for the firewall changes to take effect.
# service iptables restart
The firewall is now configured to allow incoming connections to the message broker on port 5672.

INSTALL AND CONFIGURE THE RABBITMQ MESSAGE BROKER

RabbitMQ replaces Qpid as the default (and recommended) message broker. The RabbitMQ messaging service is provided by the rabbitmq-server package. To install RabbitMQ, run:
# yum install rabbitmq-server

Important: When installing the rabbitmq-server package, a guest user with a default guest password is automatically created for the RabbitMQ service. We strongly advise that you change this default password, especially if you have IPv6 available. With IPv6, RabbitMQ may be accessible from outside the network. You should be able to change the guest password after launching the rabbitmq-server service.

Manually Create RabbitMQ Configuration Files

When manually installing the RabbitMQ packages, the required RabbitMQ configuration files are not created. This is a known issue, and will be addressed in an upcoming update. To work around this, manually create the two required RabbitMQ configuration files.
These files, along with their required default contents, are as follows:

/etc/rabbitmq/rabbitmq.config

% This file managed by Puppet
% Template Path: rabbitmq/templates/rabbitmq.config
[
  {rabbit, [
    {default_user, <<"guest">>},
    {default_pass, <<"guest">>}
  ]},
  {kernel, [
  ]}
].
% EOF

/etc/rabbitmq/rabbitmq-env.conf

RABBITMQ_NODE_PORT=5672

LAUNCH THE RABBITMQ MESSAGE BROKER

After installing the RabbitMQ message broker and configuring the firewall to accept message broker traffic, launch the rabbitmq-server service and configure it to launch on boot:
# service rabbitmq-server start
# chkconfig rabbitmq-server on
To change the default guest password of RabbitMQ:
# rabbitmqctl change_password guest NEW_RABBITMQ_PASS
Replace NEW_RABBITMQ_PASS with a more secure password.
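Rather than having every OpenStack service authenticate as guest, you may prefer a dedicated broker account. The commands below are only a sketch; the user name "openstack" and its password are arbitrary examples, and each service would then need its rabbit_userid/rabbit_password (or equivalent) configuration options pointed at this account:

# rabbitmqctl add_user openstack <your_password>
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"    <- full configure/write/read rights on the default vhost
# rabbitmqctl list_users                                  <- verify the new account exists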
KEYSTONE

Keystone Identity Service: Keystone is the project which provides identity, token, catalog, and policy services for use with OpenStack. Keystone provides token- and password-based authentication (authN) and high-level authorization (authZ), and a central directory of users mapped to the services they can access.

Introduction to the Keystone service:
• Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for use specifically by projects in the OpenStack family. It implements OpenStack's Identity API.
• The OpenStack Identity API is implemented using a RESTful web service interface. All requests to authenticate and operate against the OpenStack Identity API should be performed using SSL over HTTP (HTTPS) on TCP port 443.
• Keystone is organized as a group of internal services exposed on one or many endpoints. Many of these services are used in a combined fashion by the frontend; for example, an authenticate call will validate user/project credentials with the Identity service and, upon success, create and return a token with the Token service.

Add services to the Keystone service catalog and register their endpoints

The keystone service-create command needs three options to register a service:
# keystone service-create --name=SERVICENAME --type=SERVICETYPE --description="DESCRIPTION OF SERVICE"
While --name and --description can be user-selected strings, the argument passed with the --type switch must be one of identity, compute, network, image, or object-store.

After a service is registered in the service catalog, the endpoint of the service can be defined:
# keystone endpoint-create --service-id SERVICEID --publicurl 'URL' --adminurl 'URL' --internalurl 'URL'
The --service-id can be obtained from the output of the service-create command shown previously, or by getting the information from the Keystone service catalog with the command keystone service-list.

Remove services and endpoints

Of course, it is also possible to remove service endpoints and services from the Keystone catalog. To delete an endpoint, figure out its ID with:
# keystone endpoint-list
Next, delete the endpoint of choice with:
# keystone endpoint-delete ENDPOINTID
Deleting a service is quite similar:
# keystone service-list
# keystone service-delete SERVICEID

OpenStack configuration files use the INI format, where there can be separate sections. Each section uses a name enclosed in square brackets ([]). All OpenStack configuration files include a DEFAULT section; some OpenStack configuration files include other sections. The OpenStack configuration files can be managed with the openstack-config command. The example that follows shows the options and arguments used with openstack-config:
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token abcdefgh12345
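openstack-config can also read values back and remove them again, which is handy when checking what packstack or a configuration tool has written. A short sketch, assuming the openstack-utils package that ships openstack-config also provides the --get and --del actions (as recent versions do):

# openstack-config --get /etc/keystone/keystone.conf DEFAULT admin_token    <- prints the current value
# openstack-config --del /etc/keystone/keystone.conf DEFAULT admin_token    <- removes the option entirely
Each invocation takes the file, the section name and the option name, so the same pattern works for any of the other OpenStack configuration files used later in this course.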
Deploying the Keystone identity service

We are going to install Keystone without the packstack command.

1) Install the required packages.
# yum install openstack-keystone openstack-selinux
2) Next, it is time to get the database backend installed. The openstack-db command will take care of installing and initializing MySQL for the Keystone service.
# yum install -y openstack-utils
# openstack-db --init --service keystone
3) Set up the PKI infrastructure for Keystone.
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
4) To be able to administer the Keystone identity service, specify the SERVICE_TOKEN and SERVICE_ENDPOINT environment variables. Save the value of the generated SERVICE_TOKEN to a file for later use.
# export SERVICE_TOKEN=$(openssl rand -hex 10)
# export SERVICE_ENDPOINT=http://<IP or server hostname>:35357/v2.0
# echo $SERVICE_TOKEN > /root/ks_admin_token
# cat /root/ks_admin_token
5) The generated SERVICE_TOKEN must correspond to the admin_token setting in the /etc/keystone/keystone.conf file.
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
6) Start the openstack-keystone service and make sure it is persistent.
# service openstack-keystone start
# chkconfig openstack-keystone on
7) To verify success, check that the keystone-all process is running.
# ps -ef | grep keystone-all
# grep ERROR /var/log/keystone/keystone.log
8) Add Keystone as an endpoint in the registry of endpoints in Keystone, which is required for the Horizon web dashboard.
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
# keystone endpoint-create --service-id <enter the service id here> --publicurl 'http://<IP or server hostname>:5000/v2.0' --adminurl 'http://<IP or server hostname>:35357/v2.0' --internalurl 'http://<IP or server hostname>:5000/v2.0'
Check the output carefully for mistakes. If needed, delete the endpoint (keystone endpoint-delete ID), then recreate it.
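The quickest way to confirm that the service and endpoint were registered correctly is to read them back while the SERVICE_TOKEN and SERVICE_ENDPOINT variables are still exported. A minimal check:

# keystone service-list     <- the identity service should be listed with type "identity"
# keystone endpoint-list    <- the public URL should point at port 5000 and the admin URL at port 35357
A typo in any of the URLs is easiest to fix by deleting the endpoint and creating it again, as noted above.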
Managing users with the keystone command

The keystone command can be used to create, delete and modify users. Before starting to use the command, it is important to source our environment variables to have administrative permissions.
# source ~/keystonerc_admin
Adding a new user:
# keystone user-create --name USERNAME --pass PASSWORD
To list existing users:
# keystone user-list
To delete a user:
# keystone user-delete USERID

Managing tenants with the keystone command

To create a tenant:
# keystone tenant-create --name TENANTNAME
To list tenants:
# keystone tenant-list
To delete a tenant:
# keystone tenant-delete TENANTID
Roles In Keystone

By default there are two standard roles defined in Keystone:
– admin: a role with administrative privileges
– member: a role of a project member

Even though the definitions for the roles are present, they still have to be added manually to the Keystone catalog if Keystone is manually deployed. For example, to add a Member role to the Keystone catalog, use:
# keystone role-create --name Member

Associate a user from a specific tenant with a role

Of course, we also have to be able to add one or more roles to a user. To accomplish this, it is necessary to have the USERID, the TENANTID, and the ROLEID we want to attach the user to, then connect them with:
# keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID

Creating the keystone admin user

1) # keystone user-create --name admin --pass <your passphrase>
2) # keystone role-create --name admin
3) # keystone tenant-create --name admin
4) # keystone user-role-add --user admin --role admin --tenant admin
5) Create the credentials file:
# cat >> /root/keystonerc_admin << EOF
> export OS_USERNAME=admin
> export OS_TENANT_NAME=admin
> export OS_PASSWORD=<your passphrase>
> export OS_AUTH_URL=http://<server hostname or IP>:35357/v2.0/
> export PS1='[\u@\h \W(keystone_admin)]\$ '
> EOF
6) To test:
# unset SERVICE_TOKEN
# unset SERVICE_ENDPOINT
# source /root/keystonerc_admin
(keystone_admin)# keystone user-list
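The same commands can be combined to create an unprivileged account for day-to-day use. The tenant and user names below ("demo") are arbitrary examples, and the Member role is the one created earlier in this section:

# source /root/keystonerc_admin
# keystone tenant-create --name demo
# keystone user-create --name demo --pass <demo password>
# keystone user-role-add --user demo --role Member --tenant demo
A matching /root/keystonerc_demo file can then be written with OS_USERNAME=demo, OS_TENANT_NAME=demo, OS_PASSWORD set accordingly, and OS_AUTH_URL pointing at port 5000 (the public endpoint) rather than the admin port 35357, so the account never needs access to the admin API.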
SWIFT

What is the Swift object storage service?

The object storage service provides object storage in virtual containers, which allows users to store and retrieve files. The service's distributed architecture supports horizontal scaling; redundancy and failure-proofing are provided through software-based data replication. Because it supports asynchronous eventual-consistency replication, it is well suited to multiple data center deployments.

Object storage uses the following concepts:
• Storage Replicas: used to maintain the state of objects in the case of an outage. A minimum of three replicas is recommended.
• Storage Zones: used to host replicas. Zones ensure that each replica of a given object can be stored separately. A zone might represent an individual disk drive or array, a server, all the servers in a rack, or even an entire data center.
• Storage Regions: essentially a group of zones sharing a location. Regions can be, for example, groups of servers or server farms, usually located in the same geographical area. Regions have a separate API endpoint per object storage service installation, which allows for a discrete separation of services.

Architecture of the object storage service

The object storage service is a modular service with the following components:
• openstack-swift-proxy: The proxy service uses the object ring to decide where to direct newly uploaded objects. It updates the relevant container database to reflect the presence of a new object. If a newly uploaded object goes to a new container, the proxy service also updates the relevant account database to reflect the new container. The proxy service also directs get requests to one of the nodes where a replica of the requested object is stored, either randomly or based on response time from the node. It exposes the public API, and is responsible for handling requests and routing them accordingly. Objects are streamed through the proxy server to the user (not spooled). Objects can also be served out via HTTP.
• openstack-swift-object: The object service is responsible for storing data objects in partitions on disk devices. Each partition is a directory, and each object is held in a subdirectory of its partition directory. An MD5 hash of the path to the object is used to identify the object itself. The service stores, retrieves, and deletes objects.
• openstack-swift-container: The container service maintains databases of objects in containers. There is one database file for each container, and they are replicated across the cluster. Containers are defined when objects are put in them.
Containers make finding objects faster by limiting object listings to specific container namespaces. The container service is responsible for listings of containers using the account database.
• openstack-swift-account: The account service maintains databases of all of the containers accessible by any given account. There is one database file for each account, and they are replicated across the cluster. Any account has access to a particular group of containers. An account maps to a tenant in the identity service. The account service handles listings of objects (what objects are in a specific container) using the container database.

All of the services can be installed on each node or, alternatively, on dedicated machines. In addition, the following components are in place for proper operation:
• Ring Files: contain details of all the storage devices, and are used to deduce where a particular piece of data is stored (they map the names of stored entities to their physical location). One file is created for each of the object, account and container services.
• Object Storage: with either the ext4 (recommended) or XFS file system. The mount point is expected to be /srv/node.
• Housekeeping processes: for example, replication and auditors.

Installing the Swift Object Storage Service

We are going to prepare the Keystone identity service to be used with the Swift object storage service.
1) Install the necessary components for the Swift object storage service:
# yum install -y openstack-swift-proxy openstack-swift-object openstack-swift-container openstack-swift-account memcached
2) Make sure that the Keystone environment variables with the authentication information are loaded.
# source /root/keystonerc_admin
3) Create a Swift user with a password of your choice.
# keystone user-create --name swift --pass <password>
4) Make sure the admin role exists before proceeding.
# keystone role-list | grep admin
If there is no admin role, create one.
# keystone role-create --name admin
5) Make sure the services tenant exists before proceeding.
# keystone tenant-list | grep services
If there is no services tenant, create one.
# keystone tenant-create --name services
6) Add the Swift user to the services tenant with the admin role.
# keystone user-role-add --role admin --tenant services --user swift
7) Check if the object store service already exists in Keystone.
# keystone service-list
If it does not exist, create it.
# keystone service-create --name swift --type object-store --description "Swift Storage Service"
8) Create the endpoint for the Swift object storage service.
# keystone endpoint-create --service-id <xxxxxxxxxxx> --publicurl "http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s" --adminurl "http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s" --internalurl "http://<hostname or IP>:8080/v1/AUTH_%(tenant_id)s"

Deploying a Swift storage node

The object storage service stores objects on the file system, usually on a number of connected physical storage devices. All of the devices which will be used for object storage must be formatted with either ext4 or XFS, and mounted under the /srv/node/ directory. Any dedicated storage node needs to have the following packages installed:
• openstack-swift-object
• openstack-swift-container
• openstack-swift-account
Deploying a Swift Storage Node:
1) Create a single partition on the new disk presented to the server, as we usually do in Linux.
# fdisk /dev/sdb
2) Create an ext4 filesystem on /dev/sdb1.
# mkfs.ext4 /dev/sdb1
3) Create a mount point and mount the device persistently in the appropriate zone.
# mkdir -p /srv/node/z1d1
# cp /etc/fstab /etc/fstab.bak
# echo "/dev/sdb1 /srv/node/z1d1 ext4 acl,user_xattr 0 0" >> /etc/fstab
4) Mount the new Swift storage:
# mount -a
5) Change the ownership of the contents of /srv/node to swift:swift.
# chown -R swift:swift /srv/node/
6) Restore the SELinux context of /srv.
# restorecon -R /srv
7) Make backups of the files that will be changed:
# cp /etc/swift/swift.conf /etc/swift/swift.conf.orig
# cp /etc/swift/account-server.conf /etc/swift/account-server.conf.orig
# cp /etc/swift/container-server.conf /etc/swift/container-server.conf.orig
# cp /etc/swift/object-server.conf /etc/swift/object-server.conf.orig
8) Use the openstack-config command to add a hash prefix and suffix to /etc/swift/swift.conf.
# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix $(openssl rand -hex 10)
# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10)
9) The account, container and object Swift services need to bind to the same IP used for mapping the rings later on. Localhost only works for a single storage node configuration.
# openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip <your IP>
# openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip <your IP>
# openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip <your IP>
10) Start up the services now:
# service openstack-swift-account start
# service openstack-swift-container start
# service openstack-swift-object start
11) Tail /var/log/messages and check that everything looks fine.
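If no spare physical disk is available in a training environment, a loopback file can stand in for /dev/sdb. This is only a sketch for lab use (never production), and the file path, size and loop device are arbitrary examples:

# truncate -s 10G /var/tmp/swift-disk.img
# losetup /dev/loop0 /var/tmp/swift-disk.img      <- associate the file with a free loop device
# mkfs.ext4 /dev/loop0
# mkdir -p /srv/node/z1d1
# mount -o acl,user_xattr /dev/loop0 /srv/node/z1d1
# chown -R swift:swift /srv/node/
The rest of the procedure (hash prefix, bind_ip, starting the services) is identical to the steps above; just remember that a loopback device will not survive a reboot unless it is recreated, for example from rc.local.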
Configuring Swift Object Storage service rings

Rings determine where data is stored in a cluster of nodes. Ring files are generated using the swift-ring-builder tool. Three ring files are required, one each for the:
• object service
• container service
• account service

Each storage device in a cluster is divided into partitions, with a recommended minimum of 100 partitions per device. Each partition is physically a directory on a disk. A configurable number of bits from the MD5 hash of the file system path to the partition directory, known as the partition power, is used as a partition index for the device. The partition count of a cluster with 1000 devices, each with 100 partitions, is 100,000. The partition count is used to calculate the partition power, where 2 to the partition power is the partition count. When the partition power is a fraction, it is rounded up. If the partition count is 100,000, the partition power is 17 (16.61 rounded up). Expressed mathematically: 2 ^ (partition power) = partition count.

Ring files are generated using three parameters:
• Partition power: the value is calculated as shown previously and rounded up after calculation.
• Replica count: the number of times the data gets replicated in the cluster.
• min_part_hours: the minimum number of hours before a partition can be moved. It ensures availability by not moving more than one copy of a given data item within the min_part_hours time period.

A fourth parameter, zone, is used when adding devices to rings. Zones are a flexible abstraction, where each zone should be separated from other zones. You can use a zone to represent sites, cabinets, nodes, or even devices.

Configuring Swift Object Storage Service rings:
1) Source the keystonerc_admin file first.
# source /root/keystonerc_admin
2) Use the swift-ring-builder command to build one ring for each service.
# swift-ring-builder /etc/swift/account.builder create 12 2 1
# swift-ring-builder /etc/swift/container.builder create 12 2 1
# swift-ring-builder /etc/swift/object.builder create 12 2 1
In the above, 12 is the partition power, 2 is the replica count and 1 is min_part_hours.
3) Add the devices to the account ring.
# for i in 1 2; do
> swift-ring-builder /etc/swift/account.builder add z${i}-<your server IP>:6002/z${i}d1 100
> done
Above, 100 is the weight assigned to the device, which determines how many partitions it receives relative to other devices.
4) Add the devices to the container ring.
# for i in 1 2; do
> swift-ring-builder /etc/swift/container.builder add z${i}-<your server IP>:6001/z${i}d1 100
> done
5) Add the devices to the object ring.
# for i in 1 2; do
> swift-ring-builder /etc/swift/object.builder add z${i}-<your server IP>:6000/z${i}d1 100
> done
6) After successfully adding the devices, rebalance the rings.
# swift-ring-builder /etc/swift/account.builder rebalance
# swift-ring-builder /etc/swift/container.builder rebalance
# swift-ring-builder /etc/swift/object.builder rebalance
7) Verify that the ring files have been successfully created.
# ls /etc/swift/*.gz
8) Make sure the /etc/swift directory is owned by root:swift.
# chown -R root:swift /etc/swift
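To make the partition power choice concrete, here is a small worked example rather than the fixed value 12 used above. Assume a modest cluster of 4 devices with the recommended 100 partitions per device: the target partition count is 4 x 100 = 400, log2(400) is about 8.64, and rounding up gives a partition power of 9 (2^9 = 512 partitions). The calculation can be scripted, assuming a python interpreter is available on the node:

# DEVICES=4
# python -c "import math; print(int(math.ceil(math.log($DEVICES * 100, 2))))"
9
Choosing a value a little larger than strictly necessary leaves head-room for adding devices later, which is why even a small training cluster commonly uses a partition power such as 12.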
Deploying the Swift Object Storage Proxy service

The object storage proxy service determines the node to which gets and puts are directed. While it can be installed alongside the account, container, and object services, it will usually end up on a separate system in production deployments.
1) Make a backup of the original configuration file:
# cp /etc/swift/proxy-server.conf /etc/swift/proxy-server.conf.orig
2) Update the configuration file for the Swift proxy server with the correct authentication details for the appropriate Keystone user.
# openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name services
# openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host <your server IP>
# openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user swift
# openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password <your password>
3) Enable the memcached and openstack-swift-proxy services permanently.
# service memcached start
# service openstack-swift-proxy start
# chkconfig memcached on
# chkconfig openstack-swift-proxy on
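With the proxy running, the swift command line client (from the python-swiftclient package) gives a quick end-to-end test of the account, container and object services. This is a minimal sketch; the container name "c1" and the test file are arbitrary examples:

# source /root/keystonerc_admin
# swift stat                            <- shows the account headers; confirms authentication and the proxy work
# echo "hello swift" > /tmp/test.txt
# swift upload c1 /tmp/test.txt         <- creates container c1 and stores the object
# swift list c1                         <- the uploaded object should be listed
# swift download c1 tmp/test.txt -o -   <- stream the object back to stdout
Note that swift stores the object under the path it was uploaded with (minus the leading slash), which is why the download refers to tmp/test.txt.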
GLANCE

The Glance project provides a service where users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata definitions. Glance image services include discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple filesystems to object storage systems like the OpenStack Swift project.

Glance, as with all OpenStack projects, is written with the following design guidelines in mind:
• Component based architecture: quickly add new behaviors
• Highly available: scale to very serious workloads
• Fault tolerant: isolated processes avoid cascading failures
• Recoverable: failures should be easy to diagnose, debug, and rectify
• Open standards: be a reference implementation for a community-driven API

Basic Architecture

OpenStack Glance has a client-server architecture and provides a REST API through which requests to the server are performed. Internal server operations are managed by a Glance Domain Controller divided into layers, where each layer implements its own task. All file operations are performed using the glance_store library, which is responsible for interaction with external storage back ends or the local filesystem, and provides a uniform interface to access them. Glance uses an SQL-based central database (Glance DB) that is shared with all the components in the system.

Fig 6.1: OpenStack Glance Architecture
The Glance architecture consists of several components:
• A client: any application that uses the Glance server.
• REST API: exposes Glance functionality via REST.
• Database Abstraction Layer (DAL): an application programming interface which unifies the communication between Glance and databases.
• Glance Domain Controller: middleware that implements the main Glance functionality: authorization, notifications, policies, database connections.
• Glance Store: organizes interactions between Glance and various data stores.
• Registry Layer: an optional layer organizing secure communication between the domain and the DAL by using a separate service.

Deploying the Glance Image Service

The Glance image service requires Keystone to be in place for identity management and authorization. It uses a MySQL database to store the metadata information for the images. Glance supports a variety of disk formats, such as:
• raw
• vhd
• vmdk
• vdi
• iso
• qcow2
• aki, ari and ami
And a variety of container formats:
• bare
• ovf
• aki, ari and ami

Installation of the Glance Service Manually

# yum install -y openstack-glance
# cp /usr/share/glance/glance-registry-dist.conf /etc/glance/glance-registry.conf
Initialize the database:
# openstack-db --init --service glance --password <your_password>
Once the above is done, you need to update the Glance configuration to make use of Keystone as the identity service:
# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password <your password>
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password <your password>

Once the above changes have been made, start and enable the services:
# service openstack-glance-registry start
# chkconfig openstack-glance-registry on
# service openstack-glance-api start
# chkconfig openstack-glance-api on

Now, finally, we will add the service to the Keystone catalog:
# source /root/keystonerc_admin
# keystone service-create --name glance --type image --description "Glance Image Service"
And, once the service is created, we need to create the endpoint:
# keystone endpoint-create --service-id <service id generated from the above command output> --publicurl http://<hostname or IP of your server>:9292 --adminurl http://<hostname or IP of your server>:9292 --internalurl http://<hostname or IP of your server>:9292
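A quick way to confirm that the new Glance endpoint works end to end is to register a small test image. The CirrOS image is commonly used for this; the exact download URL and version below are only an example and may need adjusting:

# source /root/keystonerc_admin
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
# glance image-create --name cirros-test --disk-format qcow2 --container-format bare --is-public True < cirros-0.3.4-x86_64-disk.img
# glance image-list          <- the new image should show up with status "active"
If the upload hangs or errors out, check /var/log/glance/api.log and /var/log/glance/registry.log for authentication failures against Keystone.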
If you now want to add another node to provide redundant service availability for Glance, this is even simpler. You just have to install the openstack-glance packages shown earlier and copy the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files from the previous node to the new node, and after that start and enable the services. Once the services have started, either create new endpoints for the new Glance server or place a load balancer in front of the two Glance servers to balance the load. The load balancer can be either hardware (such as F5) or software (such as HAProxy). If you are using a load balancer, use a single set of endpoints for the Glance service using the front-end IP address of the load balancer.

Some basic command line operations using the Glance service:

1) Sourcing the admin file:
[root@control-node ~]# source /root/keystonerc_admin

2) How to list available images:
[root@control-node ~(keystone_admin)]# glance image-list
+--------------------------------------+----------+-------------+------------------+------------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size       | Status |
+--------------------------------------+----------+-------------+------------------+------------+--------+
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | hvm      | raw         | bare             | 6442450944 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | system   | raw         | bare             | 3145728000 | active |
| 3e5176e4-da36-4122-9220-5a45116b9559 | test_iso | iso         | bare             | 254947328  | active |
+--------------------------------------+----------+-------------+------------------+------------+--------+

3) How to upload an image:
[root@control-node ~(keystone_admin)]# glance image-create --name system --disk-format=raw --container-format=bare < /mnt/system.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 87fdf8731d7af378f688c1fb93709bb6     |
| container_format | bare                                 |
| created_at       | 2014-10-01T21:43:34                  |
| deleted          | False                                |
| disk_format      | raw                                  |
| id               | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | system                               |
| owner            | ffdefd08b3c842a28967f646036253f8     |
| protected        | False                                |
| size             | 3145728000                           |
| status           | active                               |
| updated_at       | 2014-10-01T21:44:58                  |
+------------------+--------------------------------------+

4) How to see details about a specific image:
[root@control-node ~(keystone_admin)]# glance image-show system
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 87fdf8731d7af378f688c1fb93709bb6     |
| container_format | bare                                 |
| created_at       | 2014-10-01T21:43:34                  |
| deleted          | False                                |
| disk_format      | raw                                  |
| id               | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | system                               |
| owner            | ffdefd08b3c842a28967f646036253f8     |
| protected        | False                                |
| size             | 3145728000                           |
| status           | active                               |
| updated_at       | 2014-10-01T21:44:58                  |
+------------------+--------------------------------------+

5) To view more options or actions associated with the glance command, check the help, which lists the available options:
# glance --help
6) You can also get into the MySQL database and verify things there. Below are some useful tips.

Connect to MySQL (the root password is available from your packstack answer file):
[root@control-node ~(keystone_admin)]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9211
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>    <<< you are now connected to the database

To list databases:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| heat               |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| test               |
+--------------------+
9 rows in set (0.00 sec)
To list database-specific users and hosts:
mysql> SELECT User,host,password FROM mysql.user;
+----------------+-----------+-------------------------------------------+
| User           | host      | password                                  |
+----------------+-----------+-------------------------------------------+
| root           | localhost | *2DB9A8025FC3AB6656D9FA0C0516A35D0DDCAB50 |
| cinder         | %         | *C40F824DE4E1AC3850660914EB44B35F2C69B819 |
| keystone_admin | 127.0.0.1 | *8D80B63D60A8F84BAA940BFF836DFBA382895F90 |
| nova           | %         | *E01B1E5009C2ACD39DB27AF619313E824C451134 |
| glance         | %         | *4510B28F08B897398FB03B9AC43027D0815E86FB |
| nova           | 127.0.0.1 | *E01B1E5009C2ACD39DB27AF619313E824C451134 |
| neutron        | %         | *0B4051CB7295D26D48A9D20EE1664F0660F0C3E7 |
| neutron        | 127.0.0.1 | *0B4051CB7295D26D48A9D20EE1664F0660F0C3E7 |
| cinder         | 127.0.0.1 | *C40F824DE4E1AC3850660914EB44B35F2C69B819 |
| keystone_admin | %         | *8D80B63D60A8F84BAA940BFF836DFBA382895F90 |
| glance         | 127.0.0.1 | *4510B28F08B897398FB03B9AC43027D0815E86FB |
| heat           | %         | *69355EB9C45A11DA21E6F6A52E7E4E105714D062 |
| heat           | localhost | *69355EB9C45A11DA21E6F6A52E7E4E105714D062 |
+----------------+-----------+-------------------------------------------+
13 rows in set (0.00 sec)

To use a specific database, here the Glance database:
mysql> use glance;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

To view the tables in the Glance database:
mysql> show tables;
+------------------+
| Tables_in_glance |
+------------------+
| image_locations  |
| image_members    |
| image_properties |
| image_tags       |
| images           |
| migrate_version  |
| task_info        |
| tasks            |
+------------------+
8 rows in set (0.00 sec)

To view the id and status of images:
mysql> select id,status from images;
+--------------------------------------+--------+
| id                                   | status |
+--------------------------------------+--------+
| 3e5176e4-da36-4122-9220-5a45116b9559 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | active |
| acac429b-f353-4ef0-a7f0-2d7d74c81530 | killed |
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | active |
| e82b82b8-6905-4561-b030-893edc79128b | killed |
+--------------------------------------+--------+
5 rows in set (0.01 sec)

To exit:
mysql> quit
CINDER

Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but it has been an independent project since the Folsom release. Block Storage allows devices to be exposed and connected to compute instances for expanded storage, better performance and integration with enterprise storage platforms. OpenStack provides persistent block-level storage devices for use with OpenStack compute instances, and these can be exposed to applications as well. The Block Storage system manages the creation, attaching and detaching of block devices to servers.

Cinder Components:

Block storage functionality is provided in OpenStack by three separate services, collectively referred to as the Block Storage service or Cinder. The three services are:
• openstack-cinder-api: The API service provides an HTTP endpoint for block storage requests. When an incoming request is received, the API verifies that identity requirements are met and translates the request into a message denoting the required block storage actions. The message is then sent to the message broker for processing by the other block storage services.
• openstack-cinder-scheduler: The scheduler service reads requests from the message queue and determines on which block storage host the request must be performed. The scheduler then communicates with the volume service on the selected host to process the request.
• openstack-cinder-volume: The volume service manages the interaction with the block storage devices. As requests come in from the scheduler, the volume service creates, modifies, and removes volumes as required.

Fig 7.1: Cinder Components
Fig 7.2: Cinder Flow

Installing the Cinder Service:
1) Install the required package:
# yum install -y openstack-cinder
2) Copy the /usr/share/cinder/cinder-dist.conf file to /etc/cinder/cinder.conf to set some default values:
# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
3) To be able to authenticate with administrative privileges, source the keystonerc_admin file:
# source /root/keystonerc_admin
4) Initialize the database for use with Cinder with a password of your choice. For a production deployment, be sure to pick a strong password.
# openstack-db --init --service cinder --password <your_password> --rootpw <your_password>
5) Create a cinder user, then link the cinder user and the admin role within the services tenant.
# keystone user-create --name cinder --pass <your_password>
# keystone user-role-add --user cinder --role admin --tenant services
• 41. 6) Add the service to the keystone catalog.
#keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage Service"
7) Create the endpoint for the service.
#keystone endpoint-create --service-id <service id generated in step 6 above> --publicurl 'http://<IP or Hostname>:8776/v1/%(tenant_id)s' --adminurl 'http://<IP or Hostname>:8776/v1/%(tenant_id)s' --internalurl 'http://<IP or Hostname>:8776/v1/%(tenant_id)s'
8) Update the cinder configuration to use keystone as the identity service and Qpid as the message broker.
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
#openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password <your_password>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_username qpidauth
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_password <your_password>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_protocol ssl
#openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_port 5671
9) Start and enable the services. Check for any errors.
#service openstack-cinder-scheduler start
#service openstack-cinder-api start
#service openstack-cinder-volume start
#tail /var/log/cinder/*
#chkconfig openstack-cinder-scheduler on
#chkconfig openstack-cinder-api on
#chkconfig openstack-cinder-volume on
10) Edit the /etc/tgt/targets.conf file to add the line 'include /etc/cinder/volumes/*' so that the iSCSI target daemon picks up Cinder volumes.
#echo 'include /etc/cinder/volumes/*' >> /etc/tgt/targets.conf
11) Start and enable the tgtd service.
#service tgtd start
#tail /var/log/messages
#chkconfig tgtd on
12) Check the status of all the OpenStack services:
• 42. #openstack-status
13) Creating a cinder volume:
#cinder create --display-name vol1 2
14) To list volumes:
#cinder list
15) To delete a volume:
#cinder delete <volume_name>
Configuring External Storage
We can configure a vendor-specific driver and make use of external storage. OpenStack offers a wide range of flexibility in choosing the external storage back end; a few of the supported options are:
• NetApp
• EMC
• IBM
• ZFS, etc.
We can choose any of these external storage systems and configure it for use with Cinder, but to do this we require a vendor-specific driver (provided by the storage vendor). Here I will show you how we can configure ZFSSA storage:
1. The first step is to enable REST access on the ZFS SA; otherwise Cinder will not be able to communicate with the appliance. From the web-based management interface of the ZFS SA, go to "Configuration->Services"; at the end of the page there is a 'REST' button, enable it. Now, to verify:
OracleZFS:configuration services rest> show
Properties:
<status> = online
For advanced users, additional features in the appliance could be configured, but for the purpose of this demonstration this is sufficient.
2. At the ZFS SA, create a pool - go to Configuration -> Storage and add a pool as shown below; the pool is named "default":
• 43. OracleZFS:configuration storage> show
Properties:
pool = Default
status = online
errors = 0
profile = mirror
log_profile = -
cache_profile = -
scrub = none requested
3. The final step at the ZFSSA is to download and run the workflow file "cinder.akwf" from "https://java.net/projects/solaris-userland/sources/gate/show/components/openstack/cinder/files/zfssa" on your Oracle ZFS Storage Appliance. The workflow:
a) Creates the user if the user does not exist
b) Sets role authorizations for performing Cinder driver operations
c) Enables the RESTful service if currently disabled
The workflow can be run from the Command Line Interface (CLI) or from the Browser User Interface (BUI) of the appliance.
* From the CLI:
zfssa:maintenance workflows> download
zfssa:maintenance workflows download (uncommitted)> show
Properties:
url = (unset)
user = (unset)
password = (unset)
zfssa:maintenance workflows download (uncommitted)> set url="url to the cinder.akwf file"
url = "url to the cinder.akwf file"
zfssa:maintenance workflows download (uncommitted)> commit
Transferred 2.64K of 2.64K (100%) ... done
zfssa:maintenance workflows> ls
Properties:
showhidden = false
• 44. Workflows:
WORKFLOW NAME OWNER SETID ORIGIN VERSION
workflow-000 Clear locks root false Oracle Corporation 1.0.0
workflow-001 Configuration for OpenStack Cinder Driver root false Oracle Corporation 1.0.0
zfssa:maintenance workflows> select workflow-001
zfssa:maintenance workflow-001 execute (uncommitted)> set name=openstack
name = openstack
zfssa:maintenance workflow-001 execute (uncommitted)> set password=devstack
password = ********
zfssa:maintenance workflow-001 execute (uncommitted)> commit
User openstack created.
For information on downloading and running the workflow over the BUI, refer to: https://openstack.java.net/ZFSSACinderDriver.README
Configuring Cinder:
1. Copy the driver files from the URL below to the directory /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa:
"https://java.net/projects/solaris-userland/sources/gate/show/components/openstack/cinder/files/zfssa"
[root@control-node drivers]# mkdir /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa
Copy the files to this directory:
[root@control-node ~]# cd /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa
[root@control-node zfssa]# ls
__init__.py restclient.py zfssaiscsi.py zfssarest.py
2. Configure the cinder plugin by editing /etc/cinder/cinder.conf or by using the "openstack-config" command (which is section-aware and thus recommended):
Note: Please take a backup of /etc/cinder/cinder.conf before editing this file.
#openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_host <ZFS HOST IP>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_auth_user openstack
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_auth_password <PASSWORD>
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_pool <default>
• 45. #openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_target_portal <HOST IP>:3260
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_project test
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_initiator_group default
#openstack-config --set /etc/cinder/cinder.conf DEFAULT zfssa_target_interfaces e1000g0
After executing the above commands, /etc/cinder/cinder.conf will look something like this:
--snip from /etc/cinder/cinder.conf----
# Driver to use for volume creation (string value)
#volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
zfssa_host = <HOST IP>
zfssa_auth_user = openstack
zfssa_auth_password = <Password>
zfssa_pool = default
zfssa_target_portal = <HOST IP>:3260
zfssa_project = test
zfssa_initiator_group = default
zfssa_target_interfaces = e1000g0
----end-----
3. Restart the cinder-volume service:
#service openstack-cinder-volume restart
4. Take a look at the log files /var/log/cinder/scheduler.log and /var/log/cinder/volume.log to check for errors. If errors are found in the logs, fix them before continuing.
5. Install the iscsi-initiator-utils package on the control node and the compute nodes; this is important since the storage plugin uses iSCSI commands from this package:
# yum install -y iscsi-initiator-utils
The installation and configuration are very simple: we do not need to have a "project" in the ZFSSA, but we do need to define a pool.
Creating and Using Volumes in OpenStack:
Now we are ready to create a cinder volume.
[root@control-node drivers(keystone_admin)]# cinder create 2 --display-name my-vol-1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
  • 46. | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2014-08-12T04:24:37.806752 | | display_description | None | | display_name | my-vol-1 | | encrypted | False | | id | 768fbc56-1d27-46d8-a1e0-772bf23c7797 | | metadata | {} | | size | 2 | | snapshot_id | None | | source_volid | None | | status | creating | | volume_type | None | +---------------------+--------------------------------------+ Extending the volume to 5G [root@control-node drivers(keystone_admin)]# cinder extend 768fbc56-1d27-46d8-a1e0- 772bf23c7797 5 [root@control-node drivers(keystone_admin)]# cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
• 47. +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Creating templates using Cinder Volumes
By default OpenStack supports ephemeral storage, where an image is copied into the run area during instance launch and deleted when the instance is terminated. With Cinder we can create persistent storage and launch instances from a Cinder volume. Booting from a volume has several advantages, one of the main ones being speed: no matter how large the volume is, the launch operation is immediate, since there is no copying of an image to a run area, an operation which can take a long time with ephemeral storage (depending on image size).
[root@control-node drivers(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------+-------------+------------------+------------+--------+
| ad77e426-965b-40e4-93b4-807cc7bd0f67 | hvm | raw | bare | 6442450944 | active |
| a58fea4c-e2f2-4d8e-8388-80b96569781f | system | raw | bare | 3145728000 | active |
+--------------------------------------+--------+-------------+------------------+------------+--------+
[root@control-node drivers(keystone_admin)]# cinder create --image-id a58fea4c-e2f2-4d8e-8388-80b96569781f --display-name system 5
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-29T11:05:42.824849 |
| display_description | None |
| display_name | system |
| encrypted | False |
| id | a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b |
| image_id | a58fea4c-e2f2-4d8e-8388-80b96569781f |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
• 48. | status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@control-node drivers(keystone_admin)]# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
| a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b | downloading | system | 5 | None | false | |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
After the download is complete we will see that the volume status has changed to "available" and that the bootable state is "true".
[root@control-node ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
| a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b | available | system | 5 | None | true | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
The new volume can be used to boot an instance from, or it can be used as a template. Cinder can create a volume from another volume, and the ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model where users can spawn an instance from a "template" instantly, even if the template is very large. Let's try replicating the bootable volume (with Oracle Linux 6.5 on it), creating one additional bootable volume:
[root@control-node ~(keystone_admin)]# cinder create 5 --source-volid a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b --display-name system-bootable-1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-29T11:48:30.468041 |
• 49. | display_description | None |
| display_name | system-bootable-1 |
| encrypted | False |
| id | 9e75e706-c752-44c6-b767-6e835113964a |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@control-node ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------------+
| 3f4eabe4-d457-41b3-90ef-2e530bcf5be8 | available | system-bootable-1 | 5 | None | true | |
| 484a117d-304c-4280-b127-688100fbdb98 | available | vol2 | 2 | None | false | |
| 768fbc56-1d27-46d8-a1e0-772bf23c7797 | available | my-vol-1 | 5 | None | false | |
| d934c458-202d-46c3-8c69-211fdd473686 | available | system-bootable | 5 | None | true | |
| fddd3794-e5ff-485e-a0bf-5c69218ebc66 | available | my-test-vol | 2 | None | false | |
+--------------------------------------+-----------+-------------------+------+-------------+----------+-------------+
Note that the creation of the last volume was almost immediate: there is nothing to download or copy, since the ZFSSA takes care of the volume copy for us. Now let's try to boot an instance using our bootable volume:
[root@control-node ~(keystone_admin)]# nova boot --boot-volume 9e75e706-c752-44c6-b767-6e835113964a --flavor 4659934e-7b26-4e68-a649-2ccd9d47cc9e system-instance-1 --nic net-id=87f96ebf-4847-4f72-98b5-41ddcf743486
+--------------------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
  • 50. | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | pT8WhCoEubew | | config_drive | | | created | 2014-10-29T11:56:32Z | | flavor | m2.tiny (4659934e-7b26-4e68-a649-2ccd9d47cc9e) | | hostId | | | id | fc909461-fc79-4054-a354-fe492b82db49 | | image | Attempt to boot from volume - no image supplied | | key_name | - | | metadata | {} | | name | system-instance-1 | | os-extended-volumes:volumes_attached | [{"id": "9e75e706-c752-44c6-b767-6e835113964a"}] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | ffdefd08b3c842a28967f646036253f8 | | updated | 2014-10-29T11:56:32Z | | user_id | 9df6d9a317ec4f55a839168e5568cda2 | +--------------------------------------+--------------------------------------------------+ In the above command, a custom created flavor was used, and we can get the list of all available flavors using the below command: [root@control-node ~(keystone_admin)]# nova flavor-list +--------------------------------------+-----------+-----------+------+-----------+------+-------+------------- +-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +--------------------------------------+-----------+-----------+------+-----------+------+-------+------------- +-----------+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | | 4659934e-7b26-4e68-a649-2ccd9d47cc9e | m2.tiny | 512 | 20 | 5 | 512 | 1 | 1.0 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | | 7e4202a7-0fa4-4d5c-b7e6-af29de71509d | m2.small | 1024 | 20 | 5 | 1024 | 1 | 1.0 | True | | 89db9029-8071-4b93-9305-e486dfedc40d | m3.small | 1024 | 8 | 2 | 512 | 1 | 1.0 | True |
  • 51. +--------------------------------------+-----------+-----------+------+-----------+------+-------+------------- +-----------+ And list of available network can be viewed using: [root@control-node ~(keystone_admin)]# neutron net-list +--------------------------------------+--------+-----------------------------------------------------+ | id | name | subnets | +--------------------------------------+--------+-----------------------------------------------------+ | 87f96ebf-4847-4f72-98b5-41ddcf743486 | net1 | 67ead5de-6a2d-4bba-bd5b-cd1d670f18cb 10.10.10.0/24 | | b603d4ec-da2b-4a5c-9d9b-9d36d8f3331f | net2 | d553c00d-ef4d-42ff-ab87-6d2413f462a0 20.20.20.0/24 | | d9723f00-c322-4de0-b237-40b0c1210fea | public | 2a4a9731-a40a-493a-a04c-23cc5facb98f 192.168.1.0/24 | +--------------------------------------+--------+-----------------------------------------------------+ Once the volumes are created using the OpenStack cinder plugin, the corresponding ZFS volumes can also be checked from the CLI login to the ZFSSA : OracleZFS:shares test> show Properties: aclinherit = restricted aclmode = discard atime = true checksum = fletcher4 compression = off dedup = false compressratio = 100 copies = 1 creation = Wed Oct 01 2014 17:38:58 GMT+0000 (UTC) logbias = latency mountpoint = /export quota = 0 readonly = false recordsize = 128K reservation = 0 rstchown = true secondarycache = all nbmand = false sharesmb = off sharenfs = on
  • 52. snapdir = hidden vscan = false snaplabel = sharedav = off shareftp = off sharesftp = off sharetftp = off pool = Default canonical_name = Default/local/test default_group = other default_permissions = 700 default_sparse = false default_user = nobody default_volblocksize = 8K default_volsize = 0 exported = true nodestroy = false maxblocksize = 1M space_data = 11.9G space_unused_res = 0 space_unused_res_shares = 0 space_snapshots = 0 space_available = 22.3G space_total = 11.9G origin = Shares: LUNs: NAME VOLSIZE GUID volume-768fbc56-1d27-46d8-a1e0-772bf23c7797 5G 600144F09C9F77890000542C3C940001 volume-484a117d-304c-4280-b127-688100fbdb98 2G 600144F09C9F77890000543D31330002 volume-a4b6f0ab-b897-4bd4-8ef4-330e0eb2d92b 5G 600144F09C9F778900005450CADC0004 volume-9e75e706-c752-44c6-b767-6e835113964a 5G 600144F09C9F778900005450D4E20005 Children: groups => View per-group usage and manage group quotas replication => Manage remote replication
• 53. snapshots => Manage snapshots
users => View per-user usage and manage user quotas
NFS Based Cinder Volumes
Now I will show you how we can use NFS-based cinder volumes. To use NFS, use the standard NFS driver. This driver is not tied to a specific storage vendor and can be used with any NFS storage.
1) The first step is to configure cinder to use NFS and tell it where the NFS shares are located. Take a backup of /etc/cinder/cinder.conf, then edit it. Avoid using "vi" and instead use the commands below, since they are section-aware:
#openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
#openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/shares.conf
2) Add the NFS shares information to /etc/cinder/shares.conf
#echo <NFS_Server_IP>:/<nfs_exported_shares> > /etc/cinder/shares.conf
e.g: echo 192.168.104.120:/vmlocal/nfs_share > /etc/cinder/shares.conf
3) Restart the cinder volume service.
#service openstack-cinder-volume restart
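To confirm that the NFS back end is working, create a small test volume and check that a backing file appears on the mounted share. This is only a sketch: the NFS driver mounts the shares under a directory derived from the cinder state path (commonly /var/lib/cinder/mnt, adjustable with the nfs_mount_point_base option), so adjust the paths to your environment:
#source /root/keystonerc_admin
#cinder create --display-name nfs-test-vol 1
#cinder list
#mount | grep <NFS_Server_IP>
#ls -lh /var/lib/cinder/mnt/*/volume-*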
• 54. NEUTRON
The Neutron service implements "Networking-as-a-Service" in the OpenStack project and is used to create, configure, and manage software-defined networks. Networking in OpenStack offers powerful capabilities, but at the same time it is more complicated. Neutron provides APIs to define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking, and it also provides APIs to configure and manage a variety of network services such as L3 forwarding and NAT. The Networking server uses the "neutron-server" daemon to expose the Networking APIs and enable administration of the configured Networking plug-in.
A standard architectural design includes a cloud controller host, a network gateway host, and a number of hypervisors that run the nova-compute service for hosting virtual machines. The cloud controller and network gateway can be on the same host. However, when the VMs are expected to send or receive significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
Fig. 8.1: Neutron Flow
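Once neutron-server has been installed and started (the steps follow below), a quick way to confirm that the Networking API is exposed is to look up its endpoint in the keystone catalog and query the API root, which returns the supported API versions. This is only an illustrative check:
#source /root/keystonerc_admin
#keystone catalog --service network
#curl http://<IP or hostname>:9696/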
• 55. A typical Neutron deployment uses the following networks:
• Management Network: Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
• Data Network: Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Networking plug-in that is used.
• External Network: Provides VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach IP addresses on this network.
• API Network: Exposes all OpenStack APIs to tenants. The API network might be the same as the external network, because it is possible to create an external-network subnet whose allocated IP ranges use less than the full range of IP addresses in an IP block.
Plug-in architecture:
Networking introduces support for vendor plug-ins, which offer a custom back-end implementation of the Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some Networking plug-ins might use basic Linux VLANs and iptables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. For example, plug-ins are available for ML2 (covering Open vSwitch and Linux Bridge), Cisco, VMware NSX, etc.
Networking plug-ins typically require particular software to be run on each node that handles data packets. This includes any node that runs "nova-compute" and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent or neutron-metering-agent. Depending on the configuration, Networking can also include the following agents:
• plug-in agent (neutron-*-agent): Runs on each hypervisor to perform local vSwitch configuration. The agent that runs depends on the plug-in that you use. Certain plug-ins do not require an agent.
• dhcp agent (neutron-dhcp-agent): Provides DHCP services to tenant networks. Required by certain plug-ins.
• l3 agent (neutron-l3-agent): Provides L3/NAT forwarding to give external network access to VMs on tenant networks. Required by certain plug-ins.
• metering agent (neutron-metering-agent): Provides L3 traffic metering for tenant networks.
These agents interact with the neutron-server process through RPC (for example, RabbitMQ or Qpid) or through the standard Networking API.
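Once the Networking service and its agents are installed and running (the installation steps are covered below), the registered agents and the loaded API extensions can be listed with the neutron client. This is only an illustrative check; the exact columns vary between releases:
#source /root/keystonerc_admin
#neutron agent-list
#neutron ext-list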
• 56. In addition, Networking integrates with OpenStack components in a number of ways:
• Networking relies on the Identity service (keystone) for the authentication and authorization of all API requests.
• Compute (nova) interacts with Networking through calls to its standard API. As part of creating a VM, the nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular network.
• The dashboard (horizon) integrates with the Networking API, enabling administrators and tenant users to create and manage network services through a web-based GUI.
Services related to OpenStack Neutron:
Common neutron services that run on the network node:
1) neutron-server: Provides APIs to request and configure virtual networks
2) neutron-openvswitch-agent: Supports virtual networks using Open vSwitch
3) neutron-metadata-agent: Provides metadata services to the instances
4) neutron-l3-agent: OpenStack Neutron Layer 3 agent
5) neutron-dhcp-agent: OpenStack Neutron DHCP agent
OpenVswitch
Open vSwitch is an open source, OpenFlow-capable virtual switch that is typically used with hypervisors to interconnect VMs within a host or across different hosts over the network. Its features include:
• VLAN tagging and 802.1q trunking
• Standard spanning tree protocol
• LACP
• Port mirroring (SPAN and RSPAN)
• Tunnelling (GRE, VXLAN and IPsec)
Why OpenVswitch?
When it comes to virtualization, Open vSwitch is attractive because it allows a single controller to manage your virtual network across all your servers. It also makes live migration of virtual machines easy while maintaining network state such as firewall rules, addresses and open network connections.
https://github.com/openvswitch/ovs/blob/master/WHY-OVS.md
Fig. 8.2: Open vSwitch Components
• 57. OpenVswitch Tools
ovs-vsctl:
• ovs-vsctl show
• ovs-vsctl list-br
• ovs-vsctl list-ports <bridge>
ovs-ofctl:
• ovs-ofctl show <bridge>
• ovs-ofctl dump-ports <bridge>
• ovs-ofctl dump-flows <bridge>
Network Name Space
• Namespaces enable multiple instances of a routing table to co-exist within the same Linux box.
• Network namespaces are isolated containers which can hold a network configuration and are not visible from outside the namespace.
• Network namespaces make it possible to separate network domains (network interfaces, routing tables, iptables) into completely separate and independent domains.
• Namespaces can be created and deleted with the commands below.
To create a namespace:
# ip netns add <my-ns>
To list namespaces:
# ip netns list
To delete a namespace:
# ip netns delete <my-ns>
Since a namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command, for example:
# ip netns exec my-ns ifconfig
# ip netns exec my-ns ip route
# ip netns exec my-ns ping <IP>
OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces and then we can add those interfaces to a namespace, as illustrated in the sketch below.
Advantages of Network Namespaces
Overlapping IPs: A big advantage of the namespaces implementation in neutron is that tenants can create overlapping IP addresses. Linux network namespace support is required on nodes running neutron-l3-agent or neutron-dhcp-agent if overlapping IPs are in use. Hence the hosts running these processes must support network namespaces.
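As a small illustration of how Open vSwitch and network namespaces work together, the following sketch creates an internal OVS port, moves it into a test namespace and assigns it an address. It assumes the br-int integration bridge created later in the installation steps already exists; the names test-ns and test-port and the 10.10.10.50/24 address are arbitrary examples, not part of the OpenStack setup itself:
#ovs-vsctl add-port br-int test-port -- set interface test-port type=internal
#ip netns add test-ns
#ip link set test-port netns test-ns
#ip netns exec test-ns ip addr add 10.10.10.50/24 dev test-port
#ip netns exec test-ns ip link set test-port up
#ip netns exec test-ns ip addr show
The test objects can be removed afterwards with 'ip netns delete test-ns' and 'ovs-vsctl del-port br-int test-port'.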
• 58. Fig. 8.3: A look inside Network Namespace
Installing OpenStack Networking:
1) Create the networking service in keystone.
#source /root/keystonerc_admin
#keystone service-create --name neutron --type network --description 'openstack networking service'
2) Create the endpoint in keystone using the service id from the previous output.
#keystone endpoint-create --service-id <enter service id from previous output> --publicurl http://<IP or hostname>:9696 --adminurl http://<IP or hostname>:9696 --internalurl http://<IP or hostname>:9696
#keystone catalog
3) Create an OpenStack networking service user named neutron using your own password.
#keystone user-create --name neutron --pass <your password>
4) Link the neutron user and the admin role within the services tenant.
#keystone user-role-add --user neutron --role admin --tenant services
5) Verify the user role:
#keystone --os-username neutron --os-password <your neutron password> --os-tenant-name services user-role-list
6) Install the OpenStack networking service and the Open vSwitch plugin on the server.
#yum -y install openstack-neutron openstack-neutron-openvswitch
7) The OpenStack neutron service must be connected to the AMQP broker you are using, either Qpid or RabbitMQ; ensure that one of them is running.
#service qpidd status
8) Back up the configuration and point the networking service at the message broker.
#cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
#openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend quantum.openstack.common.rpc.impl_qpid
• 59. #openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname <IP>
9) Configure OpenStack networking to use Qpid and keystone using the values configured previously.
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username qpidauth
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password <your qpid password>
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_protocol ssl
#openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5671
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
#openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password <your password>
#openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
10) Create a /root/keystonerc_neutron file for the neutron user:
export OS_USERNAME=neutron
export OS_TENANT_NAME=services
export OS_PASSWORD=<your password>
export OS_AUTH_URL=http://<IP>:35357/v2.0/
export PS1='[\u@\h \W(keystone_neutron)]\$ '
11) Now source the keystonerc_neutron file:
#source /root/keystonerc_neutron
12) The OpenStack networking setup scripts will make several changes to the /etc/nova/nova.conf file as well. However, we will not cover Nova until the next chapter, where we will focus on Nova (compute services) in more detail.
#yum install -y openstack-nova-compute
13) Run the OpenStack networking setup script using the openvswitch plug-in.
#neutron-server-setup --yes --rootpw <your password> --plugin openvswitch
Neutron plugin: openvswitch
Plugin: openvswitch => Database: ovs_neutron
Verified connectivity to MySQL.
Configuration updates complete!
#neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
14) Start and enable the neutron-server service after checking for errors.
#service neutron-server start
#egrep 'ERROR|CRITICAL' /var/log/neutron/server.log
#chkconfig neutron-server on
#openstack-status
15) Configure openvswitch and start the necessary services.
#neutron-node-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Would you like to update the nova configuration files? (y/n): y
Configuration updates complete!
#service openvswitch start
#egrep 'ERROR|CRITICAL' /var/log/openvswitch/*
#chkconfig openvswitch on
16) Create an Open vSwitch bridge named br-int. This is the integration bridge that will be used as a patch panel to assign interface(s) to an instance.
#ovs-vsctl add-br br-int
#ovs-vsctl show
• 60. 17) Configure the Open vSwitch plugin to use br-int as the integration bridge. Start and enable the neutron-openvswitch-agent service.
#cp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.backup
#openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS integration_bridge br-int
#service neutron-openvswitch-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/openvswitch-agent.log
#chkconfig neutron-openvswitch-agent on
18) Enable the neutron-ovs-cleanup service. When started at boot time, this service ensures that the OpenStack networking agents maintain full control over the creation and management of tap devices.
#chkconfig neutron-ovs-cleanup on
19) Configure and enable the OpenStack networking DHCP agent (neutron-dhcp-agent) on your server.
#neutron-dhcp-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Configuration updates complete!
#service neutron-dhcp-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/dhcp-agent.log
#chkconfig neutron-dhcp-agent on
20) Create the br-ex bridge that will be used for external network traffic in Open vSwitch.
#ovs-vsctl add-br br-ex
21) Before attaching eth0 to the br-ex bridge, configure the br-ex network device configuration file.
#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/
#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
Note: If you are using HWADDR in the config files, make sure you change it as appropriate or comment it out. Below is a sample file:
DEVICE=br-ex
IPADDR=<your IP Address>
PREFIX=24
GATEWAY=<your gateway IP>
DNS1=<your DNS server IP> <<< if you don't have a DNS server, you can comment out this line
SEARCH1=<domain name>
ONBOOT=yes
22) Once you have verified all the configuration files, you are ready to add eth0 to br-ex.
#ovs-vsctl add-port br-ex eth0 ; service network restart
#ovs-vsctl show
Note: If your network stops working, you can delete the port using the commands below and revert the changes made to the network config files to bring your network back up.
#ovs-vsctl del-port br-ex eth0
#cp /etc/sysconfig/network-scripts/ifcfg-br-ex /etc/sysconfig/network-scripts/ifcfg-eth0
#service network restart
23) Run the neutron-l3-setup script to configure the OpenStack networking L3 agent (neutron-l3-agent).
• 61. #neutron-l3-setup --plugin openvswitch --qhost <your IP>
Neutron plugin: openvswitch
Configuration updates complete!
24) Start and enable the neutron-l3-agent service.
#service neutron-l3-agent start
#egrep 'ERROR|CRITICAL' /var/log/neutron/l3-agent.log
#chkconfig neutron-l3-agent on
25) The OpenStack networking service is now running, and you can verify this with:
#openstack-status
A few helpful commands
1) How to list the existing networks:
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+----------------------------------------------------+
| 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851 20.20.20.0/24 |
| 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69 10.10.10.0/24 |
+--------------------------------------+------+----------------------------------------------------+
2) How to create a new network:
[root@localhost ~(keystone_admin)]# neutron net-create private
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 |
| name | private |
| provider:network_type | local |
| provider:physical_network | |
| provider:segmentation_id | |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | fa55991d5f4449139db2d5de410b0c81 |
+---------------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 | private | |
| 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851 20.20.20.0/24 |
  • 62. | 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69 10.10.10.0/24 | +--------------------------------------+---------+----------------------------------------------------+ 3) How to create a new subnet and assign to the new network created as above: [root@localhost ~(keystone_admin)]# neutron subnet-create --name subprivate private 172.24.1.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------+ | allocation_pools | {"start": "172.24.1.2", "end": "172.24.1.254"} | | cidr | 172.24.1.0/24 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 172.24.1.1 | | host_routes | | | id | 7ed1a971-88bb-4b1b-bd50-5c295f3510bd | | ip_version | 4 | | name | subprivate | | network_id | 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 | | tenant_id | fa55991d5f4449139db2d5de410b0c81 | +------------------+------------------------------------------------+ [root@localhost ~(keystone_admin)]# neutron net-list +--------------------------------------+---------+----------------------------------------------------+ | id | name | subnets | +--------------------------------------+---------+----------------------------------------------------+ | 1562e7f5-d0b5-4dca-a8e6-77a96b89aa47 | private | 7ed1a971-88bb-4b1b-bd50-5c295f3510bd 172.24.1.0/24 | | 85c3f30a-2491-467f-a3b0-d1e8f4b1b7ec | net2 | 58bdc8ed-970b-44ca-bb9d-f81a5bb63851 20.20.20.0/24 | | 9de3e304-5b67-4039-a53e-ded4784287cd | net1 | 4f32c145-c9b5-40a8-bdbe-432ebfb88b69 10.10.10.0/24 | +--------------------------------------+---------+----------------------------------------------------+ 4) How to create a router: [root@localhost ~(keystone_admin)]# neutron router-create router Created a new router: +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | admin_state_up | True | | external_gateway_info | | | id | c8f59b15-4ae6-4594-9eec-e85374189e05 | | name | router | | status | ACTIVE | | tenant_id | fa55991d5f4449139db2d5de410b0c81 | +-----------------------+--------------------------------------+
  • 63. 6) How to list existing routers: [root@localhost ~(keystone_admin)]# neutron router-list +--------------------------------------+--------+-----------------------+ | id | name | external_gateway_info | +--------------------------------------+--------+-----------------------+ | c8f59b15-4ae6-4594-9eec-e85374189e05 | router | null | +--------------------------------------+--------+-----------------------+ 7) How to add interface to router: [root@localhost ~(keystone_admin)]# neutron router-interface-add router subprivate Added interface 84a1fad1-0748-45f5-829f-076d45345e08 to router router. 8) How to list ports: [root@localhost ~(keystone_admin)]# neutron port-list +--------------------------------------+------+------------------- +-----------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+------------------- +-----------------------------------------------------------------------------------+ | 1281f263-e251-4714-bf8b-2dc15f26b923 | | fa:16:3e:69:e3:c7 | {"subnet_id": "4f32c145-c9b5- 40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.2"} | | 57fd9afd-fb53-4d4c-826e-fd8168e0aba8 | | fa:16:3e:97:64:ed | {"subnet_id": "4f32c145-c9b5- 40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.4"} | | 7845c4c9-f022-4027-8c02-359854198c11 | | fa:16:3e:ae:90:2d | {"subnet_id": "58bdc8ed- 970b-44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.2"} | | 7e8dd705-6ab9-4d22-b8c6-1b5b165e459b | | fa:16:3e:87:f9:94 | {"subnet_id": "58bdc8ed- 970b-44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.3"} | | 84a1fad1-0748-45f5-829f-076d45345e08 | | fa:16:3e:1b:51:c2 | {"subnet_id": "7ed1a971- 88bb-4b1b-bd50-5c295f3510bd", "ip_address": "172.24.1.1"} | +--------------------------------------+------+------------------- +-----------------------------------------------------------------------------------+ 9) How to create public network and setting it for external network: [root@localhost ~(keystone_admin)]# neutron net-create --tenant-id services public --router:external=True Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 | | name | public | | provider:network_type | local | | provider:physical_network | | | provider:segmentation_id | |
  • 64. | router:external | True | | shared | False | | status | ACTIVE | | subnets | | | tenant_id | services | +---------------------------+--------------------------------------+ 10) How to allocate the floating IP range and assigning it: [root@localhost ~(keystone_admin)]# neutron subnet-create --tenant-id services --allocation-pool start=10.176.246.1,end=10.176.246.10 --gateway 10.176.246.254 --disable-dhcp --name subpub public 10.176.246.0/24 Created a new subnet: +------------------+---------------------------------------------------+ | Field | Value | +------------------+---------------------------------------------------+ | allocation_pools | {"start": "10.176.246.1", "end": "10.176.246.10"} | | cidr | 10.176.246.0/24 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 10.176.246.254 | | host_routes | | | id | 43db4465-0da9-4f44-8942-5f5a5b4ba506 | | ip_version | 4 | | name | subpub | | network_id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 | | tenant_id | services | +------------------+---------------------------------------------------+ 11) How to set gateway to router: [root@localhost ~(keystone_admin)]# neutron router-gateway-set router public Set gateway for router router [root@localhost ~(keystone_admin)]# neutron port-list +--------------------------------------+------+------------------- +-------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+------------------- +-------------------------------------------------------------------------------------+ | 1281f263-e251-4714-bf8b-2dc15f26b923 | | fa:16:3e:69:e3:c7 | {"subnet_id": "4f32c145-c9b5- 40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.2"} | | 4ba58e4e-c90f-4025-be3b-7d0385e451de | | fa:16:3e:08:6e:36 | {"subnet_id": "43db4465- 0da9-4f44-8942-5f5a5b4ba506", "ip_address": "10.176.246.1"} | | 57fd9afd-fb53-4d4c-826e-fd8168e0aba8 | | fa:16:3e:97:64:ed | {"subnet_id": "4f32c145-c9b5- 40a8-bdbe-432ebfb88b69", "ip_address": "10.10.10.4"} | | 7845c4c9-f022-4027-8c02-359854198c11 | | fa:16:3e:ae:90:2d | {"subnet_id": "58bdc8ed-970b- 44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.2"} | | 7e8dd705-6ab9-4d22-b8c6-1b5b165e459b | | fa:16:3e:87:f9:94 | {"subnet_id": "58bdc8ed-970b- 44ca-bb9d-f81a5bb63851", "ip_address": "20.20.20.3"} |
  • 65. | 84a1fad1-0748-45f5-829f-076d45345e08 | | fa:16:3e:1b:51:c2 | {"subnet_id": "7ed1a971-88bb- 4b1b-bd50-5c295f3510bd", "ip_address": "172.24.1.1"} | +--------------------------------------+------+------------------- +-------------------------------------------------------------------------------------+ [root@localhost ~(keystone_admin)]# neutron router-list +--------------------------------------+-------- +-----------------------------------------------------------------------------+ | id | name | external_gateway_info | +--------------------------------------+-------- +-----------------------------------------------------------------------------+ | c8f59b15-4ae6-4594-9eec-e85374189e05 | router | {"network_id": "01aa3874-ea24-4ba9-b02d- e80acfab3173", "enable_snat": true} | +--------------------------------------+-------- +-----------------------------------------------------------------------------+ 12) How to create a floating IP: [root@localhost ~(keystone_admin)]# neutron floatingip-create public Created a new floatingip: +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | fixed_ip_address | | | floating_ip_address | 10.176.246.2 | | floating_network_id | 01aa3874-ea24-4ba9-b02d-e80acfab3173 | | id | 578a5bfe-1794-4519-b32e-8a85c5827a41 | | port_id | | | router_id | | | status | ACTIVE | | tenant_id | fa55991d5f4449139db2d5de410b0c81 | +---------------------+--------------------------------------+ 13) How to list floating IPs: [root@localhost ~(keystone_admin)]# neutron floatingip-list +--------------------------------------+------------------+---------------------+---------+ | id | fixed_ip_address | floating_ip_address | port_id | +--------------------------------------+------------------+---------------------+---------+ | 578a5bfe-1794-4519-b32e-8a85c5827a41 | | 10.176.246.2 | | +--------------------------------------+------------------+---------------------+---------+ NOTE: All the above operations can be easily done through GUI mode as well and that will look very easy and simple.
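The floating IP created above is not yet mapped to anything. To put it to use, associate it with the port of an instance whose subnet is attached to the router. The <instance-port-id> below is a placeholder (neutron port-list shows the available port IDs); the floating IP ID is the one created above:
[root@localhost ~(keystone_admin)]# neutron floatingip-associate 578a5bfe-1794-4519-b32e-8a85c5827a41 <instance-port-id>
[root@localhost ~(keystone_admin)]# neutron floatingip-list
[root@localhost ~(keystone_admin)]# neutron floatingip-disassociate 578a5bfe-1794-4519-b32e-8a85c5827a41
After the association, floatingip-list should show the instance's fixed IP next to 10.176.246.2.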