Octavia

Overview

Octavia provides network load balancing for OpenStack. Since the OpenStack Liberty release, it has been the reference implementation for Load Balancing as a Service (LBaaS v2).

Neutron previously provided the reference implementation for LBaaS; however, it has since been deprecated and OpenStack strongly advises using Octavia to create load balancers.

Load balancers can be created through the web UI or the OpenStack CLI.

Octavia works with the following OpenStack projects to provide its load balancing service:

  • Nova: manages the amphora lifecycle and creates compute resources.

  • Neutron: provides network connectivity.

  • Keystone: authenticates requests against the Octavia API.

  • Glance: stores the amphora VM image.

  • Oslo: handles communication between Octavia components; Octavia makes extensive use of Taskflow.

LBaaS Pools

LBaaS pools may consist of the following:

Load Balancer: Balances the traffic to server pools. In combination with the health monitor and listener, the load balancer can redistribute traffic to the remaining servers in the event of one server failing, for example.

Health Monitor: Monitors the health of the pool and defines the method for checking the health of each member of the pool.

Listener: Represents a listening endpoint for the VIP. The listener listens for client traffic directed at the load balanced pool.

Pool: A group of servers, each identified by an IP address. The size of the pool can be increased or decreased in response to the traffic to the instances detected by the load balancer.

These can be created through the web UI, command line, or as part of a Heat Stack such as a LAMP stack.

Octavia CLI

In order to use the commands provided by Octavia, make sure you have the following Python package installed:

pip install python-octaviaclient

To test whether python-octaviaclient has been installed successfully and we can access the Octavia commands, we can try to list the load balancers in our current project:

openstack loadbalancer list

This should return an empty line if there are no load balancers in your project, or a table containing details of load balancers similar to the table below:

+-----------------+----------------+------------+----------------+---------------------+----------+
| id              | name           | project_id | vip_address    | provisioning_status | provider |
+-----------------+----------------+------------+----------------+---------------------+----------+
| LOADBALANCER_ID | LOADBALANCER 1 | PROJECT_ID | 192.168.132.93 | ACTIVE              | amphora  |
| LOADBALANCER_ID | LOADBALANCER 2 | PROJECT_ID | 10.0.0.38      | ACTIVE              | amphora  |
| LOADBALANCER_ID | LOADBALANCER 3 | PROJECT_ID | 10.0.0.254     | ACTIVE              | amphora  |
+-----------------+----------------+------------+----------------+---------------------+----------+

Commands

Having installed python-octaviaclient, we are now able to run commands for working with load balanced pools.

Octavia provides several commands for working with load balancers, pools, listeners, and health monitors. Below is a list of some of the commonly used commands:
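
  • openstack loadbalancer create / list / show / set / delete

  • openstack loadbalancer listener create / list / show / set / delete

  • openstack loadbalancer pool create / list / show / set / delete

  • openstack loadbalancer member create / list / show / set / delete

  • openstack loadbalancer healthmonitor create / list / show / set / delete

  • openstack loadbalancer status show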

Further commands can be seen using openstack loadbalancer --help.

Creating LBaaS v2 Pools

LBaaS v2 pools can be created through the Web UI and CLI. Let’s look at creating a simple LBaaS v2 pool.

Load Balancer

Load balancers can be created using:
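
openstack loadbalancer create --name <name> --vip-subnet-id <subnet>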

For example, we can create a load balancer that is on a private subnet.
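
For example (assuming a subnet named private-subnet; substitute the name or ID of your own subnet):

openstack loadbalancer create --name test-lb --vip-subnet-id private-subnet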

After a few minutes the provisioning status should change from PENDING_CREATE to ACTIVE. Then, you can associate a floating IP in order to access the load balancer externally.

To create a load balancer which is to balance incoming traffic from the External network:
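
The commands below are a sketch; the load balancer, subnet, and external network names are placeholders, and the VIP port ID can be found in the output of openstack loadbalancer show:

openstack loadbalancer create --name external-lb --vip-subnet-id private-subnet

openstack floating ip create External

openstack floating ip set --port <load_balancer_vip_port_id> <floating_ip>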

This creates a load balancer and a floating IP, and associates the floating IP with the load balancer's VIP port.

Listener

Listeners are set up and attached to load balancers to ‘listen’ for incoming traffic trying to connect to a pool through the load balancer. Listeners can be created using:
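
openstack loadbalancer listener create --name <name> --protocol <protocol> --protocol-port <port> <load_balancer>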

Below is an example of creating a listener for the load balancer test-lb which uses the HTTP protocol on port 80. This means any connections to the load balancer have to be made via this port, as defined by the listener.
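
The listener name test-listener below is a placeholder:

openstack loadbalancer listener create --name test-listener --protocol HTTP --protocol-port 80 test-lb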

> Listeners can refer to several pools.

Pool and Members

A pool consists of members that serve traffic behind a load balancer; each member has a specified IP address and port on which it serves traffic.

The following load balancer algorithms are supported:

  • ROUND_ROBIN: each member of the pool is used in turn to handle incoming traffic.

  • LEAST_CONNECTIONS: a VM with the least number of connections is selected to receive the next incoming connection.

  • SOURCE_IP and SOURCE_IP_PORT: Source IP is hashed and divided by the total weight of the active VMs to determine which VM will receive the request.

    NOTE: Pools can only be associated with one listener.

    NOTE: The recommended and preferred algorithm is SOURCE_IP.

Pools are created using openstack loadbalancer pool create:
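
For example (test-pool is a placeholder name; test-listener is the listener created above):

openstack loadbalancer pool create --name test-pool --lb-algorithm SOURCE_IP --listener test-listener --protocol HTTP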

Then members can be defined for the pool using openstack loadbalancer member create:
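
For example (the subnet name and server address are placeholders for values from your own project):

openstack loadbalancer member create --subnet-id private-subnet --address <server_ip_address> --protocol-port 80 test-pool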

Example

After setting up the load balancer, listener, and pool we can test that we can access the VM in the pool via the load balancer:
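
For example, assuming the listener is on port 80 and a floating IP has been associated with the load balancer:

curl http://<load_balancer_floating_ip>/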

More load balancing examples can be found here: https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html

Load Balancer Status

We can also check the status of a load balancing pool from the command line. For example, we can check the status of a load balancer that is in front of a kubernetes service being hosted on multiple nodes.

To check the load balancer status we can use the following command:
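
openstack loadbalancer status show <load_balancer>

This returns a status tree showing the provisioning and operating status of the load balancer together with its listeners, pools, and members.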

Health Monitor

It is possible to set up a listener without a health monitor; however, OpenStack recommends always configuring production load balancers with a health monitor. Health monitors are attached to load balancer pools and monitor the health of each pool member. Should a pool member fail a health check, the health monitor temporarily removes that member from the pool. If the member begins to respond to health checks again, the health monitor returns it to the pool.

Health Monitor Options

  • delay: the number of seconds between health checks.

  • timeout: the number of seconds allowed for any given health check to complete. The timeout should always be smaller than the delay.

  • max-retries: the number of subsequent health checks a given back-end server must fail before it is considered down, or that a failed back-end server must pass before it is considered up again.

HTTP Health Monitors

By default, Octavia will probe the “/” path on the application server. However, in many applications this is not appropriate because the “/” path ends up being a cached page, or causes the server to do more work than is necessary for a basic health check.

HTTP health monitors also have the following options:

  • url_path: the path part of the URL that should be retrieved from the back-end server. By default it is “/”.

  • http_method: HTTP method that should be used to retrieve the url_path. By default this is “GET”.

  • expected_codes: list of HTTP status codes that indicate an OK health check. By default this is just “200”.


To create health monitors and attach them to load balancing pools, we can use the command:
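
For example (the values shown are placeholders; test-pool is the pool created earlier):

openstack loadbalancer healthmonitor create --delay 5 --timeout 3 --max-retries 3 --type HTTP --url-path /healthcheck test-pool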

Status Codes

The load balancer, listener, health monitor, pool and pool member have a provisioning status and an operating status.

Provisioning Status Codes

  • ACTIVE: Entity provisioned successfully.

  • DELETED: Entity successfully deleted.

  • ERROR: Provisioning failed. Please refer to the error messages.

  • PENDING_CREATE: Entity is being created.

  • PENDING_UPDATE: Entity is being updated.

  • PENDING_DELETE: Entity is being deleted.

Operating Status Codes

  • ONLINE: Entity is operating normally; all pool members are healthy.

  • DRAINING: Member is not accepting new connections.

  • OFFLINE: Entity is administratively disabled.

  • DEGRADED: One or more of the entity's components are in ERROR.

  • ERROR: Entity has failed; the member is failing its health monitoring checks, or all pool members are in ERROR. Please refer to the error messages.

  • NO_MONITOR: No health monitor is configured for this entity; its current status is unknown.


Load Balancer as a Service Quick Start

This quick start assumes you have a VM running SSH and a webserver that you want to expose, for example one configured by following Running an NGinX webserver inside a Docker container. The steps are equally applicable to other ports, services, and bespoke systems.

First we will create a load balancer and use it to provide SSH access to a VM, then add another listener and pool to point at the webserver on the VM.

Create a Loadbalancer

In the OpenStack web interface, expand Network and then click Load Balancers on the left hand side of the screen.

Click “Create Load Balancer”

On the “Load Balancer Details” screen enter a name for the load balancer and select the subnet for your project's private network. Leave the IP Address and Flavor fields blank. Then click “Next”

On the “Listener Details” screen select Protocol TCP and set port 200 (which we’ve randomly selected). To avoid conflicts it’s best to select random port numbers greater than 1024. Enter the name as ssh. The other defaults are fine. Click “Next”

On the “Pool Details” screen enter the name as SSH and set the Algorithm and Session Persistence to “Source IP”. Click “Next”

On the “Pool Members” screen add the VM you want to point at and set the port to 22. Click “Next”

On the “Monitor Details” screen enter the name ssh and set the type to “TCP”. Click “Create Load Balancer”

You will then see the load balancer that you created in the list with the Provisioning Status of “Pending Create”

After a few minutes the Provisioning Status will be updated to “Active”

Now click the arrow next to “Edit Load Balancer” and click “Associate Floating IP”

Select one of your floating IPs and then click “Associate”

You should now be able to ssh to your VM on port 200 of the IP you assigned to the load balancer
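
For example (the user name and floating IP below are placeholders):

ssh -p 200 <user>@<load_balancer_floating_ip>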

Viewing details

If you click on the Name of the load balancer you can view details of the listeners and pools

Unfortunately a number of the “Operating Status” fields always report “Offline” even when they are online

You can add another Listener to this load balancer by hitting “Create Listener” and following similar steps to the above

Try using these steps to load balance access to the nginx webserver created in the other tutorial.

Explanation of terms

Algorithms:

Least connections - sends each connection to whichever pool member has the fewest active connections

Round Robin - sends connections to each backend node in turn

Source IP - Sends connections to backend nodes based on the hash of the client’s IP

Session Persistence:

Source IP - Uses the hash of the client’s IP to maintain session persistence

HTTP Cookie - Session persistence based on http cookies

APP Cookie - Session persistence based on application cookies

Monitor Types:

HTTP - Checks for a configurable http response

HTTPS - Checks for an HTTP 200 response. Requires that the certificate is trusted by the load balancer, and we currently don't have a way to inject new CAs

PING - A simple ping test

TCP - A simple tcp handshake

TLS-HELLO - A tls handshake

UDP-CONNECT - Checks for an open udp socket.

LBaaS v2: Heat Stacks

Load balancing in a template consists of:

  • Pool: A group of servers, each identified by an IP address. The size of the pool can be increased or decreased in response to the traffic to the instances detected by the load balancer.

  • Listener: Represents a listening endpoint for the VIP. Listener resources listen for the client traffic.

  • Health Monitor: Monitors the health of the pool.

  • Load Balancer: Balances the traffic to server pools. In combination with the health monitor and listener, the load balancer can redistribute traffic to servers in the event of one server failing for example.

Octavia Resources in Heat


The table below lists the attributes for each resource:

Example

The example below shows how the Octavia resources can be defined in a Heat Template.
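
Below is a minimal sketch of such a template (the resource names, parameter names, and values are placeholders; it assumes an existing subnet and an existing back-end server):

heat_template_version: 2018-08-31

description: Minimal LBaaS v2 example using the Octavia Heat resources

parameters:
  vip_subnet:
    type: string
    description: Subnet on which the load balancer VIP is allocated
  member_address:
    type: string
    description: IP address of the back-end server

resources:
  loadbalancer:
    type: OS::Octavia::LoadBalancer
    properties:
      name: example-lb
      vip_subnet: { get_param: vip_subnet }

  listener:
    type: OS::Octavia::Listener
    properties:
      name: example-listener
      protocol: HTTP
      protocol_port: 80
      loadbalancer: { get_resource: loadbalancer }

  pool:
    type: OS::Octavia::Pool
    properties:
      name: example-pool
      lb_algorithm: SOURCE_IP
      protocol: HTTP
      listener: { get_resource: listener }

  member:
    type: OS::Octavia::PoolMember
    properties:
      pool: { get_resource: pool }
      address: { get_param: member_address }
      protocol_port: 80
      subnet: { get_param: vip_subnet }

  monitor:
    type: OS::Octavia::HealthMonitor
    properties:
      pool: { get_resource: pool }
      type: HTTP
      delay: 5
      timeout: 3
      max_retries: 3

outputs:
  vip_address:
    description: VIP address of the load balancer
    value: { get_attr: [loadbalancer, vip_address] }

The stack can then be launched with, for example, openstack stack create -t lbaas-example.yaml lbaas-example.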

References

https://docs.openstack.org/octavia/train/reference/introduction.html

https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html

https://docs.openstack.org/octavia/train/reference/glossary.html

https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation

http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-balance

https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-pool-detail,show-pool-details-detail#general-api-overview

https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

https://ibm-blue-box-help.github.io/help-documentation/heat/autoscaling-with-heat/

https://docs.openstack.org/heat/latest/template_guide/openstack.html
