Introducing OpenStack Heat
OpenStack Heat is the Orchestration service for OpenStack. It orchestrates infrastructure resources (such as VMs, floating IPs, volumes, and networks) for cloud applications, using templates in the form of YAML files which define the properties of resources and the relationships between them.
Heat templates allow relationships to be defined between different resources, allowing Heat to call different OpenStack APIs to create resources such as LBaaS pools (Octavia), servers and server groups (Nova), alarms (Aodh), volumes (Cinder), and networks and security groups (Neutron). Heat manages the lifecycle of the stack: when a stack needs to be updated, an updated template can be applied to the existing stack. These templates can also be integrated with software configuration management tools such as Ansible and Puppet.
Architecture
Heat provides an AWS CloudFormation-compatible implementation for OpenStack and integrates the other core OpenStack components into a single-file template system. This system not only allows resources from most OpenStack projects to be created in a single template, it also provides additional functionality, including auto scaling of VMs and nested stacks.
heat: a CLI that communicates with heat-api to execute AWS CloudFormation APIs. End developers can also use the Heat REST API directly.
heat-api: provides an OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC.
heat-api-cfn: provides an AWS Query API that is compatible with AWS CloudFormation, processing API requests by sending them to the heat-engine over RPC.
heat-engine: orchestrates the launching of templates and provides events back to the API consumer.
Heat Commands
To use Heat commands in the command line, we need to install the following package:
pip install python-heatclient
To test that this has installed correctly, and that we are able to access the Heat API, we can run the command:
openstack stack list

This should return an empty line if there are no stacks in the project, or a table similar to the following:

+--------------------------------------+--------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name               | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+--------------------------+-----------------+----------------------+--------------+
| a00fa2cd-3e29-489f-8d9f-f8059567d045 | software-deployment-test | CREATE_COMPLETE | 2020-12-09T08:34:15Z | None         |
| 1cb66fbd-1336-414b-b112-b0fcffeb0645 | spark-standalone-cluster | CREATE_COMPLETE | 2020-12-07T10:31:58Z | None         |
| b7263b67-65c4-4333-abd0-7033afa961b6 | spark-stack-2            | CREATE_FAILED   | 2020-12-04T11:42:25Z | None         |
| c9c10097-c275-4c44-9324-0eb2f7ad60cf | docker-script-test       | CREATE_COMPLETE | 2020-12-02T16:39:54Z | None         |
+--------------------------------------+--------------------------+-----------------+----------------------+--------------+
The following commands are provided by Heat and can be used in OpenStack:
# Commands provided by Heat are of the form: openstack stack <command> <options>
stack abandon                  # abandon a stack and output results
stack adopt                    # adopt a stack
stack cancel                   # cancel a create or update task for a stack
stack check                    # check a stack and its resources
stack create                   # create a stack
stack delete                   # delete a stack
stack environment show         # show a stack's environment
stack event list               # list stack events
stack event show               # view a stack event
stack export                   # export stack data as JSON
stack failures list            # list failed resources in a stack
stack file list                # show a stack's files map
stack hook clear               # clear resource hooks on a given stack
stack hook poll                # list resources with pending hooks for a stack
stack list                     # list stacks in the project
stack output list              # list stack outputs
stack output show              # view a stack output
stack resource list            # list stack resources
stack resource mark unhealthy  # mark one of the stack's resources as unhealthy
stack resource metadata        # view metadata for a stack's resource
stack resource show            # view details about a stack's resource
stack resource signal          # signal a resource with optional JSON data
stack resume                   # resume a stack
stack show                     # view details about a stack
stack snapshot create          # create a snapshot of the stack
stack snapshot delete          # delete a stack snapshot
stack snapshot list            # list stack snapshots
stack snapshot restore         # restore a stack snapshot
stack snapshot show            # view details of a stack snapshot
stack suspend                  # suspend a stack
stack template show            # view the stack template
stack update                   # update a stack using an updated template
Stacks
Stack: a collection of resources and their associated configuration.
Template: A YAML file defining the resources which make up the stack. In OpenStack, templates follow the Heat Orchestration Template (HOT) format.
Note: While Heat can interpret CFN (CloudFormation) templates, they are _not_ backwards compatible with Heat Orchestration Templates. It is recommended to use Heat Orchestration Templates to create stacks.
Heat Orchestration Templates
Heat Orchestration Templates are YAML files that tell Heat which resources to create and the relationships between those resources. Ansible has documentation on how to write YAML files, which can be found here: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
Heat Templates consist of seven sections:
heat_template_version: (required) Indicates which template format and features are used and supported when creating the stack.

description: (optional) Description of the stack template. It is recommended to include a description in templates to describe what users can do with the template.

parameter_groups: (optional) Defines how input parameters are grouped and the order of the parameters.

parameters: (optional) Defines input parameters. This section can be omitted if there are no input values required.

resources: (required) Defines the resources in the template. At least one resource should be defined in a HOT template.

outputs: (optional) Defines output parameters available to users once the template has been instantiated.

conditions: (optional) Includes statements which can be used to apply conditions to a resource, for example so that a resource is created only when a property is defined or when another resource has been created first.
The structure of a HOT template is given as:
heat_template_version: 2018-08-31 # template version we want to use;
                                  # here, templates for the Rocky release onwards

description: # description of the template

parameter_groups: # declares the parameter groups and their order.
                  # This section is not compulsory, but it is useful for grouping
                  # parameters together when building more complex templates.

parameters: # declares the parameters for resources

resources: # declares the template resources,
           # e.g. alarms, floating IPs, instances etc.

outputs: # declares the outputs of the stack

conditions: # declares any conditions on the stack
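For example, a minimal complete HOT template that creates a single VM could look like the following sketch (the resource, parameter, and output names here are illustrative, and the image and flavor values are supplied when the stack is created):

```yaml
heat_template_version: 2018-08-31

description: Minimal example template that creates a single VM

parameters:
  image_id:
    type: string
    description: Image to boot the instance from
  flavor_id:
    type: string
    description: Flavor for the instance

resources:
  example_VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}

outputs:
  vm_networks:
    description: The networks and addresses assigned to the VM
    value: {get_attr: [example_VM, networks]}
```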
Please see the documentation Create a Heat Stack for an introduction to creating a stack using a HOT template.
Rocky Heat Templates
Templates which use:
heat_template_version: 2018-08-31
# or
heat_template_version: rocky
indicate that the template is a HOT template with the features added and/or removed up to the Rocky release. The list of supported functions in a Rocky Heat template is:
digest               # performs digest operations on a given value
filter               # removes values from a list
get_attr             # references an attribute of a resource
get_file             # returns the content of a file into the template; use to include files containing scripts or configuration
get_param            # references an input parameter of a template
get_resource         # references another resource in the same template
list_join            # joins a list of strings with the given delimiter
make_url             # builds URLs
list_concat          # concatenates lists together
list_concat_unique   # behaves like list_concat but removes duplicate items
contains             # checks whether a specific value is in a sequence
map_merge            # merges maps together
map_replace          # performs key/value replacements on an existing mapping
repeat               # dynamically transforms lists by iterating over the contents of one or more source lists and replacing list elements in the template
resource_facade      # retrieves data in a parent provider template. A facade is a custom definition of a resource from a provider template
str_replace          # constructs strings from a template string with placeholders and a list of mappings that assign values to those placeholders at runtime
str_replace_strict   # like str_replace, but an error is raised if any params are not present in the template
str_replace_vstrict  # like str_replace_strict, but an error is also raised if any params are empty
str_split            # splits a string into a list using an arbitrary delimiter
yaql                 # evaluates a yaql expression on given data
if                   # returns the corresponding value based on the evaluation of a condition
For more details about these intrinsic functions, please see the following documentation: https://docs.openstack.org/heat/train/template_guide/hot_spec.html#get-attr
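As a small illustration of how these functions combine, str_replace and get_attr are often used together in the outputs section to build a ready-to-use command string (the resource name test_VM and the output name are illustrative):

```yaml
outputs:
  ssh_command:
    description: Command to SSH into the created VM
    value:
      str_replace:
        template: ssh ubuntu@host_ip
        params:
          host_ip: {get_attr: [test_VM, first_address]}
```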
The list of supported condition functions is:
equals     # compares whether two values are equal
get_param  # references an input parameter of a template
not        # acts as a NOT operator
and        # acts as an AND operator
or         # acts as an OR operator
yaql       # evaluates a yaql expression on given data
contains   # checks whether a specific value is in a sequence
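For example, a condition could be used to create a floating IP only when the stack is deployed as "production" (the parameter, condition, and resource names, and the network name, are illustrative):

```yaml
parameters:
  env_type:
    type: string
    default: test

conditions:
  create_floating_ip: {equals: [{get_param: env_type}, "production"]}

resources:
  floating_ip:
    type: OS::Neutron::FloatingIP
    condition: create_floating_ip
    properties:
      floating_network: public
```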
Pseudo Parameters
As well as the parameters defined by the template author, Heat creates three pseudo parameters for every stack:
OS::stack_name  # stack name
OS::stack_id    # stack identifier
OS::project_id  # project identifier
These parameters are accessible using the get_param function.
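For example, the stack's own name can be used to label the resources it creates (the resource and parameter names here are illustrative):

```yaml
resources:
  example_VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      metadata:
        created_by_stack: {get_param: "OS::stack_name"}
```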
Assign Floating IPs to VMs in a Heat Stack
Floating IPs can be assigned to VMs that are on private networks in order for them to be accessible from an external network.
This document assumes that Floating IPs have been assigned to your project. If you do not have any floating IPs in your project, please contact the cloud team at cloud-support@stfc.ac.uk
There are two similar methods for assigning a floating IP to a virtual machine in a Heat stack:

1. Use the resource OS::Neutron::FloatingIPAssociation.
2. Add the ID of the floating IP to the networks property of the OS::Nova::Server resource.

For both methods, OS::Neutron::Port is used to define a port on the VM to which the floating IP is attached.
To get the ID of the floating IPs, use the command:
openstack floating ip list
This will return a list of floating IPs containing the IP address, pool, port, and ID.
Using OS::Neutron::FloatingIPAssociation
heat_template_version: <template-version>

parameters:
  <define parameters here>

resources:
  private_network_port: # the VM port which will be used to attach the floating IP
    type: OS::Neutron::Port
    properties:
      network_id: <private-network-id> # private network ID
      security_groups: [{get_param: security_group_id}]

  test_VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks:
        - port: {get_resource: private_network_port}

  floating_ip_association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: <floating-ip-id>                # ID of the floating IP to assign to the VM
      port_id: {get_resource: private_network_port}  # port to attach the floating IP to
Using the Network Property in OS::Nova::Server
heat_template_version: <template-version>

parameters:
  <define parameters>

resources:
  private_network_port:
    type: OS::Neutron::Port
    properties:
      network_id: <private-network-id> # private network ID
      security_groups: [{get_param: security_group_id}]

  test_VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks: [{network: <private-network-id>, port: {get_resource: private_network_port}, floating_ip: <floating-ip-id>}]
AutoScaling in a Heat Stack
Sometimes we may need a stack that can respond when a group of servers is using too many or too few resources, such as memory. For example, if a group of servers exceeds a given memory usage threshold, we want that group to scale up. This documentation covers autoscaling and how it can be implemented in a Heat stack.
To create an autoscaling stack we need:
AutoScaling Group: a group of servers defined so that the number of servers in the group can be increased or decreased.
Alarms: Alarms created using OpenStack Aodh to monitor the resource usage of the VMs in the autoscaling group. For example, we could create an alarm to monitor memory usage and alarm if the autoscaling group exceeds the alarm’s threshold.
Scaling Policies: Policies which are executed when an Aodh Alarm is triggered. When an alarm is triggered, the scaling policy attached to that alarm will instruct the autoscaling group to change in size, either increasing or decreasing the number of VMs.
Heat Resources
This section will cover the resources available in Heat that are required for creating an autoscaling stack.
AutoScaling involves resources from:
Heat: For creating an autoscaling group and defining the scaling policies
Aodh: For alarm creation
Gnocchi: For metrics that are used in threshold alarms
OS::Heat::AutoScalingGroup
This is an autoscaling group which can scale resources. This group can create the desired number of similar resources and we can define the minimum and maximum count for the given resource.
the_resource:
  type: OS::Heat::AutoScalingGroup
  properties:
    # required
    max_size: Integer  # maximum number of resources in the group
    min_size: Integer  # minimum number of resources in the group
    resource: {...}    # resource definition for the resources in the group, written in HOT (Heat Orchestration Template) format
    # optional
    desired_capacity: Integer  # desired initial number of resources
    cooldown: Integer          # cooldown period in seconds
    rolling_updates: {"min_in_service": Integer, "max_batch_size": Integer, "pause_time": Number}
      # policy for rolling updates in the group,
      # defaults to: {"min_in_service": 0, "max_batch_size": 1, "pause_time": 0}
      # min_in_service: minimum number of resources in service while rolling updates are executed
      # max_batch_size: maximum number of resources to replace at once
      # pause_time: number of seconds to wait between batches of updates
For example:
autoscaling-group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 1
    max_size: 3
    resource:
      type: server.yaml # refers to a Heat template for creating a VM
      properties:
        flavor: {get_param: flavor}
        image: {get_param: image}
        key_name: {get_param: key_name}
        network: {get_param: network}
        metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
OS::Heat::ScalingPolicy
the_resource:
  type: OS::Heat::ScalingPolicy
  properties:
    # required
    adjustment_type: String       # type of adjustment. Allowed values: "change_in_capacity", "exact_capacity", "percent_change_in_capacity"
    auto_scaling_group_id: String # ID of the AutoScaling group to apply the policy to
    scaling_adjustment: Number    # size of the adjustment
    # optional
    cooldown: Number              # cooldown period, in seconds
    min_adjustment_step: Integer  # minimum number of resources added or removed when the AutoScaling group scales up or down.
                                  # Only used when adjustment_type is percent_change_in_capacity
For example:
scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: autoscaling-group}
    cooldown: 60
    scaling_adjustment: 1

scaledown_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: autoscaling-group}
    cooldown: 60
    scaling_adjustment: -1
OS::Aodh::GnocchiAggregationByResourcesAlarm
This resource creates an alarm as an aggregation of resources alarm. This alarm is a threshold alarm monitoring the aggregated metrics of the members of the autoscaling group defined above. Gnocchi provides the metrics which Aodh uses to determine whether an alarm should be triggered.
the_resource:
  type: OS::Aodh::GnocchiAggregationByResourcesAlarm
  properties:
    # required
    metric: String        # name of the metric watched by the alarm
    query: String         # query to filter the metrics
    resource_type: String # resource type
    threshold: Number     # threshold to evaluate against
    # optional
    aggregation_method: String  # the aggregation method (e.g. mean) to compare to the threshold
    alarm_actions: [Value, ...] # list of webhooks to invoke when the state transitions to alarm
    alarm_queues: [String, ...] # list of Zaqar queues to post to when the state transitions to alarm
    comparison_operator: String # operator used to compare the specified statistic with the threshold. Allowed values: "le", "ge", "eq", "lt", "gt", "ne"
    description: String         # alarm description
    enabled: Boolean            # determines whether alarm evaluation is enabled. Defaults to true
    evaluation_periods: Integer # number of periods to evaluate over
    granularity: Integer        # time range, in seconds
    insufficient_data_actions: [Value, ...] # list of webhooks to invoke when the state transitions to insufficient data
    insufficient_data_queues: [String, ...] # list of Zaqar queues to post to when the state transitions to insufficient data
    ok_actions: [Value, ...]    # list of webhooks to invoke when the state transitions to ok
    ok_queues: [String, ...]    # list of Zaqar queues to post to when the state transitions to ok
    repeat_actions: Boolean     # defaults to true. Set to false to only trigger actions when the threshold is reached AND the alarm has changed state
    severity: String            # severity of the alarm. Allowed values: "low", "moderate", "critical"
    time_constraints: [{"name": String, "start": String, "description": String, "duration": Integer, "timezone": String}, ...]
      # time constraints for the alarm, defaults to []
      # name: name of the time constraint
      # description: description of the time constraint
      # start: start time of the time constraint, as a CRON expression
      # duration: duration of the time constraint
      # timezone: timezone of the time constraint
For example, for our autoscaling stack we could define the alarms in the following way:
memory_alarm_high:
  type: OS::Aodh::GnocchiAggregationByResourcesAlarm
  properties:
    description: Scale up if memory > 1000 MB
    metric: memory.usage
    aggregation_method: mean
    granularity: 300
    evaluation_periods: 1
    threshold: 1000
    resource_type: instance
    comparison_operator: gt
    query:
      list_join:
        - ''
        - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    alarm_actions:
      - get_attr: [scaleup_policy, signal_url]

memory_alarm_low:
  type: OS::Aodh::GnocchiAggregationByResourcesAlarm
  properties:
    description: Scale down if memory < 200 MB
    metric: memory.usage
    aggregation_method: mean
    granularity: 300
    evaluation_periods: 1
    threshold: 200
    resource_type: instance
    comparison_operator: lt
    query:
      list_join:
        - ''
        - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    alarm_actions:
      - get_attr: [scaledown_policy, signal_url]
Creating a LAMP Stack
A LAMP stack consists of:
Linux OS
Apache web server
MySQL database
PHP
Templates
The following templates show how an instance can be configured using a bash script to install Apache, MySQL, and PHP. The bash script is executed during the VM's cloud-init phase.
Note: During the cloud-init phase, root executes any user-scripts.
> During the execution of the scripts, we 'hold' some of the packages to stop them being upgraded during the apt-get upgrade step, because upgrading those packages requires a user response. They can be upgraded by the user after the VM configuration is complete by running the upgrade command.
Using OS::Heat::SoftwareConfig
The heat resource OS::Heat::SoftwareConfig has been used to define the user script.
heat_template_version: 2018-03-02

parameters:
  key_name:
    type: string
    default: <key-pair-name>
    description: Key pair to use in order to SSH into the instance
  image_id:
    type: string
    default: <image-id>
    description: The image for the instance
  flavor_id:
    type: string
    default: <flavor-id>
    description: The flavor for the instance
  security_group_id:
    type: string
    default: <security-group-id>

resources:
  test_script:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: |
        #!/bin/bash -v
        apt-get update
        apt-mark hold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g
        apt-get -y upgrade
        apt-mark unhold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g
        # mysql-server is installed at this stage (can install after VM creation)
        apt-get install -y apache2 php libapache2-mod-php php-mysql php-gd mysql-server

  test_VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks:
        - network: Internal
      security_groups:
        - {get_param: security_group_id}
      user_data_format: SOFTWARE_CONFIG
      user_data: {get_resource: test_script}
Using OS::Nova::Server
The bash script can alternatively be placed in the user_data property of the OS::Nova::Server resource.
heat_template_version: <template-version>

parameters:
  <define parameters>

resources:
  VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks:
        - network: <network-name>
      security_groups:
        - {get_param: security_group_id}
      user_data: |
        #!/bin/bash -v
        apt-get update
        apt-mark hold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g
        apt-get -y upgrade
        apt-mark unhold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g
        # mysql-server is installed at this stage (can install after VM creation)
        apt-get install -y apache2 php libapache2-mod-php php-mysql php-gd mysql-server
When a bash script becomes too long or complex, the get_file function can be used to retrieve and execute it:
heat_template_version: <template-version>

parameters:
  <define parameters>

resources:
  VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks:
        - network: <network-name>
      security_groups:
        - {get_param: security_group_id}
      user_data_format: RAW
      user_data: {get_file: bash-script.sh}
Note: user_data_format is required to define how the user_data should be formatted for the server. Without this parameter, when a bash script is defined this way, cloud-init returns errors in the log and cannot run the script.
The str_replace function can be used to set variable values in the bash script based on parameters or resources in the stack.
Example from Openstack: https://docs.openstack.org/heat/rocky/template_guide/software_deployment.html
heat_template_version: <template-version>

parameters:
  <define parameters>

resources:
  VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: key_name}
      networks:
        - network: <network-name>
      security_groups:
        - {get_param: security_group_id}
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            # ...
          params:
            $FOO: {get_param: foo}
Note: If a stack update is performed and changes have been made to the user data, the server will be deleted and replaced.
> If you are using the above bash scripts to install mysql-server along with the other components of the stack, it is best to also run mysql_secure_installation. To automate the mysql_secure_installation steps, please see the page Installing and setting up MySQL database in a Stack.
Create LEMP Stack
A LEMP stack consists of:
Linux
Nginx
MySQL
PHP
Templates
Similar to installing a LAMP stack, you create a template that executes a bash script to install LEMP on a server. The bash script below can be used in conjunction with the templates in the Create a LAMP stack docs at https://stfc.atlassian.net/wiki/spaces/CLOUDKB/pages/211845365/Heat#Creating-a-LAMP-Stack.
Bash Script
A bash script can be added to a stack template so that when a virtual machine is built, it will automatically set up the VM with the components for a LEMP stack. This script also shows how to automate mysql_secure_installation as well.
#!/bin/bash -v
apt-get update

# packages for PAM should not be updated using a bash script:
# they return prompts asking whether the configuration on the VM can be changed,
# and the script becomes 'stuck'.
# To overcome this, use apt-mark hold <packages>
apt-mark hold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g
apt-get -y upgrade
# Then unhold the packages
apt-mark unhold libpam-krb5 libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g

# Install nginx
apt-get install -y nginx

# In this example, we will install and set up MySQL.
# Alternatively this step can be done after VM creation
apt-get install -y expect # to run an interactive script inside a bash shell

# Install MySQL and set up root access
apt-get install -y mysql-server
SECURE_MYSQL=$(expect -c "
set timeout 5
spawn mysql_secure_installation
expect \"Press y|Y for Yes, any other key for No:\"
send \"n\r\"
expect \"Please set the password for root here.\"
send \"temporarypw\r\"
expect \"Re-enter password:\"
send \"temporarypw\r\"
expect \"Remove anonymous users? (Press y|Y for Yes, any other key for No):\"
send \"y\r\"
expect \"Disallow root login remotely? (Press y|Y for Yes, any other key for No):\"
send \"y\r\"
expect \"Remove test database and access to it? (Press y|Y for Yes, any other key for No):\"
send \"y\r\"
expect \"Reload privilege tables now? (Press y|Y for Yes, any other key for No)\"
send \"y\r\"
expect eof
")
echo "$SECURE_MYSQL"

mysql <<EOF
SELECT user,authentication_string,plugin,host FROM mysql.user;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'secret';
FLUSH PRIVILEGES;
SELECT user,authentication_string,plugin,host FROM mysql.user;
EOF

#add-apt-repository universe # already available for Ubuntu Bionic

# Install PHP and PHP packages
apt-get install -y php-fpm php-mysql

echo "All components for LEMP stack have been installed!"
This script sets the root password to a temporary password, which must be changed once the root user has signed in to MySQL. Alternatively, the Heat resource OS::Heat::RandomString can be used in the Heat template to generate a password to use when setting up MySQL with this bash script.
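A sketch of that approach: OS::Heat::RandomString generates the password, and str_replace substitutes it into the script in place of the temporary password. The resource names, the placeholder string being replaced, and the script filename are illustrative:

```yaml
resources:
  mysql_root_password:
    type: OS::Heat::RandomString
    properties:
      length: 16

  VM:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      user_data_format: RAW
      user_data:
        str_replace:
          template: {get_file: bash-script.sh}
          params:
            temporarypw: {get_attr: [mysql_root_password, value]}

outputs:
  mysql_root_password:
    description: The generated MySQL root password
    value: {get_attr: [mysql_root_password, value]}
```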
Create A Server Group
This document is in progress
Server groups can be used to ensure that instances are either placed on the same hypervisor (affinity) or are placed on different hypervisors (anti-affinity).
There are four policies which can be applied to a server group:
affinity
soft-affinity
anti-affinity
soft-anti-affinity
Server groups can be implemented in a Heat template using the resource OS::Nova::ServerGroup.
Syntax
server_group:
  type: OS::Nova::ServerGroup
  properties:
    name: String            # optional - server group name. Any update causes replacement
    policies: [String, ...] # optional - a list of policies to apply
Example
resources:
  affinity_group:
    type: OS::Nova::ServerGroup
    properties:
      name: hosts on same compute nodes
      policies:
        - affinity

  my_instance:
    type: OS::Nova::Server
    properties:
      image: {get_param: image_id}
      flavor: {get_param: flavor_id}
      key_name: {get_param: KeyName}
      networks:
        - network: Internal # define the network to use as internal
      security_groups:
        - {get_param: security_group_id}
      user_data_format: RAW
      name: server_1 # name for the instance
      scheduler_hints:
        group: {get_resource: affinity_group}
You can list the server groups which are in your project using the command:
openstack server group list
This should return a table similar to this one:
+--------------------------------------+-----------------------------+----------+
| ID                                   | Name                        | Policies |
+--------------------------------------+-----------------------------+----------+
| 6c8030c0-1b33-4470-b26d-51b6cac17bb7 | hosts on same compute nodes | affinity |
+--------------------------------------+-----------------------------+----------+
You can also list the members of the server group using:
openstack server group show <server-group-name/id>
For example:
openstack server group show 6c8030c0-1b33-4470-b26d-51b6cac17bb7

+----------+--------------------------------------+
| Field    | Value                                |
+----------+--------------------------------------+
| id       | 6c8030c0-1b33-4470-b26d-51b6cac17bb7 |
| members  | 87663bdb-c597-4098-b09c-624ec9974572 |
| name     | hosts on same compute nodes          |
| policies | affinity                             |
+----------+--------------------------------------+
Automating Interactive Scripts
Sometimes installations require user responses to certain questions; one example is the mysql_secure_installation command. It is possible to automate these responses when you want a database such as MySQL set up and ready to use as root.
To do this, we use expect.
According to the Linux man page, expect is "programmed dialogue with interactive programs": it knows what output to expect from a program and what the correct response should be.
Typically, expect is run in a separate script with the first line of the script as:
#!/usr/local/bin/expect -<options>
However, it is possible to run an expect script within a regular bash script.
An expect script inside a normal bash script would be written as follows:
FOO=$(expect -c "
spawn <command>
expect \"<interactive prompt>\"
send \"<response>\r\"
expect eof
")
echo "$FOO"
Accessing MySQL Database from JupyterHub
This document will show how to connect to a MySQL database which is on the same server as JupyterHub.
In MySQL, a new user ‘megan’ has been created with a ‘test’ password. This user has been given access to the database ‘demodb’.
Python Packages
The following python packages are used in this notebook:

- pymysql
- sqlalchemy
- pandas and scikit-learn (used in one example)
Connecting to a MySQL database
Either pymysql or sqlalchemy can be used to connect to a database, however it is better to use sqlalchemy if you want to import a pandas dataframe as a table into your MySQL database.
# pymysql
import pymysql.cursors

# connect to the database
connection = pymysql.connect(host='localhost',
                             user='megan',
                             password='test',
                             database='demodb')
# sqlalchemy
from sqlalchemy import create_engine

# connect to the database demodb as user 'megan' with password 'test'
eng = create_engine('mysql+pymysql://megan:test@localhost/demodb')
eng.connect() # this will show an error if it fails to connect to the database
# output:
# <sqlalchemy.engine.base.Connection at 0x7f4c5c739f60>
After connecting to the database, you can check the list of tables using:
# if using sqlalchemy
eng.table_names()
# output:
# ['example', 'iris_table', 'tutorials_tbl']
# alternatively, pymysql allows you to execute SQL commands:
cursor = connection.cursor()
query = "SELECT * FROM iris_table" # example SQL query
cursor.execute(query) # executes the query; here it states there are 150 entries in the table
# output:
# 150
Import dataset into MySQL
The following example will show how to store an Iris dataset as a new table in our demo database.
# import python libraries
from sklearn import datasets
import pandas as pd

iris = datasets.load_iris() # load the Iris dataset
print(iris.keys())
# output:
# dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names', 'filename'])
# store the dataset in a dataframe
data = pd.DataFrame(iris.data, columns=iris.feature_names)
print(data) # print the dataframe
# The output would be:
#      sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
# 0                  5.1               3.5                1.4               0.2
# 1                  4.9               3.0                1.4               0.2
# 2                  4.7               3.2                1.3               0.2
# 3                  4.6               3.1                1.5               0.2
# 4                  5.0               3.6                1.4               0.2
# ..                 ...               ...                ...               ...
# 145                6.7               3.0                5.2               2.3
# 146                6.3               2.5                5.0               1.9
# 147                6.5               3.0                5.2               2.0
# 148                6.2               3.4                5.4               2.3
# 149                5.9               3.0                5.1               1.8
#
# [150 rows x 4 columns]
# create a new table in the MySQL database and import this dataset into it
# using the engine we created to connect to the database demodb in MySQL,
# import the pandas dataframe 'data' as a new table called 'iris_table_demo'
data.to_sql(con=eng, name='iris_table_demo')
The to_sql command above also has an if_exists option for the case when the table name matches a table which already exists in the database: passing if_exists='replace' overwrites the existing table, while if_exists='append' adds this dataset to the data which is already there (the default, 'fail', raises an error).
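A minimal sketch of the if_exists behaviour, using an in-memory SQLite engine and a throwaway demo table so it runs without a MySQL server:

```python
import pandas as pd
from sqlalchemy import create_engine

# in-memory SQLite stands in for the MySQL engine used above
eng = create_engine('sqlite://')
df = pd.DataFrame({'a': [1, 2, 3]})

df.to_sql('demo', con=eng, index=False)                      # create the table
df.to_sql('demo', con=eng, index=False, if_exists='append')  # add the rows again
print(pd.read_sql('SELECT COUNT(*) AS n FROM demo', con=eng)['n'][0])  # 6

df.to_sql('demo', con=eng, index=False, if_exists='replace')  # overwrite the table
print(pd.read_sql('SELECT COUNT(*) AS n FROM demo', con=eng)['n'][0])  # 3
```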
# confirm that the table is in the database:
eng.table_names()
['example', 'iris_table', 'iris_table_demo', 'tutorials_tbl']
This shows that we have a new table in the database demodb called ‘iris_table_demo’.
Let’s look at the table iris_table and print the records from that table.
query = "SELECT * FROM iris_table"
cursor.execute(query)
records = cursor.fetchall()
print(records)

((None, 5.1, 3.5, 1.4, 0.2), (None, 4.9, 3.0, 1.4, 0.2), (None, 4.7, 3.2, 1.3, 0.2),
 (None, 4.6, 3.1, 1.5, 0.2), (None, 5.0, 3.6, 1.4, 0.2), (None, 5.4, 3.9, 1.7, 0.4),
 ...
 (None, 6.5, 3.0, 5.2, 2.0), (None, 6.2, 3.4, 5.4, 2.3), (None, 5.9, 3.0, 5.1, 1.8))

(output truncated - fetchall() returns all 150 rows as a tuple of tuples; the None at the start of each row is the table's index column)
However, it is often more convenient to read the table straight into a pandas dataframe:
df = pd.read_sql('SELECT * FROM iris_table', con=connection)
print(df)
    index ('sepal length (cm)',) ('sepal width (cm)',)  \
0    None                    5.1                   3.5
1    None                    4.9                   3.0
2    None                    4.7                   3.2
3    None                    4.6                   3.1
4    None                    5.0                   3.6
..    ...                    ...                   ...
145  None                    6.7                   3.0
146  None                    6.3                   2.5
147  None                    6.5                   3.0
148  None                    6.2                   3.4
149  None                    5.9                   3.0

    ('petal length (cm)',) ('petal width (cm)',)
0                      1.4                   0.2
1                      1.4                   0.2
2                      1.3                   0.2
3                      1.5                   0.2
4                      1.4                   0.2
..                     ...                   ...
145                    5.2                   2.3
146                    5.0                   1.9
147                    5.2                   2.0
148                    5.4                   2.3
149                    5.1                   1.8

[150 rows x 5 columns]
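The ('sepal length (cm)',) headers suggest that iris_table was written from a dataframe whose column labels were one-element tuples (e.g. from wrapping the column list as columns=[iris.feature_names]), which to_sql stored as strings. A sketch of cleaning such headers after reading, using a hypothetical single-column dataframe:

```python
import pandas as pd
from ast import literal_eval

# hypothetical dataframe whose header is a stringified one-element tuple,
# as read back from a table written with tuple column labels
df = pd.DataFrame({"('sepal length (cm)',)": [5.1, 4.9]})

# unwrap "('sepal length (cm)',)" back to 'sepal length (cm)'
df.columns = [literal_eval(c)[0] if c.startswith('(') else c
              for c in df.columns]
print(df.columns.tolist())  # ['sepal length (cm)']
```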
List of useful MySQL commands
http://g2pc1.bu.edu/~qzpeng/manual/MySQL%20Commands.htm
References
https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html?r=7078
https://docs.openstack.org/heat/train/developing_guides/architecture.html
https://docs.openstack.org/heat/train/template_guide/openstack.html
https://docs.openstack.org//heat/latest/doc-heat.pdf
https://docs.openstack.org/heat/train/template_guide/hot_spec.html#hot-spec
https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
https://ibm-blue-box-help.github.io/help-documentation/heat/autoscaling-with-heat/
https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
https://docs.openstack.org/heat/rocky/template_guide/software_deployment.html
https://gist.github.com/Mins/4602864
https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::ServerGroup
https://docs.syseleven.de/syseleven-stack/en/tutorials/affinity
https://docs.openstack.org/mitaka/config-reference/compute/scheduler.html#servergroupaffinityfilter
https://linux.die.net/man/1/expect
https://overiq.com/sqlalchemy-101/installing-sqlalchemy-and-connecting-to-database/
https://pandas.pydata.org/pandas-docs/stable/reference/index.html
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
https://towardsdatascience.com/sqlalchemy-python-tutorial-79a577141a91
https://linuxize.com/post/show-tables-in-mysql-database/
https://pynative.com/python-mysql-select-query-to-fetch-data/