OpenStack Supported Versions

Cloudbreak was tested against the following versions of Red Hat Distribution of OpenStack (RDO):

Cloudbreak requires that the standard components are installed and configured on OpenStack:

OpenStack Images

We provide pre-built cloud images for OpenStack: one with the Cloudbreak Deployer pre-installed and one with Cloudbreak pre-installed. The following steps will guide you through launching the images and performing the required configuration.

Alternatively, instead of using the pre-built cloud image, you can install Cloudbreak Deployer on your own VM. See install the Cloudbreak Deployer for more information.

Please make sure you have opened the following ports on your security group:

OpenStack Image Details

Cloudbreak Deployer image

Cloudbreak image

Import the image into your OpenStack

Cloudbreak Deployer import

export OS_IMAGE_NAME=<add_a_name_to_your_new_image>
export OS_USERNAME=<your_os_user_name>
export OS_AUTH_URL=<http://.../v2.0>
export OS_TENANT_NAME=<your_os_tenant_name>

Import the new image into your OpenStack:

glance image-create --name "$OS_IMAGE_NAME" --file "$CBD_LATEST_IMAGE" --disk-format qcow2 --container-format bare
--progress

Minimum and Recommended VM requirements: 8GB RAM, 10GB disk, 2 cores


Cloudbreak import

export CB_LATEST_IMAGE_NAME=<file_name_of_the_above_cloudbreak_image>
export OS_USERNAME=<your_os_user_name>
export OS_AUTH_URL=<http://.../v2.0>
export OS_TENANT_NAME=<your_os_tenant_name>

Import the new image into your OpenStack:

glance image-create --name "$CB_LATEST_IMAGE_NAME" --file "$CB_LATEST_IMAGE" --disk-format qcow2
--container-format bare --progress
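
To verify that both images were imported successfully, you can list the images in your project with the standard glance CLI:

glance image-list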

OpenStack Setup

Before configuring Cloudbreak Deployer, you should know that:

Set up Cloudbreak Deployer

You should have already installed the Cloudbreak Deployer either by using the OpenStack Cloud Images or by installing the Cloudbreak Deployer manually on your own VM.

If you have your own installed VM, check the Initialize your Profile section here before starting the provisioning.

You can connect to the previously created cbd VM.

To open the cloudbreak-deployment directory, run:

cd /var/lib/cloudbreak-deployment/

This directory contains configuration files and the supporting binaries for Cloudbreak Deployer.

Initialize Your Profile

First, initialize cbd by creating a Profile file with the following content:

export UAA_DEFAULT_SECRET='[SECRET]'
export UAA_DEFAULT_USER_PW='[PASSWORD]'

By default, the cbd tool tries to guess the PUBLIC_IP so it can bind the Cloudbreak UI to it. If cbd cannot determine the IP address during initialization, set the appropriate value in your Profile as well.
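
As a minimal sketch, assuming you are in the cloudbreak-deployment directory, the Profile could be created like this (replace the placeholder secret, password and IP address with your own values):

cat > Profile <<EOF
export UAA_DEFAULT_SECRET='[SECRET]'
export UAA_DEFAULT_USER_PW='[PASSWORD]'
export PUBLIC_IP=<public_ip_of_the_vm>
EOF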

OpenStack-specific Configuration

Make sure that the VM image used by Cloudbreak is imported on your OpenStack.

Using Self-signed Certificates

If your OpenStack is secured with a self-signed certificate, you need to import that certificate into Cloudbreak, or else Cloudbreak won't be able to communicate with your OpenStack. To import the certificate, place the certificate file in the generated certs directory /certs/trusted/. The trusted directory does not exist by default, so you need to create it. Cloudbreak will automatically pick up these certificates and import them into its truststore upon start.
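
For example, assuming the certs directory is generated under the deployment directory and your OpenStack certificate is available as openstack.crt (a hypothetical file name):

cd /var/lib/cloudbreak-deployment
# the trusted directory does not exist by default
mkdir -p certs/trusted
cp openstack.crt certs/trusted/
# restart so the certificate is imported on start
cbd restart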

Availability Zones and Region config

By default Cloudbreak uses RegionOne region with nova availability zone, but OpenStack supports multiple regions and multiple availability zones. You can customize Cloudbreak deployment and enable multiple regions and availability zones by creating an openstack-zone.json under the etc directory of Cloudbreak deployment (e.g. /var/lib/cloudbreak-deployment/etc/openstack-zone.json). You can find an example of openstack-zone.json containing two regions and four availability zones below:

{
  "items": [
    {
      "name": "MyRegionOne",
      "zones": [ "az1", "az2", "az3"]
    },
    {
      "name": "MyRegionTwo",
      "zones": [ "myaz"]
    }
  ]
}

If the etc directory does not exist under the Cloudbreak deployment directory, create it. A restart is needed for the changes made in the openstack-zone.json file to take effect.
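
For example, assuming the default deployment directory:

mkdir -p /var/lib/cloudbreak-deployment/etc
# create the region/zone definition shown above, then restart to apply it
vi /var/lib/cloudbreak-deployment/etc/openstack-zone.json
cbd restart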

Start Cloudbreak Deployer

To start the Cloudbreak application, use the following command:

cbd start

This will start all the Docker containers and initialize the application.

The first time you start the Cloudbreak app, the process will take longer than usual due to the download of all the necessary Docker images.

The cbd start command includes the cbd generate command which applies the following steps:

Validate that Cloudbreak Deployer Has Started

After the cbd start command finishes, check the following:

   cbd doctor

If you need to run cbd update, refer to Cloudbreak Deployer Update.

   cbd logs cloudbreak

You should see a line like this: Started CloudbreakApplication in 36.823 seconds. Cloudbreak normally takes less than a minute to start.

Provisioning Prerequisites

Generate a New SSH Key

All the instances created by Cloudbreak are configured to allow key-based SSH, so you'll need to provide an SSH public key that can be used later to SSH onto the instances in the clusters you'll create with Cloudbreak. You can use one of your existing keys or you can generate a new one.

To generate a new SSH keypair:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
# Creates a new ssh key, using the provided email as a label
# Generating public/private rsa key pair.
# Enter file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
You'll be asked to enter a passphrase, but you can leave it empty.

# Enter passphrase (empty for no passphrase): [Type a passphrase]
# Enter same passphrase again: [Type passphrase again]

After you enter a passphrase the keypair is generated. The output should look something like below.

# Your identification has been saved in /Users/you/.ssh/id_rsa.
# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:sd:17:sd:7d:sd:68:9d:sd:a2:sd your_email@example.com

Later you'll need to pass the .pub file's contents to Cloudbreak and use the private part to SSH to the instances.
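
To display the public key so you can copy it, assuming the default file location:

cat ~/.ssh/id_rsa.pub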

Provisioning via Browser

You can log into the Cloudbreak application at https://<Public_IP>/.

The main goal of the Cloudbreak UI is to easily create clusters on your own cloud provider account. This description details the OpenStack setup - if you'd like to use a different cloud provider check out its manual.

This document explains the four steps that need to be followed to create Cloudbreak clusters from the UI:

IMPORTANT Make sure that you have sufficient quota (CPU, network, etc.) for the requested cluster size.

Setting up OpenStack credentials

Cloudbreak works by connecting your OpenStack account through so-called credentials, and then uses these credentials to create resources on your behalf. The credentials can be configured on the manage credentials panel on the Cloudbreak Dashboard.

To create a new OpenStack credential follow these steps:

  1. Select the Keystone Version (for instance, v2)
  2. Fill out the new credential Name
    • Only lowercase alphanumeric characters (min 5, max 100 characters) are allowed
  3. Copy your OpenStack user name to the User field
  4. Copy your OpenStack user password to the Password field
  5. Copy your OpenStack tenant name to the Tenant Name field
  6. Copy your OpenStack identity service (Keystone) endpoint (e.g. http://PUBLIC_IP:5000/v2.0) to the Endpoint field
  7. Copy your SSH public key to the SSH public key field
    • The SSH public key must be in OpenSSH format and its private key can be used later to SSH onto every instance of every cluster you'll create with this credential.
    • The SSH username for the OpenStack instances is cloudbreak.

Any other parameter is optional here. You can read more about Keystone v3 here.

API Facing is the URL perspective in which the API is accessed (public, admin or internal).

Public in account means that all the users belonging to your account will be able to use this credential to create clusters, but cannot delete it.


Infrastructure templates

After your OpenStack account is linked to Cloudbreak you can start creating resource templates that describe your clusters' infrastructure:

When you create one of the above resources, Cloudbreak does not make any requests to OpenStack. Resources are only created on OpenStack after the create cluster button has been pushed. These templates are saved to Cloudbreak's database and can be reused with multiple clusters to describe the infrastructure.

Templates

Templates describe the instances of your cluster - the instance type and the attached volumes. A typical setup is to combine multiple templates in a cluster for the different types of nodes. For example you may want to attach multiple large disks to the datanodes or have memory optimized instances for Spark nodes.

The instance templates can be configured on the manage templates panel on the Cloudbreak Dashboard.

If Public in account is checked all the users belonging to your account will be able to use this resource to create clusters, but cannot delete it.

Networks

Your clusters can be created in their own networks or in one of your already existing ones. If you choose an existing network, it is possible to create a new subnet within that network. The subnet's IP range must be defined in the Subnet (CIDR) field using the general CIDR notation. Here you can read more about OpenStack networking.

Custom OpenStack Network

If you'd like to deploy a cluster to your OpenStack network you'll have to create a new network template on the manage networks panel on the Cloudbreak Dashboard.

"Before launching an instance, you must create the necessary virtual network infrastructure...an instance uses a public provider virtual network that connects to the physical network infrastructure...This network includes a DHCP server that provides IP addresses to instances...The admin or other privileged user must create this network because it connects directly to the physical network infrastructure."

Here you can read more about OpenStack virtual network and public provider network.

You have the following options to create a new network:

Explanation of the parameters:

IMPORTANT Please make sure the subnet defined here doesn't overlap with any of the subnets already deployed in the network, because the validation only happens after the cluster creation starts.

If you use an existing subnet, make sure you have enough room within your network space for the new instances. In this case the provided subnet CIDR will be ignored and a proper CIDR range will be used.

If Public in account is checked all the users belonging to your account will be able to use this network template to create clusters, but cannot delete it.

NOTE The new networks are created on OpenStack only after the cluster provisioning starts with the selected network template.


Security groups

Security group templates are very similar to security groups on OpenStack. They describe the allowed inbound traffic to the instances in the cluster. Currently only one security group template can be selected for a Cloudbreak cluster, and because all the instances have a public IP address, all the instances in the cluster will belong to the same security group. This may change in a later release.

Default Security Group

You can also use the two pre-defined security groups in Cloudbreak.

only-ssh-and-ssl: all ports are locked down except for SSH and the selected Ambari Server HTTPS (you can't access Hadoop services outside of the network):

Custom Security Group

You can define your own security group by adding all the ports, protocols and CIDR ranges you'd like to use. The rules defined here don't need to contain the internal rules; those are automatically added by Cloudbreak to the security group on OpenStack.

Hadoop services:

  • Ambari (8080)
  • Consul (8500)
  • NN (50070)
  • RM Web (8088)
  • RM Scheduler (8030)
  • RM IPC (8050)
  • Job history server (19888)
  • HBase master (60000)
  • HBase master web (60010)
  • HBase RS (16020)
  • HBase RS info (60030)
  • Falcon (15000)
  • Storm (8744)
  • Hive metastore (9083)
  • Hive server (10000)
  • Hive server HTTP (10001)
  • Accumulo master (9999)
  • Accumulo Tserver (9997)
  • Atlas (21000)
  • KNOX (8443)
  • Oozie (11000)
  • Spark HS (18080)
  • NM Web (8042)
  • Zeppelin WebSocket (9996)
  • Zeppelin UI (9995)
  • Kibana (3080)
  • Elasticsearch (9200)

IMPORTANT Ports 443, 9443 and 22 need to be open in every security group, otherwise Cloudbreak won't be able to communicate with the provisioned cluster.

If Public in account is checked all the users belonging to your account will be able to use this security group template to create clusters, but cannot delete it.

NOTE The security groups are created on OpenStack only after the cluster provisioning starts with the selected security group template.


Defining Cluster Services

Blueprints

Blueprints are your declarative definition of a Hadoop cluster. These are the same blueprints that are used by Ambari.

You can use the 3 default blueprints pre-defined in Cloudbreak or you can create your own. Blueprints can be added from a file, from a URL (an example blueprint), or the whole JSON can be written in the JSON text box.

The host groups in the JSON will be mapped to a set of instances when starting the cluster. Besides this, the services and components will also be installed on the corresponding nodes. Blueprints can be modified later from the Ambari UI.

NOTE: It is not necessary to define all the configuration in the blueprint. If a configuration is missing, Ambari will fill that with a default value.

If Public in account is checked all the users belonging to your account will be able to use this blueprint to create clusters, but cannot delete or modify it.


A blueprint can be exported from a running Ambari cluster and reused in Cloudbreak with slight modifications. There is no automatic way to modify an exported blueprint and make it instantly usable in Cloudbreak; the modifications have to be done manually. When the blueprint is exported, some configurations (for example domain names and memory configurations) are hardcoded and won't be applicable to the Cloudbreak cluster.

Cluster deployment

After all the cluster resources are configured you can deploy a new HDP cluster.

Here is a basic flow for cluster creation on Cloudbreak Web UI:


Configure Cluster tab

Setup Network and Security tab

Choose Blueprint tab

Review and Launch tab

Cloudbreak uses OpenStack to create the resources - you can check out the resources created by Cloudbreak on the Instances page of your OpenStack Project.

Besides these you can check the progress on the Cloudbreak Web UI itself if you open the new cluster's Event History.

Advanced options

Ambari Username This user will be used as the admin user in Ambari. You can log in using this username on the Ambari UI.

Ambari Password The password associated with the Ambari username. This password will also be the default password for all required passwords which are not specified in the blueprint, e.g. the Hive DB password.

Connector Variant Cloudbreak provides two implementations for creating OpenStack clusters.

The HEAT variant uses Heat templates to launch a stack, while the NATIVE variant starts the cluster with a sequence of API calls, without Heat, to achieve the same result. Both variants use the same authentication and credential management.

Minimum cluster size The provisioning strategy in case the cloud provider cannot allocate all the requested nodes.

Validate blueprint This is selected by default. Cloudbreak validates the Ambari blueprint in this case.

Custom Image If you enable this, you can override the default image for provisioning.

Shipyard enabled cluster This is selected by default. Cloudbreak will start a Shipyard container which helps you to manage your containers.

Config recommendation strategy Strategy for how configuration recommendations will be applied. Recommended configurations are gathered from the stack advisor's response.

Cluster termination

You can terminate running or stopped clusters with the terminate button in the cluster details.

IMPORTANT Always use Cloudbreak to terminate the cluster. If that fails for some reason, try to delete the OpenStack instances first. Instances are started in an Auto Scaling Group so they may be restarted if you terminate an instance manually!

Sometimes Cloudbreak cannot synchronize its state with the cluster state at the cloud provider and the cluster can't be terminated. In this case the Forced termination option can help to terminate the cluster on the Cloudbreak side. If this happens:

  1. Check the related resources on OpenStack
  2. If needed, manually remove the resources from there


Interactive mode / Cloudbreak Shell

The goal of the Cloudbreak Shell (Cloudbreak CLI) is to provide an interactive command line tool which supports:

Start Cloudbreak Shell

To start the Cloudbreak CLI use the following commands:

   cd cloudbreak-deployment
   cbd start
   cbd util cloudbreak-shell

The very first time it will take a while, because all the necessary Docker images need to be downloaded.

This will launch the Cloudbreak shell inside a Docker container; it is then ready to use.

IMPORTANT You have to copy all the files that you would like to use in the shell into the cbd working directory. For example, if your cbd working directory is ~/cloudbreak-deployment then copy your blueprint JSON, public SSH key file, etc. there. You can refer to these files by their names from the shell.

Autocomplete and Hints

Cloudbreak Shell helps you with hint messages from the very beginning, for example:

cloudbreak-shell>hint
Hint: Add a blueprint with the 'blueprint create' command or select an existing one with 'blueprint select'
cloudbreak-shell>

Beyond this you can use the autocompletion (double-TAB) as well:

cloudbreak-shell>credential create --
credential create --AWS          credential create --AZURE        credential create --EC2          credential create --GCP          credential create --OPENSTACK

Provisioning via CLI

Setting up OpenStack credential

Cloudbreak works by connecting your OpenStack account through so-called credentials, and then uses these credentials to create resources on your behalf. Credentials can be configured with the following command, for example:

credential create --OPENSTACK --name my-os-credential --description "sample description" --userName <OpenStack username> --password <OpenStack password> --tenantName <OpenStack tenant name> --endPoint <OpenStack Identity Service (Keystone) endpoint> --sshKeyString "ssh-rsa AAAAB****etc"

NOTE that Cloudbreak does not set up your cloud user details - it builds on OpenStack's own authentication. You should already have valid OpenStack credentials. You can find further details here.

Alternatives to provide SSH Key:
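
For example, the full example at the end of this document passes a key file instead of an inline key by using the --sshKeyPath option:

credential create --OPENSTACK --name my-os-credential --description "sample description" --userName <OpenStack username> --password <OpenStack password> --tenantName <OpenStack tenant name> --endPoint <OpenStack Identity Service (Keystone) endpoint> --sshKeyPath <path of your public SSH key file>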

Other available option:

--facing URL perspective in which the API is accessing data; allowed values are public, admin and internal.

NOTE If --facing is not specified, the OpenStack default value will be applied.

You can check whether the credential was created successfully:

credential list

You can switch between your existing credentials:

credential select --name my-os-credential

Infrastructure templates

After your OpenStack account is linked to Cloudbreak you can start creating resource templates that describe your clusters' infrastructure:

When you create one of the above resources, Cloudbreak does not make any requests to OpenStack. Resources are only created on OpenStack after the cluster create command has been issued. These templates are saved to Cloudbreak's database and can be reused with multiple clusters to describe the infrastructure.

Templates

Templates describe the instances of your cluster - the instance type and the attached volumes. A typical setup is to combine multiple templates in a cluster for the different types of nodes. For example you may want to attach multiple large disks to the datanodes or have memory optimized instances for Spark nodes.

A template can be used repeatedly to create identical copies of the same stack (or to use as a foundation to start a new stack). Templates can be configured with the following command for example:

template create --OPENSTACK --name my-os-template --description "sample description" --instanceType m1.medium --volumeSize 100 --volumeCount 1

Other available option here is --publicInAccount. If it is true, all the users belonging to your account will be able to use this template to create clusters, but cannot delete it.

You can check whether the template was created successfully:

template list

Networks

Your clusters can be created in their own networks or in one of your already existing ones. If you choose an existing network, it is possible to create a new subnet within that network. The subnet's IP range must be defined in the Subnet (CIDR) field using the general CIDR notation. Here you can read more about OpenStack networking.

Custom OpenStack Network

If you'd like to deploy a cluster to your OpenStack network you'll have to create a new network template.

A network can also be used repeatedly to create identical copies of the same stack (or to use as a foundation to start a new stack).

"Before launching an instance, you must create the necessary virtual network infrastructure...an instance uses a public provider virtual network that connects to the physical network infrastructure...This network includes a DHCP server that provides IP addresses to instances...The admin or other privileged user must create this network because it connects directly to the physical network infrastructure."

Here you can read more about OpenStack virtual network and public provider network.

network create --OPENSTACK --name my-os-network --description openstack-network --publicNetID <id of an OpenStack public network> --subnet 10.0.0.0/16

IMPORTANT

  • In case of an existing subnet all three parameters must be provided; with a new subnet only two are required.
  • Please make sure the subnet defined here doesn't overlap with any of the subnets already deployed in the network, because the validation only happens after the cluster creation starts.
  • If you use an existing subnet, make sure you have enough room within your network space for the new instances. In this case the provided subnet CIDR will be ignored and a proper CIDR range will be used.

NOTE The new networks are created on OpenStack only after the cluster provisioning starts with the selected network template.

Other available options here:

--networkId This must be an ID of an existing OpenStack virtual network.

--routerId Your virtual network router ID (must be provided in case of existing virtual network).

--subnetId Your subnet ID within your virtual network. If the identifier is provided, the Subnet (CIDR) will be ignored. Leave it blank if you'd like to create a new subnet within the virtual network with the provided Subnet (CIDR) range.

--publicInAccount If it is true, all the users belonging to your account will be able to use this template to create clusters, but cannot delete it.
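
For instance, a sketch of reusing an existing virtual network and router while letting Cloudbreak create a new subnet (the IDs are placeholders):

network create --OPENSTACK --name my-existing-os-network --description "existing network" --networkId <existing virtual network id> --routerId <existing router id> --subnet 10.0.1.0/24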

You can check whether the network was created successfully:

network list

Defining Cluster Services

Blueprints

Blueprints are your declarative definition of a Hadoop cluster. These are the same blueprints that are used by Ambari.

You can use the 3 default blueprints pre-defined in Cloudbreak or you can create your own. Blueprints can be added from a file or a URL (an example blueprint).

The host groups in the JSON will be mapped to a set of instances when starting the cluster. Besides this, the services and components will also be installed on the corresponding nodes. Blueprints can be modified later from the Ambari UI.

NOTE: It is not necessary to define all the configuration in the blueprint. If a configuration is missing, Ambari will fill that with a default value.

blueprint create --name my-blueprint --description "sample description" --file <the path of the blueprint>

Other available options:

--url The URL of the blueprint

--publicInAccount If it is true, all the users belonging to your account will be able to use this blueprint to create clusters, but cannot delete it.

You can check whether the blueprint was created successfully:

blueprint list

A blueprint can be exported from a running Ambari cluster and reused in Cloudbreak with slight modifications. There is no automatic way to modify an exported blueprint and make it instantly usable in Cloudbreak; the modifications have to be done manually. When the blueprint is exported, some configurations (for example domain names and memory configurations) are hardcoded and won't be applicable to the Cloudbreak cluster.

Metadata Show

You can check the stack metadata with:

stack metadata --name myosstack --instancegroup master

Other available options:

--id In this case you can select a stack by its id.

--outputType In this case you can modify the output format of the command (RAW or JSON).

Cluster deployment

After all the cluster resources are configured you can deploy a new HDP cluster. The following sub-sections show you a basic flow for cluster creation with Cloudbreak Shell.

Select credential

Select one of your previously created OpenStack credentials:

credential select --name my-os-credential

Select blueprint

Select one of your previously created blueprints which fits your needs:

blueprint select --name multi-node-hdfs-yarn

Configure instance groups

You must configure instance groups before provisioning. An instance group defines a group of nodes with a specified template and security group. Usually we create instance groups for the host groups in the blueprint. Only one host group can be specified for the Ambari server. If you want to install the Ambari server on a separate node, you need to extend your blueprint with a new host group which contains only one service, HDFS_CLIENT, and select this host group for the Ambari server. Note: this host group cannot be scaled, so it is not advised to select a 'slave' host group for this purpose.

instancegroup configure --instanceGroup master --nodecount 1 --templateName my-os-template --securityGroupName all-services-port --ambariServer true
instancegroup configure --instanceGroup slave_1 --nodecount 1 --templateName my-os-template --securityGroupName all-services-port --ambariServer false

Other available option:

--templateId Id of the template

Select network

Select one of your previously created networks which fits your needs, or a default one:

network select --name my-os-network

Create stack / Create cloud infrastructure

A stack is the running cloud infrastructure that is created based on the instance groups configured earlier (credential, instance groups, network, security group). As with the API or the UI, the new cluster will use your templates and launch your cloud stack on OpenStack. Use the following command to create a stack to be used with your Hadoop cluster:

stack create --OPENSTACK --name myosstack --region local

The infrastructure is created asynchronously; the state of the stack can be checked with the stack show command. If it reports AVAILABLE, it means that the virtual machines and the corresponding infrastructure are running at the cloud provider.

Other available option is:

--wait - in this case the create command will return only after the process has finished.
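
For example, to block until the infrastructure is ready (as in the full example at the end of this document):

stack create --OPENSTACK --name myosstack --region local --wait true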

Create a Hadoop cluster / Cloud provisioning

You are almost done! One more command and your Hadoop cluster is starting! Cloud provisioning is done once the cluster is up and running. The new cluster will use your selected blueprint and install your custom Hadoop cluster with the selected components and services.

cluster create --description "my first cluster"

Other available option is --wait - in this case the create command will return only after the process has finished.
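
For example:

cluster create --description "my first cluster" --wait true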

You are done! You have several ways to check the progress during infrastructure creation and provisioning:

For example, with the cluster show command:

         cluster show

Stop Cluster

You can stop your existing stack and its cluster if you want to suspend the work on it.

Select a stack for example with its name:

stack select --name my-stack

Other available option to define a stack is its --id.

You should always stop the cluster first and then the stack, so apply the following commands to stop the previously selected stack:

cluster stop
stack stop

Restart Cluster

Select the stack that you would like to restart; after this you can apply:

stack start

After the stack has successfully restarted, you can restart the related cluster as well:

cluster start

Upscale Cluster

If you need more instances in your infrastructure, you can upscale your selected stack:

stack node --ADD --instanceGroup host_group_slave_1 --adjustment 6

Other available option is --withClusterUpScale - this indicates a cluster upscale after the stack upscale as well. You can also upscale the related cluster separately if you want to:

cluster node --ADD --hostgroup host_group_slave_1 --adjustment 6
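
As a sketch, assuming boolean options take an explicit value the same way --wait and --ambariServer do, the two steps above could be combined by passing --withClusterUpScale to the stack upscale:

stack node --ADD --instanceGroup host_group_slave_1 --adjustment 6 --withClusterUpScale true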

Downscale Cluster

You can also reduce the number of instances in your infrastructure. After you have selected your stack:

cluster node --REMOVE  --hostgroup host_group_slave_1 --adjustment -2

Other available option is --withStackDownScale - this indicates a stack downscale after the cluster downscale as well. You can also downscale the related stack separately if you want to:

stack node --REMOVE  --instanceGroup host_group_slave_1 --adjustment -2

Cluster Termination

You can terminate running or stopped clusters with:

stack delete --name myosstack

Other available option is --wait - in this case the terminate command will return only after the process has finished.

IMPORTANT: Always use Cloudbreak to terminate the cluster. If that fails for some reason, try to delete the OpenStack instances first. Instances are started in an Auto Scaling Group so they may be restarted if you terminate an instance manually!

Sometimes Cloudbreak cannot synchronize its state with the cluster state at the cloud provider and the cluster can't be terminated. In this case the Forced termination option on the Cloudbreak Web UI can help to terminate the cluster on the Cloudbreak side. If this happens:

  1. Check the related resources on OpenStack
  2. If needed, manually remove the resources from there

Silent Mode

With Cloudbreak Shell you can execute script files as well. A script file contains shell commands and can be executed with the script Cloudbreak shell command:

script <your script file>

or with the cbd util cloudbreak-shell-quiet command:

cbd util cloudbreak-shell-quiet < example.sh

IMPORTANT: You have to copy all the files that you would like to use in the shell into the cbd working directory. For example, if your cbd working directory is ~/cloudbreak-deployment then copy your script file there.

Example

The following example creates a Hadoop cluster with the hdp-small-default blueprint on m1.large instances with 2x100GB attached disks on the osnetwork network using the all-services-port security group. You should copy your SSH public key file into your cbd working directory with the name id_rsa.pub and change the <...> parts to your OpenStack credential and network details.

credential create --OPENSTACK --name my-os-credential --description "credential description" --userName <OpenStack username> --password <OpenStack password> --tenantName <OpenStack tenant name> --endPoint <OpenStack Identity Service (Keystone) endpoint> --sshKeyPath <path of your public SSH key file>
credential select --name my-os-credential
template create --OPENSTACK --name ostemplate --description openstack-template --instanceType m1.large --volumeSize 100 --volumeCount 2
blueprint select --name hdp-small-default
instancegroup configure --instanceGroup host_group_master_1 --nodecount 1 --templateName ostemplate --securityGroupName all-services-port --ambariServer true
instancegroup configure --instanceGroup host_group_master_2 --nodecount 1 --templateName ostemplate --securityGroupName all-services-port --ambariServer false
instancegroup configure --instanceGroup host_group_master_3 --nodecount 1 --templateName ostemplate --securityGroupName all-services-port --ambariServer false
instancegroup configure --instanceGroup host_group_client_1  --nodecount 1 --templateName ostemplate --securityGroupName all-services-port --ambariServer false
instancegroup configure --instanceGroup host_group_slave_1 --nodecount 3 --templateName ostemplate --securityGroupName all-services-port --ambariServer false
network create --OPENSTACK --name osnetwork --description openstack-network --publicNetID <id of an OpenStack public network> --subnet 10.0.0.0/16
network select --name osnetwork
stack create --OPENSTACK --name my-first-stack --region local --wait true
cluster create --description "My first cluster" --wait true

Congratulations! Your cluster should now be up and running this way as well. To learn more about Cloudbreak and provisioning, we have some interesting insights for you.
