“Configure Openstack – Part 8/8”

OPENSTACK 

We’re here:

1 STEP – TRY AN SSH CONNECTION TO THE NEW INSTANCE

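The connection itself looks like this; a minimal sketch, where the key file name and the floating IP are placeholders from our lab, not fixed values:

```shell
# Connect to the instance using the key pair created earlier.
# "cloudkey.pem" and 10.20.81.100 are placeholders: use your own
# key file and the floating IP assigned to your instance.
chmod 600 cloudkey.pem
ssh -i cloudkey.pem ubuntu@10.20.81.100
```

Ubuntu cloud images accept the default user `ubuntu` with key-based login only, which is why the key pair and security group rule from part 7 are required.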

Congratulations! You have now built and successfully deployed a new cloud instance running on OpenStack, taking full advantage of both Juju and MAAS.


“cya to the next 1…. Njoy !” bye dakj

Disclaimer: All the tutorials included on this site are performed in a lab environment to simulate a real world production scenario. As everything is done to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.

“We learn from our mistakes”

“Configure Openstack – Part 7/8”

OPENSTACK 

We’re here:

1 STEP – ADD A FLOATING IP

eee

2 STEP – CREATE A NEW KEY PAIR


save it on our desktop

3 STEP – ADD OUR SSH PUBLIC KEY

In the guide about the installation of MAAS we created an SSH key to use for our nodes. The same key must be imported into OpenStack; to do that, run this on the MAAS server:

$: cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDLGUFa2BixCHivURlkn2eryb3LOIwSz9l..
..
..

and import it
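The same import can be done with the OpenStack CLI instead of the dashboard. A sketch, assuming sourced credentials; the key name "maas-key" is our own choice, and here we generate a throwaway key so the first commands run anywhere (in the lab you would reuse ~/.ssh/id_rsa.pub from the MAAS server):

```shell
# Generate a throwaway RSA key pair to illustrate
# (in the lab, reuse the existing ~/.ssh/id_rsa.pub from the MAAS guide)
ssh-keygen -t rsa -b 2048 -N "" -f ./maaskey -q
# A public key file starts with its key type
head -c 7 ./maaskey.pub
# Import the public key into OpenStack (requires sourced credentials;
# the key name "maas-key" is hypothetical):
# openstack keypair create --public-key ./maaskey.pub maas-key
```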

4 STEP – CREATE A NEW SECURITY GROUP


edit the new security group and add a rule for SSH connections
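From the CLI the same group and rule can be created like this; a sketch, where the group name "ssh-sg" is our own placeholder:

```shell
# Create a security group and allow inbound SSH from anywhere
openstack security group create ssh-sg --description "allow SSH"
openstack security group rule create --proto tcp --dst-port 22 \
  --remote-ip 0.0.0.0/0 ssh-sg
```

Opening port 22 to 0.0.0.0/0 is fine for a lab; in production you would narrow `--remote-ip` to your management network.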

5 STEP – CREATE A CLOUD INSTANCE


now launch the new instance
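The dashboard launch corresponds to a single CLI call; a hedged sketch, where every name (flavor, image, network, key pair, security group, instance) is a placeholder standing in for whatever was created in the earlier steps:

```shell
# Boot the instance; all names below are placeholders from our lab,
# matching the resources created in the previous steps
openstack server create \
  --flavor m1.small \
  --image bionic-lxd \
  --network private_net \
  --key-name maas-key \
  --security-group ssh-sg \
  u1804-instance
```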

6 STEP – ASSIGN A FLOATING IP TO INSTANCE

All that’s left to do is assign a floating IP to the new server and connect with SSH.
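On the CLI that final step looks like this; a sketch, where "ext_net" is the external network from part 5 and 10.20.81.100 stands in for whatever address the first command actually allocates:

```shell
# Allocate a floating IP from the external network, then attach it
# (ext_net, the instance name and the IP are placeholders)
openstack floating ip create ext_net
openstack server add floating ip u1804-instance 10.20.81.100
```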



“Configure Openstack – Part 6/8”

OPENSTACK 

We’re here:

1 STEP – CREATE FLAVOURS


we’ve created these flavours
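Flavours can also be defined from the CLI; a sketch with illustrative sizes, not necessarily the exact values shown in our screenshots:

```shell
# Define a flavour: 1 vCPU, 2 GB RAM, 20 GB root disk
# (name and sizes are illustrative)
openstack flavor create --vcpus 1 --ram 2048 --disk 20 m1.small
```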

2 STEP – CHANGE USER

Change from admin to user

3 STEP – DEFINE A VIRTUAL PRIVATE NETWORK


4 STEP – CREATE A VIRTUAL ROUTER


then edit it and add our virtual private network
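Steps 3 and 4 map to a few CLI calls; a sketch, where the names and the CIDR are our own placeholders and "ext_net" is the external network defined in part 5:

```shell
# Private network and subnet (names and CIDR are placeholders)
openstack network create private_net
openstack subnet create --network private_net \
  --subnet-range 192.168.10.0/24 private_subnet
# Virtual router: gateway on the external network,
# interface on the private subnet
openstack router create private_router
openstack router set --external-gateway ext_net private_router
openstack router add subnet private_router private_subnet
```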



“Configure Openstack – Part 5/8”

OPENSTACK 

We’re here:

1 STEP – CONFIGURE OPENSTACK

For the login, set the password on Keystone via the Juju GUI:

then commit the change.

The URL will be http://<IP ADDRESS>/horizon. When you enter this into your browser you can login with:

Domain: admin_domain
User Name: admin
Password: "r12k@rd0"

and here it is

2 STEP – DEFINE AN EXTERNAL NETWORK

As the admin user we create a public network

this is the result
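The same external network can be created from the CLI; a sketch, where the physical network label, gateway and allocation pool are assumptions chosen to match our 10.20.81.0/24 management network:

```shell
# External (provider) network plus its subnet; label, gateway and
# allocation pool are assumptions for our lab network
openstack network create --external --share \
  --provider-network-type flat \
  --provider-physical-network physnet1 ext_net
openstack subnet create --network ext_net --no-dhcp \
  --subnet-range 10.20.81.0/24 --gateway 10.20.81.254 \
  --allocation-pool start=10.20.81.100,end=10.20.81.200 ext_subnet
```

DHCP is disabled on purpose: MAAS already manages addressing on this segment, and the allocation pool reserves a range for floating IPs only.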

3 STEP – ADD UBUNTU CLOUD IMAGE

Canonical’s Ubuntu cloud images can be found here; for our lab we’ve used this release: bionic-server-cloudimg-amd64-lxd.tar.xz. At this point we can add it to our OpenStack

at the end
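Uploading the image from the CLI looks like this; a sketch, where the image name is our own and the format flags are an assumption that depends on the hypervisor in use:

```shell
# Upload the Ubuntu cloud image to Glance; the image name is our
# choice, and the disk/container formats are assumptions that must
# match the hypervisor configuration
openstack image create "bionic-lxd" \
  --file bionic-server-cloudimg-amd64-lxd.tar.xz \
  --disk-format raw --container-format bare --public
```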

4 STEP – CREATE A DOMAIN


after creating the new domain, click on “Set domain context”


5 STEP – CREATE A PROJECT


6 STEP – CREATE A NEW MEMBER


assign the new user to the project “u1804Pro” with the role “Member”
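Steps 5 and 6 can also be scripted; a sketch, where the user and domain names are placeholders (only the project name "u1804Pro" comes from the post):

```shell
# Create the user and grant it the Member role on the project
# (user name and domain are placeholders; project from the post)
openstack user create --domain u1804Dom --password-prompt u1804user
openstack role add --project u1804Pro --user u1804user Member
```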

7 STEP – CREATE A GROUP FOR THE NEW MEMBER


then add our member to the new group

8 STEP – MANAGE MEMBERS ON NEW PROJECT


9 STEP – MANAGE MEMBERS ON NEW DOMAIN



“Install Openstack – Part 4/8”

OPENSTACK 

For more information about Openstack we can view this link:

Openstack

or we can view this video

Each node in our lab has:

  • Two disks (identified by /dev/vda and /dev/vdb);
  • Two cabled network ports on eno2 and eno3;

1 STEP – PREPARE THE 4 IBM SERVERS x3650 FOR OPENSTACK

Before continuing with OpenStack we need to prepare our environment, performing on each of the 4 nodes the same procedure shown here:

“Add IBM Server x3650 as Node on Maas – part 2/4”

we start from this situation, where all nodes are in the Ready state

and their interfaces are configured like this

2 STEP – DEPLOY OPENSTACK VIA JUJU

On the MAAS server, run the following command to create a model on Juju for OpenStack

$: juju add-model openstack 
Added 'openstack' model with credential 'richardsith' for user 'admin'

then run the command to deploy the bundle

 
$: juju deploy cs:bundle/openstack-base-58
Located bundle "cs:bundle/openstack-base-58"
Resolving charm: cs:ceph-mon-31
Resolving charm: cs:ceph-osd-273
Resolving charm: cs:ceph-radosgw-262
Resolving charm: cs:cinder-276
Resolving charm: cs:cinder-ceph-238
Resolving charm: cs:glance-271
Resolving charm: cs:keystone-288
Resolving charm: cs:percona-cluster-272
Resolving charm: cs:neutron-api-266
Resolving charm: cs:neutron-gateway-256
Resolving charm: cs:neutron-openvswitch-255
Resolving charm: cs:nova-cloud-controller-316
Resolving charm: cs:nova-compute-290
Resolving charm: cs:ntp-31
Resolving charm: cs:openstack-dashboard-271
Resolving charm: cs:rabbitmq-server-82
Executing changes:
..
..
..
..
..
Deploy of bundle completed.

on Juju we’ll see this

during the deployment our nodes will boot

to follow the whole procedure to the end and confirm that OpenStack is deployed, use this command:

$: juju status
Model      Controller       Cloud/Region  Version  SLA          Timestamp
openstack  maas-controller  maas-cloud    2.5.0    unsupported  16:15:48Z

App                    Version       Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               13.2.1+dfsg1  active      3  ceph-mon               jujucharms   31  ubuntu
ceph-osd               13.2.1+dfsg1  active      3  ceph-osd               jujucharms  273  ubuntu
ceph-radosgw           13.2.1+dfsg1  active      1  ceph-radosgw           jujucharms  262  ubuntu
cinder                 13.0.2        active      1  cinder                 jujucharms  276  ubuntu
cinder-ceph            13.0.2        active      1  cinder-ceph            jujucharms  238  ubuntu
glance                 17.0.0        active      1  glance                 jujucharms  271  ubuntu
keystone               14.0.1        active      1  keystone               jujucharms  288  ubuntu
mysql                  5.7.20-29.24  active      1  percona-cluster        jujucharms  272  ubuntu
neutron-api            13.0.2        active      1  neutron-api            jujucharms  266  ubuntu
neutron-gateway        13.0.2        active      1  neutron-gateway        jujucharms  256  ubuntu
neutron-openvswitch    13.0.2        active      3  neutron-openvswitch    jujucharms  255  ubuntu
nova-cloud-controller  18.0.3        active      1  nova-cloud-controller  jujucharms  316  ubuntu
nova-compute           18.0.3        active      3  nova-compute           jujucharms  290  ubuntu
ntp                    3.2           active      4  ntp                    jujucharms   31  ubuntu
openstack-dashboard    14.0.1        active      1  openstack-dashboard    jujucharms  271  ubuntu
rabbitmq-server        3.6.10        active      1  rabbitmq-server        jujucharms   82  ubuntu

Unit                      Workload  Agent      Machine  Public address  Ports                       Message
ceph-mon/0                active    idle       1/lxd/0  10.20.81.5                                  Unit is ready and clustered
ceph-mon/1                active    idle       2/lxd/0  10.20.81.8                                  Unit is ready and clustered
ceph-mon/2*               active    idle       3/lxd/0  10.20.81.3                                  Unit is ready and clustered
ceph-osd/0*               active    idle       1        10.20.81.23                                 Unit is ready (1 OSD)
ceph-osd/1                active    idle       2        10.20.81.22                                 Unit is ready (1 OSD)
ceph-osd/2                active    idle       3        10.20.81.24                                 Unit is ready (1 OSD)
ceph-radosgw/0*           active    idle       0/lxd/0  10.20.81.18     80/tcp                      Unit is ready
cinder/0*                 active    idle       1/lxd/1  10.20.81.16     8776/tcp                    Unit is ready
  cinder-ceph/0*          active    idle                10.20.81.16                                 Unit is ready
glance/0*                 active    idle       2/lxd/1  10.20.81.7      9292/tcp                    Unit is ready
keystone/0*               active    idle       3/lxd/1  10.20.81.4      5000/tcp                    Unit is ready
mysql/0*                  active    idle       0/lxd/1  10.20.81.17     3306/tcp                    Unit is ready
neutron-api/0*            active    idle       1/lxd/2  10.20.81.15     9696/tcp                    Unit is ready
neutron-gateway/0*        active    idle       0        10.20.81.21                                 Unit is ready
  ntp/0*                  active    idle                10.20.81.21     123/udp                     chrony: Ready
nova-cloud-controller/0*  active    idle       2/lxd/2  10.20.81.6      8774/tcp,8775/tcp,8778/tcp  Unit is ready
nova-compute/0*           active    executing  1        10.20.81.23                                 Unit is ready
  neutron-openvswitch/1   active    idle                10.20.81.23                                 Unit is ready
  ntp/2                   active    idle                10.20.81.23     123/udp                     chrony: Ready
nova-compute/1            active    executing  2        10.20.81.22                                 Unit is ready
  neutron-openvswitch/2   active    idle                10.20.81.22                                 Unit is ready
  ntp/3                   active    idle                10.20.81.22     123/udp                     chrony: Ready
nova-compute/2            active    executing  3        10.20.81.24                                 Unit is ready
  neutron-openvswitch/0*  active    idle                10.20.81.24                                 Unit is ready
  ntp/1                   active    idle                10.20.81.24     123/udp                     chrony: Ready
openstack-dashboard/0*    active    idle       3/lxd/2  10.20.81.20     80/tcp,443/tcp              Unit is ready
rabbitmq-server/0*        active    idle       0/lxd/2  10.20.81.19     5672/tcp                    Unit is ready
..
..
..
..
..
Machine  State    DNS          Inst id              Series  AZ  Message
0        started  10.20.81.21  6nnhrb               bionic      Deployed
0/lxd/0  started  10.20.81.18  juju-8002b3-0-lxd-0  bionic      Container started
0/lxd/1  started  10.20.81.17  juju-8002b3-0-lxd-1  bionic      Container started
0/lxd/2  started  10.20.81.19  juju-8002b3-0-lxd-2  bionic      Container started
1        started  10.20.81.23  r8xw7m               bionic      Deployed
1/lxd/0  started  10.20.81.5   juju-8002b3-1-lxd-0  bionic      Container started
1/lxd/1  started  10.20.81.16  juju-8002b3-1-lxd-1  bionic      Container started
1/lxd/2  started  10.20.81.15  juju-8002b3-1-lxd-2  bionic      Container started
2        started  10.20.81.22  7xxmyw               bionic      Deployed
2/lxd/0  started  10.20.81.8   juju-8002b3-2-lxd-0  bionic      Container started
2/lxd/1  started  10.20.81.7   juju-8002b3-2-lxd-1  bionic      Container started
2/lxd/2  started  10.20.81.6   juju-8002b3-2-lxd-2  bionic      Container started
3        started  10.20.81.24  fk7ysg               bionic      Deployed
3/lxd/0  started  10.20.81.3   juju-8002b3-3-lxd-0  bionic      Container started
3/lxd/1  started  10.20.81.4   juju-8002b3-3-lxd-1  bionic      Container started
3/lxd/2  started  10.20.81.20  juju-8002b3-3-lxd-2  bionic      Container started

To locate the dashboard’s IP address in the status output, look at this line:

openstack-dashboard/0*    active    idle       3/lxd/2  10.20.81.20     80/tcp,443/tcp              Unit is ready 

or, a quicker way to get the IP address for the dashboard is the following command:

$: juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' 
10.20.81.4
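The awk step simply prints the second field of the matching line. We can verify the parsing locally on a saved fragment of the YAML output (the fragment below is a hand-written sample, not live status):

```shell
# Save a sample fragment of `juju status --format=yaml` output
cat > /tmp/status-sample.yaml <<'EOF'
    units:
      openstack-dashboard/0:
        public-address: 10.20.81.20
EOF
# Same extraction as in the one-liner above, applied to the sample
awk '/public-address/ {print $2}' /tmp/status-sample.yaml
```

Grepping YAML this way is fragile if several units match; `juju status --format=json` plus a JSON parser is more robust when available.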


“Install Juju – part 3/8”

JUJU

For more information about Juju we can view this link:

Juju

or we can view this video

1 STEP – ADD JUJU STABLE PPA ON MAAS SERVER

To upgrade Juju we need the latest stable release available on our MAAS server; the commands to use are the following:

$: sudo add-apt-repository -yu ppa:juju/stable 
$: sudo apt update
$: sudo apt-get dist-upgrade

2 STEP – INSTALL JUJU

Run this command

$: sudo snap install juju --classic

then this command to upgrade Juju

$: sudo snap refresh juju

3 STEP – ADD A CLOUD TO JUJU

Let’s begin with its configuration. We can do that in two ways, using either interactive or manual mode. We’ll see both cases:

Interactive Mode

Still on our MAAS server, run the following command:

$: juju add-cloud
Since Juju 2 is being run for the first time, downloading latest cloud information.
Fetching latest public cloud list...
Your list of public clouds is up to date, see `juju clouds`.
Cloud Types
  maas
  manual
  openstack
  oracle
  vsphere
Select cloud type: maas
Enter a name for your maas cloud: maas-cloud
Enter the API endpoint url: http://10.20.81.1:5240/MAAS
Cloud "maas-cloud" successfully added
You may bootstrap with 'juju bootstrap maas-cloud'

Now confirm the successful addition of the cloud:

$: juju clouds 
Cloud        Regions  Default          Type        Description
aws               15  us-east-1        ec2         Amazon Web Services
aws-china          1  cn-north-1       ec2         Amazon China
aws-gov            1  us-gov-west-1    ec2         Amazon (USA Government)
azure             26  centralus        azure       Microsoft Azure
azure-china        2  chinaeast        azure       Microsoft Azure China
cloudsigma         5  hnl              cloudsigma  CloudSigma Cloud
google            13  us-east1         gce         Google Cloud Platform
joyent             6  eu-ams-1         joyent      Joyent Cloud
oracle             5  uscom-central-1  oracle      Oracle Cloud
rackspace          6  dfw              rackspace   Rackspace Cloud
localhost          1  localhost        lxd         LXD Container Hypervisor
maas-cloud         0                   maas        Metal As A Service

Manual Mode

$: nano maas-cloud.yaml

and add the following lines:

clouds:
   maas-cloud:
      type: maas
      auth-types: [oauth1]
      endpoint: http://10.20.81.1:5240/MAAS

then

$: juju add-cloud maas-cloud maas-cloud.yaml 
Since Juju 2 is being run for the first time, downloading latest cloud information.
Fetching latest public cloud list...
Your list of public clouds is up to date, see `juju clouds`

then

$: juju update-clouds

4 STEP – ADD CLOUD CREDENTIALS

Before doing that we need to copy the maas-oauth key from here.

In order to access our cloud, Juju needs to know how to authenticate itself; to provide the credential, use the following command:

$: juju add-credential maas-cloud 
Enter credential name: richardsith 
Using auth-type "oauth1".
Enter maas-oauth: (paste the MAAS Keys copied)
Credential "richardsith" added locally for cloud "maas-cloud".

to list our credentials use:

$: juju list-credentials 
Cloud       Credentials
maas-cloud  richardsith

5 STEP – INSTALL JUJU CONTROLLER ON IBM SERVER X3650 NODE

Now we can create a Juju controller with the bootstrap command:

$: juju bootstrap maas-cloud maas-cloud-controller --to juju.maas --debug

while, in case we’ve defined a tag for that node, we can use the following command:

$: juju bootstrap --constraints tags=juju maas-cloud maas-controller --debug 

At the end of the whole bootstrap procedure, Juju will deploy the Juju GUI on that node and in the shell we’ll see something like this:

.......
.......
00:23:05 DEBUG juju.juju api.go:263 API hostnames unchanged - not resolving
00:23:05 INFO  cmd cmd.go:129 Bootstrap complete, "testmaas-controller" controller now available.
00:23:05 INFO  cmd cmd.go:129 Controller machines are in the "controller" model.
00:23:05 INFO  cmd cmd.go:129 Initial model "default" added.
00:23:05 INFO  cmd supercommand.go:465 command finished

on MAAS our node will begin the commissioning task

At the end it will appear like this

The Juju controller was called ‘maas-cloud-controller’; run this command to check it:

$: juju list-controllers 
Use --refresh flag with this command to see the latest information.
Controller            Model    User   Access     Cloud/Region  Models  Machines    HA  Version
maas-cloud-controller*  default  admin  superuser  maas-cloud       2         1  none  2.0.2 

6 STEP – INSTALL JUJU GUI

Then, as a last step, to reach the Juju GUI run this command

$: juju gui
GUI 2.13.2 for model "admin/default" is enabled at:
  https://10.20.81.38:17070/gui/u/admin/default
Your login credential is:
username: admin
password: a2df893dd79ece549647652fb76d85bd

using that address with those credentials we can view the Juju GUI

after login we’ll see the default controller

to show the credentials again, run:

$: juju gui --show-credentials  
Opening the Juju GUI in your browser.
Couldn't find a suitable web browser!
Set the BROWSER environment variable to your desired browser.
If it does not open, open this URL: https://10.20.81.38:17070/gui/6ac598d4-0f9d-47e9-870a-53854b2f9b6a/
Username: admin
Password: 11454d1bfadcb555ac9ff8b42e083fbd


“Add IBM Server x3650 on Maas – part 2/8”

IBM SERVER X3650 M4

The starting point of our lab is the following:

1 STEP – CONFIGURE IMM ON IBM SERVER X3650 M4

With these few steps we can activate the IMM (Integrated Management Module) on our IBM Server x3650 M4 and allow the MAAS server to manage the node via its BMC.
IPMI is a protocol for talking to BMCs, and MAAS uses it to run power tasks on the node. The first thing to do is enter the BIOS and, as shown here, set the IMM parameters. For our lab the management network is 10.20.81.0/24
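Once the IMM address is set, we can check from the MAAS server that the BMC actually answers over IPMI; a sketch, where the address and credentials are placeholders, not the lab’s real values:

```shell
# Query the node's power state through its BMC
# (requires the ipmitool package; address, user and password
# below are placeholders)
ipmitool -I lanplus -H 10.20.81.50 -U USERID -P PASSW0RD power status
```

If this returns a power state, MAAS will be able to use the same BMC credentials for enlistment and commissioning.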

2 STEP – SET HARD DISK IN RAID 0

Now that the IMM has been set correctly on our node, let’s see how to set one or two HDDs in RAID 0 for our lab. For Juju we only need 1 HDD, and the procedure is the following:

Save the configuration from here and then complete this how-to by booting the node. If we need two or more independent hard disks, repeat the same steps for each additional disk.

3 STEP – BOOT PXE

After that, go back to here

then select PXE network boot and our node will boot

At the end we’ll see our node in MAAS with its status set to Enlistment.

The second part is done; see you in the next one.
