“Add IBM Server x3650 on Maas – part 2/8”

IBM SERVER X3650 M4

The starting point of our lab is the following:

STEP 1 – CONFIGURE THE IMM ON THE IBM SERVER X3650 M4

With these few steps we can activate the IMM (Integrated Management Module) on our IBM Server x3650 M4 and allow the MAAS server to manage the node via its BMC.
IPMI is a protocol for talking to BMCs, and MAAS uses it to run power-management tasks on the node. The first thing to do is enter the BIOS and, as shown here, set the IMM parameters. For our lab the management network is 10.20.81.0/24.
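Once the IMM has its address, we can check from the MAAS server that the BMC answers over the network. This is only a sketch: the IP below is an example host on our management network, and USERID/PASSW0RD are the IBM factory-default IMM credentials, so replace them with your own.

```shell
# Ask the node's BMC for its power state over IPMI (lanplus interface).
# 10.20.81.10 is an example IMM address; USERID/PASSW0RD are the
# IBM factory defaults - use your real credentials here.
ipmitool -I lanplus -H 10.20.81.10 -U USERID -P PASSW0RD power status

# The same tool can power the node on or off, which is exactly
# what MAAS does behind the scenes:
# ipmitool -I lanplus -H 10.20.81.10 -U USERID -P PASSW0RD power on
```

If this returns the chassis power state, MAAS will be able to manage the node the same way.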

STEP 2 – SET THE HARD DISKS IN RAID 0

Now that the IMM has been configured correctly on our node, let's see how to set one or two HDDs in RAID 0 for our lab. For Juju we only need one HDD, and the procedure is the following:

Save the configuration from here, then complete this how-to by booting the node. If we need two or more independent hard disks, repeat the same steps for the second one.

STEP 3 – PXE BOOT

After that, go back to the boot menu,

then select PXE network and our node will boot.

At the end we'll see our node in MAAS with its status set to Enlisting.
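The enlisted node can also be checked from the MAAS CLI instead of the web UI. A minimal sketch, assuming a CLI profile named admin is already logged in:

```shell
# List all machines known to MAAS and pick out their hostname and
# status; the freshly enlisted node will show up here.
# 'admin' is an example profile name.
maas admin machines read | grep -E '"hostname"|"status_name"'
```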

The second part is done; see you in the next one.

<- part 1/8 . part 3/8 ->

“cya to the next 1…. Njoy !” bye dakj

Disclaimer: All the tutorials included on this site are performed in a lab environment to simulate a real world production scenario. As everything is done to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.

“We learn from our mistakes”

“Create VM via MAAS Pods – PART 6/6”

MAAS

The starting point of our lab is the following:

STEP 1 – ADD A POD DEVICE ON MAAS

Let's go to the Pods tab and add our KVM host from here.

After saving it, the new pod will appear in the list; opening it shows its available resources.

From here we can create our VMs directly by composing them; for example, we've created HubVm01.

Run Compose to complete the task.
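The same composition can be done from the MAAS CLI. A sketch under some assumptions: admin is the CLI profile name, 1 is the pod's id, and the values mirror what we picked in the UI:

```shell
# Compose a VM named HubVm01 on pod 1 with 2 cores, 4 GB of RAM
# and a 30 GB disk. Profile name and pod id are example values.
maas admin pod compose 1 cores=2 memory=4096 storage="default:30" hostname=HubVm01
```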

The node starts. The sixth part is done; see you in the next one.

<- part 5/6 .


“cya to the next 1…. Njoy !”

bye dakj


“Create VM via KVM & Add Host on MAAS – PART 5/6”

MAAS

The starting point of our lab is the following:

STEP 1 – CREATE A VM ON THE HOST SERVER VIA THE KVM CONSOLE

Run the following command:

$: sudo mkdir -p /var/kvm/images 

then

$: sudo virt-install \
--name ubuntu1804 \
--ram 4096 \
--disk path=/var/kvm/images/ubuntu1804.img,size=30 \
--vcpus 2 \
--os-type linux \
--os-variant ubuntu18.04 \
--network bridge=br0 \
--graphics none \
--console pty,target_type=serial \
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args 'console=ttyS0,115200n8 serial'

Let's proceed with the installation of Ubuntu 18.04 and complete all the tasks.

It is very important to scroll all the way down using the arrow keys and to select “OpenSSH Server” with the space bar before continuing.

To list the VMs, use:

$: virsh list
 Id    Name                           State
----------------------------------------------------
 1     ubuntu1804                     running

After the installation finishes, go back to the KVM host and shut down the guest as follows:

$: virsh shutdown ubuntu1804 

If we decide to delete the VM, we must first force it off (if it is still running), then undefine it and remove its disk image:

$: virsh destroy ubuntu1804 
$: virsh undefine ubuntu1804
$: sudo rm /var/kvm/images/ubuntu1804.img

STEP 2 – ADD THE KVM-BACKED NODE ON MAAS

Now it's time to add our KVM node on MAAS. Let's go to the MAAS dashboard -> Machines. From there we can add either a new chassis (useful to import all the VMs created via KVM) or a single new machine. For our lab we have used the chassis option. As the address, use:

qemu+ssh://richardsith@10.20.81.2/system

Save it, and after a few seconds we'll see the VM created previously.
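For the qemu+ssh address to work, MAAS must be able to open an SSH session to the KVM host as that user without a password. A minimal sketch of the key setup, assuming MAAS runs under the maas system user (home directory /var/lib/maas) and reusing our lab's user and address:

```shell
# Create a passphrase-less key for the 'maas' system user...
sudo -u maas ssh-keygen -t rsa -N '' -f /var/lib/maas/.ssh/id_rsa
# ...install it on the KVM host (lab user/IP, adjust to yours)...
sudo -u maas ssh-copy-id richardsith@10.20.81.2
# ...and verify that virsh answers over the connection:
sudo -u maas ssh richardsith@10.20.81.2 'virsh list --all'
```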

This task is done.

STEP 3 – COMMISSION THE NODE

Let's continue with the guide; it's time to run the commissioning of our KVM node and then acquire it.

At the end of this task the node will be in Ready status; then acquire it.
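Commissioning and acquiring can also be driven from the MAAS CLI. A sketch, assuming a profile named admin and using abc123 as a placeholder for the node's system_id (visible in the machine's details page):

```shell
# Commission the machine; abc123 is a placeholder system_id.
maas admin machine commission abc123
# Once it reaches Ready, acquire (allocate) it:
maas admin machines allocate system_id=abc123
```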

STEP 4 – DEPLOY THE NODE

Once our node is in Ready status we can deploy Ubuntu 18.04 LTS Server Edition on it and take full control of it.

At the end, Ubuntu 18.04 LTS Server Edition will be deployed on our KVM node.
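For reference, the deployment can be started from the MAAS CLI too. A sketch with the same assumptions as before (profile admin, placeholder system_id abc123):

```shell
# Deploy Ubuntu 18.04 LTS (bionic) on the acquired machine.
maas admin machine deploy abc123 distro_series=bionic
```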

The fifth part is done; see you in the next one.

<- part 4/6 .


“cya to the next 1…. Njoy !”

bye dakj


“Ubuntu 16.04 LTS LXD Dashboard”

ubuntu-16-04-lts

LXD DASHBOARD

After having seen how to prepare a machine with LXC (Linux Containers), we can try to install its dashboard and see how it works. Let's have a look.

STEP 1 – INSTALL THE LXD DASHBOARD

From a terminal run the following commands:

$: sudo apt-get install lxc debootstrap bridge-utils -y
$: sudo su
$: wget https://lxc-webpanel.github.com/tools/install.sh -O - | bash

and we'll see the installation complete.

Now open the URL http://your_ip_address:5000/ in our browser and use these credentials:

user: admin
password: admin


After the login we land on the dashboard overview.

Creating and configuring a container from here is very easy.

One of the main goals for Ubuntu LTS was to make LXC dead easy to use. To create a basic container from the console and start it, we can use the following commands:

$: sudo apt-get install lxc
$: sudo lxc-create -t ubuntu -n my-container
$: sudo lxc-start -n my-container

Log In

$: sudo lxc-console -n my-container -t 1
root@my-container:#
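To clean up afterwards (not covered above), the container can be stopped and then deleted together with its root filesystem:

```shell
# Stop the running container, then remove it and its rootfs.
sudo lxc-stop -n my-container
sudo lxc-destroy -n my-container
```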

This part is done; see you in the next one.


“cya to the next 1…. Njoy !”
bye dakj


“Ubuntu 16.04 LTS Openstack Cloud Server – part 8/8”

ubuntu-16-04-lts

THE CANONICAL DISTRIBUTION OF OPENSTACK

The starting point of our lab is the following:


STEP 1 – CREATE A FLOATING IP

At this point our new instance is ready, but we can't connect to it via SSH yet because a few more settings are needed. The first task is to associate a floating IP address with our LXD container. Go to Project -> Network -> Floating IPs


and create a new one
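For reference, the same floating IP can be allocated from the OpenStack CLI. A sketch: ext_net is an assumed name for the external network, so check yours first:

```shell
# Find the external network, then allocate a floating IP from it.
openstack network list --external
openstack floating ip create ext_net
```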

STEP 2 – ASSOCIATE THE FLOATING IP WITH THE LXD CONTAINER

Go to Project -> Compute -> Instances and associate that IP with our LXD container.
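From the CLI the association would look like this; both values are examples, so use your instance name and the floating IP created in step 1:

```shell
# Attach the floating IP to the instance.
openstack server add floating ip my-instance 10.20.81.100
```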

Our screen will now show the associated address.

This task will take a few minutes before our instance is ready.

The eighth and final part is done.

<- part 7/8


“cya to the next 1…. Njoy !”
bye dakj


“Ubuntu 16.04 LTS The Canonical Distribution of Openstack Autopilot Server in HA – part 3/3”

ubuntu-16-04-lts

UBUNTU OPENSTACK AUTOPILOT IN HA

Our situation now is the following:


STEP 1 – DEPLOY UBUNTU OPENSTACK AUTOPILOT ON THE NODES

After Juju and Landscape, our next step is to deploy OpenStack Autopilot on the nodes dedicated to those services. The first task is to bring our nodes openstack1-4.maas to Ready status, which is done by commissioning them:


Then configure the nodes openstack1-3.maas so that their network is set up as follows.


Now our OpenStack Autopilot in HA is ready to be deployed; let's go to Landscape and scroll down the page.


Let's go on with the configuration.


Here we need to make our choices; for our lab they are as follows.

Select “Add hardware” and run “Autopilot placement”.

Start the installation.



At this point we need to wait for all the deployments to finish.


This task will take a long time.


When the last tasks complete, let's go to the OpenStack Dashboard and log in with our credentials.

The last part is done.


“cya to the next 1…. Njoy !”
bye dakj


“Ubuntu 16.04 LTS The Canonical Distribution of Openstack Autopilot Server in HA – part 2/3”

ubuntu-16-04-lts

UBUNTU OPENSTACK AUTOPILOT IN HA

Our situation now is the following:


The next step is to deploy Landscape.

STEP 1 – DEPLOY UBUNTU LANDSCAPE ON THE NODE

After Juju, our next step is to deploy Landscape on the node (landscape.maas) dedicated to that service. From this situation, deploy all the services.


After that, we need to wait a long time and complete this task when the moment comes:

In our browser we can open the URL https://10.20.81.4/account/standalone/openstack and use those credentials for the login.


The second part is done; see you in the next one.

<- part 1/3 . part 3/3 ->


“cya to the next 1…. Njoy !”
bye dakj
