“Add & Install a new HDD on Ubuntu 18.04 LTS Server Edition”

AUTO MOUNT

 

1 STEP – AUTOMATIC MOUNT AT BOOT

We have added a new hard disk (a 3.6 TiB drive in the transcript below) to an existing server, to be auto-mounted at /media/StorageVM at the next boot. To manage it we will use fdisk, a command-line utility to view and manage hard disks and partitions on Linux systems.

1.1 PARTITION THE NEW HDD

If the drive is still blank and unformatted, we can format it from the command line. First, list the disks:

$: sudo fdisk -l
Disk /dev/loop0: 87 MiB, 91160576 bytes, 178048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 54.6 MiB, 57229312 bytes, 111776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 278.5 GiB, 298999349248 bytes, 583983104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 631BE3B6-EFA6-4135-B213-C4B7D677ACA4

Device       Start       End   Sectors  Size Type
/dev/sda1     2048   1050623   1048576  512M EFI System
/dev/sda2  1050624 583981055 582930432  278G Linux filesystem

Disk /dev/sdb: 3.6 TiB, 3995997306880 bytes, 7804682240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x94014387

Then let's begin with the procedure:

$: sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-4294967295, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-4294967294, default 4294967294): 

Created a new partition 1 of type 'Linux' and of size 2 TiB.
Command (m for help): w
The partition table has been altered.
Failed to add partition 1 to system: Device or resource busy

The kernel still uses the old partitions. The new table will be used at the next reboot. 
Synching disks.
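The transcript shows a 3.6 TiB disk but a new partition of only 2 TiB: the DOS (MBR) disklabel reported by fdisk stores sector addresses in 32 bits, so with 512-byte sectors no MBR partition can exceed roughly 2 TiB. A quick sanity check of the numbers (the sector count is taken from the fdisk -l output above):

```shell
# MBR stores LBAs in 32 bits: at most 2^32 - 1 sectors per partition
max_sectors=$(( (1 << 32) - 1 ))
disk_sectors=7804682240              # /dev/sdb from the fdisk -l output
echo "MBR limit : $(( max_sectors  * 512 / 1024 / 1024 / 1024 )) GiB"
echo "disk size : $(( disk_sectors * 512 / 1024 / 1024 / 1024 )) GiB"
```

To use the drive's full capacity one would create a GPT label instead (for example with the `g` command inside fdisk, or `parted mklabel gpt`) before creating the partition.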

The kernel keeps using the old partition table, so either reboot or re-read it with sudo partprobe /dev/sdb. Then create the filesystem:

$: sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.44.1 (24-Mar-2018)
/dev/sdb1 contains a ext4 filesystem
last mounted on Wed Aug 29 13:52:29 2018
Proceed anyway? (y/N) y
/dev/sdb1 is mounted; will not make a filesystem here!

Here mkfs refused because /dev/sdb1 was mounted; after unmounting it (sudo umount /dev/sdb1), running the command again succeeds:

$: sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.44.1 (24-Mar-2018)
/dev/sdb1 contains a ext4 filesystem
last mounted on Wed Aug 29 13:52:29 2018
Proceed anyway? (y/N) y
Creating filesystem with 536870655 4k blocks and 134217728 inodes
Filesystem UUID: 7cdb6e60-b655-4f15-9a8c-982154ac2194
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

1.2 CREATE A MOUNT POINT

Now that the drive is partitioned and formatted, we need to choose a mount point. This will be the location from which you will access the drive in the future.

$: sudo mkdir /media/StorageVM

1.3 AUTOMATIC MOUNT AT BOOT

For this last task we need to know the UUID of our new partition; the command to find it is:

$: ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 100 Aug 29 14:57 .
drwxr-xr-x 7 root root 140 Aug 29 13:52 ..
lrwxrwxrwx 1 root root  10 Aug 29 13:52 421A-0CA5 -> ../../sda1
lrwxrwxrwx 1 root root  10 Aug 29 14:57 7cdb6e60-b655-4f15-9a8c-982154ac2194 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Aug 29 13:52 8d1e89e0-7b7d-4935-8276-401d611eaba1 -> ../../sda2

Now edit the fstab file:

$: sudo nano /etc/fstab

and add a line like the following (the mount point must match the directory created above):

UUID=7cdb6e60-b655-4f15-9a8c-982154ac2194        /media/StorageVM       ext4    defaults 0      0
$: sudo reboot
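Rather than copying the UUID by hand, the entry can be generated. A small sketch (the `fstab_line` helper name is ours, not a standard tool; the UUID is the one from the listing above):

```shell
# fstab_line UUID MOUNTPOINT FSTYPE -> prints a well-formed 6-field fstab entry
# ("2" as the last field lets fsck check this data disk after the root fs)
fstab_line() {
    printf 'UUID=%s\t%s\t%s\tdefaults\t0\t2\n' "$1" "$2" "$3"
}

# UUID taken from the ls -al /dev/disk/by-uuid/ output above
fstab_line 7cdb6e60-b655-4f15-9a8c-982154ac2194 /media/StorageVM ext4
```

Append the printed line to /etc/fstab (e.g. pipe it into `sudo tee -a /etc/fstab`) and then run `sudo mount -a`: if that mounts /media/StorageVM without errors, the entry will also work at boot, so a reboot is not strictly required to test it.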

With that, this task is also complete. See you next time.


“cya to the next 1…. Njoy !”
bye dakj

Disclaimer: All the tutorials included on this site are performed in a lab environment to simulate a real world production scenario. As everything is done to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.

“We learn from our mistakes”

“KVM (Kernel-Based Virtual Machine) on Ubuntu 18.04 LTS Server Edition”

KVM HYPERVISOR

What is a hypervisor?

KVM is a hypervisor that creates and runs virtual machines. A server on which a hypervisor runs is called the host machine, and each virtual machine is referred to as a guest machine. Using KVM, you can run multiple operating systems such as CentOS, OpenBSD, FreeBSD and MS Windows, unmodified.

KERNEL-BASED VIRTUAL MACHINE (KVM)

  1. The host server is located in a remote data center and is a headless server.
  2. All commands in this tutorial are typed over an SSH session.
  3. You need a VNC client to install the guest operating system.

01 STEP – INSTALL KVM

from terminal run (note: on Ubuntu 18.04 the libvirt-bin package was split into libvirt-daemon-system and libvirt-clients; substitute those if apt cannot find it):

$: sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker

02 STEP – VERIFY KVM INSTALLATION

from terminal run:

$: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
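kvm-ok decides by looking for the CPU virtualization flags (vmx on Intel, svm on AMD) in /proc/cpuinfo. The same test can be sketched by hand; the `has_virt` helper below is our own illustration, not part of cpu-checker:

```shell
# has_virt FLAGS_LINE -> yes if the flags contain vmx (Intel VT-x) or svm (AMD-V)
has_virt() {
    case " $1 " in
        *" vmx "*|*" svm "*) echo yes ;;
        *)                   echo no  ;;
    esac
}

has_virt "fpu vme de pse tsc msr vmx"      # an Intel CPU with VT-x -> yes
# On a real host: has_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
```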

03 STEP – CONFIGURE BRIDGED NETWORKING

from terminal run (note: Ubuntu 18.04 uses netplan by default; the classic /etc/network/interfaces shown here requires the ifupdown package):

$: sudo cp /etc/network/interfaces /etc/network/interfaces.backup
$: sudo nano /etc/network/interfaces

Edit/append as follows:

 auto br0
 iface br0 inet static
         address 10.18.44.26
         netmask 255.255.255.192
         broadcast 10.18.44.63
         dns-nameservers 10.0.80.11 10.0.80.12
         # set static route for LAN
 	     post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
 	     post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
         bridge_ports eth0
         bridge_stp off
         bridge_fd 0
         bridge_maxwait 0
 
 # br1 setup with static wan IPv4 with ISP router as a default gateway
 auto br1
 iface br1 inet static
         address 208.43.222.51
         network 208.43.222.48
         netmask 255.255.255.248
         broadcast 208.43.222.55
         gateway 208.43.222.49
         bridge_ports eth1
         bridge_stp off
         bridge_fd 0
         bridge_maxwait 0

Save and close the file. Restart the networking service, enter:

$: sudo systemctl restart networking

Verify it:

$: sudo brctl show

04 STEP – CREATE OUR FIRST VM

We’re going to create a Debian 8.x VM. In this example I’m creating a Debian 8.5 VM with 2 GB RAM, 2 CPU cores, 2 NICs (one for LAN and one for WAN) and 40 GB of disk space; enter:

 $: cd /var/lib/libvirt/boot/
 $: sudo wget https://mirrors.kernel.org/debian-cd/current/amd64/iso-dvd/debian-8.5.0-amd64-DVD-1.iso
 $: sudo virt-install \
 --virt-type=kvm \
 --name=debian8 \
 --ram=2048 \
 --vcpus=2 \
 --os-variant=debian8 \
 --hvm \
 --cdrom=/var/lib/libvirt/boot/debian-8.5.0-amd64-DVD-1.iso \
 --network=bridge=br0,model=virtio \
 --network=bridge=br1,model=virtio \
 --graphics vnc \
 --disk path=/var/lib/libvirt/images/debian8.qcow2,size=40,bus=virtio,format=qcow2

To find the VNC display details, log in from another terminal over SSH and type:

$: sudo virsh dumpxml debian8 | grep vnc
 <graphics type='vnc' port='5904' autoport='yes' listen='127.0.0.1'>

Please note down the port value (i.e. 5904). You need to use an SSH client to setup tunnel and a VNC client to access the remote vnc server. Type the following SSH port forwarding command from your client/desktop:

$ ssh vivek@server1.cyberciti.biz -L 5904:127.0.0.1:5904

Once you have ssh tunnel established, you can point your VNC client at your own 127.0.0.1 (localhost) address and port 5904 to continue with Debian Linux 8.5 installation.

Fig.0: VNC client to complete the Debian 8.5 installation

05 STEP – USEFUL COMMANDS

Let us see some useful commands.

List running VMs/domains:

$: sudo virsh list

Shut down a VM/domain called openbsd:

$: sudo virsh shutdown openbsd

Start a vm/domain called openbsd

$: sudo virsh start openbsd

Suspend a vm/domain called openbsd

$: sudo virsh suspend openbsd

Reboot (soft & safe reboot) a vm/domain called openbsd

$: sudo virsh reboot openbsd

Reset (hard reset/not safe) a vm/domain called openbsd

$: sudo virsh reset openbsd

Delete/remove a VM/domain called openbsd (force-stop it first, then remove its definition):

$: sudo virsh destroy openbsd
$: sudo virsh undefine openbsd


“Bridge (BR0) interface on Ubuntu 18.04 LTS server Edition”

BRIDGE INTERFACE

What is a bridge?

Bridged networking is a simple technique to connect virtual machines to the outside network through a physical interface. It is useful for LXC/KVM/Xen containers and other virtual interfaces: the virtual interfaces appear as regular hosts to the rest of the network. In this tutorial I will explain how to configure a Linux bridge with the bridge-utils (brctl) command-line utility on Ubuntu Server.

Our sample bridged networking

Fig.01: Sample Ubuntu Bridged Networking Setup For Kvm/Xen/LXC Containers (br0)
In this example eth0 and eth1 are the physical network interfaces: eth0 is connected to the LAN and eth1 is attached to the upstream ISP router/Internet.

01 STEP – INSTALL BRIDGE-UTILS

Type the following apt-get command to install the bridge-utils:

$: sudo apt install bridge-utils

02 STEP – CREATING A PERMANENT NETWORK BRIDGE

Edit /etc/network/interfaces:

$: sudo cp /etc/network/interfaces /etc/network/interfaces.bck
$: sudo nano /etc/network/interfaces

If the bridge br0 is to be assigned an IP address by DHCP:

auto ens33
iface ens33 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports  ens33

If the bridge br0 is to be assigned a static IP address:

auto ens33
iface ens33 inet manual

auto br0
iface br0 inet static
        address 1.1.10.6
        netmask 255.255.255.0
        network 1.1.10.0
        broadcast 1.1.10.255
        gateway 1.1.10.2
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4 
        dns-search localdomain.local 
        # bridge options
        bridge_ports ens33

Save and close the file. (In our lab the host is a VM under VMware Fusion, hence the ens33 interface name.)

03 STEP – RESTART THE NETWORK SERVICE

To apply the new configuration, restart the networking service or simply reboot:

$: sudo reboot
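Worth noting: Ubuntu 18.04 manages networking with netplan by default, and /etc/network/interfaces is honored only if the ifupdown package is installed. A hypothetical netplan equivalent of the static br0 above (same addresses as the example; the file name 01-br0.yaml is our own choice):

```shell
# Netplan rendition of the static br0 (Ubuntu 18.04 default network stack)
sudo tee /etc/netplan/01-br0.yaml >/dev/null <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33: {}
  bridges:
    br0:
      interfaces: [ens33]
      addresses: [1.1.10.6/24]
      gateway4: 1.1.10.2
      nameservers:
        search: [localdomain.local]
        addresses: [8.8.8.8, 8.8.4.4]
EOF
sudo netplan apply
```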

Use the ping/ip commands to verify that both LAN and WAN interfaces are reachable:

# See br0
$: ip a show
# See routing info
$:  ip r
# ping public site
$: ping -c 2 cyberciti.biz
# ping lan server
$: ping -c 2 10.0.80.12

Sample outputs:

Fig.03: Verify Bridging Ethernet Connections

Now, if we want, we can configure LXC containers to use br0 to reach the Internet or the LAN directly.



“Ubuntu 18.04 LTS Server Edition – part 1/1”


UBUNTU 18.04 LTS SERVER EDITION 

-Ends-



“Ubuntu 18.04 LTS Server Edition”


For more information about Ubuntu Server Edition we can view this link:

UBUNTU 18.04 LTS SERVER EDITION 

Now let's begin with these guides, which show how to install Ubuntu Server Edition and build our labs. All labs are built in a virtual environment. The topics covered will be:



“How to configure two-factor authentication (2FA) using Google Authenticator on Ubuntu 16.04 LTS Server Edition”


 

UBUNTU LTS SERVER

In this tutorial we will describe the steps necessary to configure two-factor authentication (2FA) using Google Authenticator (an application for our Android mobile device) on an Ubuntu 16.04 LTS Server Edition. This method adds another layer of protection to our server by adding an extra step to the basic login procedure.

1 STEP – INSTALL GOOGLE AUTHENTICATOR

Log in to our server via SSH:

$: ssh user@IP_Address

Update its repository and install the new packages:

$: sudo apt-get update && sudo apt-get upgrade

Install the Google Authenticator package.

$: sudo apt-get install libpam-google-authenticator

2 STEP – CONFIGURE GOOGLE AUTHENTICATOR

Once the package is installed, run the google-authenticator program to create a key for the user you will be logging in with. The program can generate two types of authentication tokens: time-based and one-time tokens. Time-based passwords change at a fixed interval, while one-time passwords are valid for a single authentication. In our case we will use time-based passwords. Run the program to create the keys:

$: google-authenticator

We will be asked if we want the authentication to be time-based.

Do you want authentication tokens to be time-based (y/n) y

A big QR code will be generated in our terminal. We can scan it with the authenticator application on our Android/iOS/Windows phone or tablet, or enter the secret key shown on the screen.

Emergency scratch codes will also be generated. We can use these codes for authentication in case we lose our mobile device.

Your emergency scratch codes are:
80463533
68335920
89221348
12489672
11144603

Save the authentication settings for the root user by answering YES to the next questions

Do you want me to update your "/root/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y
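The time-based tokens we just enabled are standard TOTP (RFC 6238): a 6-digit code derived from an HMAC-SHA1 over the current 30-second time step, keyed with the shared secret. As a sketch of what the PAM module verifies, here is the calculation done by hand with openssl (the key below is the RFC test vector, not a real secret; requires bash, openssl and xxd):

```shell
# totp KEY_HEX UNIX_TIME -> 6-digit TOTP code (RFC 6238: SHA-1, 30 s step)
totp() {
    local key_hex=$1
    local counter; counter=$(printf '%016X' $(( $2 / 30 )))   # 8-byte big-endian step
    # HMAC-SHA1 over the counter, keyed with the shared secret
    local hmac; hmac=$(printf '%s' "$counter" | xxd -r -p \
         | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key_hex" \
         | awk '{print $NF}')
    # dynamic truncation: low nibble of the last byte picks a 4-byte window
    local offset=$(( 0x${hmac:39:1} * 2 ))
    local dbc=$(( 0x${hmac:offset:8} & 0x7fffffff ))
    printf '%06d\n' $(( dbc % 1000000 ))
}

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s
totp 3132333435363738393031323334353637383930 59    # -> 287082
```

Scanning the QR code simply transfers the secret to the phone; from then on both sides run this same calculation and compare the six digits.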

Now we have the Google Authenticator application configured, and the next step is to configure the authentication settings in OpenSSH. Edit the PAM configuration for sshd and add the following line at the end of the file:

$: sudo nano /etc/pam.d/sshd

auth required pam_google_authenticator.so

Save the changes, and open the “/etc/ssh/sshd_config” file and enable Challenge Response Authentication.

$: sudo nano /etc/ssh/sshd_config

ChallengeResponseAuthentication yes

Save the file, and restart the SSH server for the changes to take effect.

$: sudo systemctl restart ssh

Two-factor authentication is now enabled on our server and every time we try to login to our Ubuntu 16.04 LTS Server Edition via SSH we will have to enter our user’s password and the verification code generated by Google Authenticator.


 



“Maas and Juju environment in LXD and ZFS on Ubuntu 16.04 LTS Server Edition”


For more information about Maas and Juju we can view these links:

UBUNTU MAAS AND JUJU

Now let's begin with this first guide, used to prepare our environment:

Then, after we have installed our OS, we can proceed with another link that explains how to build our environment with LXD and ZFS:

Now it is time to build our lab; the topics covered will be:

That is all for the moment.



“Maas and Juju on LXD and ZFS – part 4/4”


UBUNTU JUJU

In order to set up Juju we need to complete some steps and requirements:

  • Deploy a Charm

Applications themselves are deployed either as ‘charms’ or as ‘bundles’. Charms are singular applications, such as Haproxy or PostgreSQL, whereas bundles are a curated collection of charms and their relationships. Bundles are ideal for deploying OpenStack, for instance, or Kubernetes.

1 STEP – ADD A NEW NODE TO JUJU MODEL

At this point we can add a new machine to our model; the command is the following:

$: juju add-machine --model jujulab

After a few seconds our container list will look like this:

$: lxc list

+---------------+---------+--------------------+------+------------+-----------+
|     NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-juju   | RUNNING | 10.20.40.41 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-maas   | STOPPED |                    |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| juju-1d0c27-0 | RUNNING | 10.20.40.28 (eth0) |      | PERSISTENT | 0         | 
+---------------+---------+--------------------+------+------------+-----------+

To change the name of that container we need to run the following commands:

$: lxc exec juju-1d0c27-0 bash
$: sudo nano /etc/hostname

and change the hostname from juju-1d0c27-0 to xenial-vnode01. Then:

$: exit
$: lxc stop juju-1d0c27-0
$: lxc move juju-1d0c27-0 xenial-vnode01

Our new container list will look like this:

$: lxc list

+---------------+---------+--------------------+------+------------+-----------+
|     NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-juju   | RUNNING | 10.20.40.41 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-maas   | STOPPED |                    |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-vnode01| RUNNING | 10.20.40.28 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+

2 STEP – DEPLOY A CHARM ON OUR NEW LXC CONTAINER (NODE)

Using the GUI it is very easy to deploy an application (charm) on our node. For our lab we will use the MySQL database service.

Click on “Add to Canvas”, drag and drop the service onto the node, then commit the changes and run the deploy. Wait for the task to finish.

At the end our charm is deployed on our node, and juju status will show it:

$: juju status --model jujulab
Model      Controller          Cloud/Region  Version
jujulab  maaslab-controller  maaslab       2.0.2
App    Version  Status  Scale  Charm  Store       Rev  OS      Notes
mysql  5.7.17   active      1  mysql  jujucharms   56  ubuntu 
Unit      Workload  Agent  Machine  Public address  Ports     Message
mysql/0*  active    idle   0        10.20.81.96     3306/tcp  Ready
Machine  State    DNS          Inst id             Series  AZ
0        started  10.20.81.96  manual:10.20.81.96  xenial 
Relation  Provides  Consumes  Type
cluster   mysql     mysql     peer

Once deployed, we need to log in as the MySQL root user at the MySQL console:

$: juju switch jujulab

then

$: juju ssh mysql/0

then

ubuntu@vnode00: sudo apt update
ubuntu@vnode00: sudo apt dist-upgrade

then

ubuntu@vnode00: mysql -u root -p`sudo cat /var/lib/mysql/mysql.passwd`
mysql>

the end…..

<- part 3/4



“Maas and Juju on LXD and ZFS – part 3/4”


UBUNTU JUJU

In order to set up Juju we need to complete some steps and requirements:

  • Creating a New Model in JUJU

Check the list of our LXC containers:

$: lxc list
+----------------+---------+--------------------+------+------------+-----------+
|      NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+----------------+---------+--------------------+------+------------+-----------+
| xenial-juju    | RUNNING | 10.20.40.26 (eth0) |      | PERSISTENT | 0         |
+----------------+---------+--------------------+------+------------+-----------+
| xenial-maas    | RUNNING | 10.20.40.39 (eth0) |      | PERSISTENT | 0         |
+----------------+---------+--------------------+------+------------+-----------+

Once the containers are running, we need to add the following lines to the hosts file of the VM host to resolve host names:

$: sudo nano /etc/hosts

then add these lines:

#Ubuntu 16.04Lts LXD-ZFS with MAAS & JUJU - Bridge
10.20.40.27     lxd

#Container for Ubuntu MAAS      
10.20.40.39     xenial-maas
#Container for Ubuntu JUJU      
10.20.40.26     xenial-juju
#Container for vNodes
10.20.40.28     xenial-vnode00

1 STEP – CREATE A NEW MODEL FOR OUR CONTROLLER

See this link to understand models in Juju:

https://jujucharms.com/docs/2.0/models

First check our controller:

$: juju list-controllers --refresh

Controller    Model  User   Access     Cloud/Region         Models  Machines    HA  Version
xenial-juju*  -      admin  superuser  localhost/localhost       2         1  none  2.0.1

Now we can either create a new model or use the default one:

$: juju add-model jujulab

This command shows the list:

$: juju list-models

Controller: xenial-juju
Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  lxd/localhost  available         1      -  admin   just now
default     lxd/localhost  available         0      -  admin   1 minute ago
jujulab*    lxd/localhost  available         0      -  admin   never connected

We can also see this in the GUI.

To see the status:

$: juju status

Model    Controller   Cloud/Region         Version
jujulab  xenial-juju  localhost/localhost  2.0.1
App  Version  Status  Scale  Charm  Store  Rev  OS  Notes
Unit  Workload  Agent  Machine  Public address  Ports  Message
Machine  State  DNS  Inst id  Series  AZ

The third part is done.

<- part 2/4 . part 4/4 ->



“Maas and Juju on LXD and ZFS – part 2/4”


UBUNTU JUJU 

At this point we have both the LXD and ZFS environments, plus Maas, installed on our physical host; now we can look at the procedure to install the Juju services. In order to install Juju and the Juju GUI we need to complete some steps and requirements:

  • Install JUJU
  • Create a Controller for new environment;
  • Deploy the Application (JUJU Gui)

1 STEP – INSTALL JUJU

On our virtual host we can launch the following commands to install Juju:

$: sudo apt-add-repository -y ppa:juju/stable
$: sudo apt update
$: sudo apt-get dist-upgrade

then let’s start with the installation

$: sudo apt-get install juju

2 STEP – CREATE A LXD CONTROLLER FOR JUJU

See this link to understand the functionality of the controller in JUJU

https://jujucharms.com/docs/2.0/controllers

Juju needs a controller instance to manage our models, and the juju bootstrap command is used to create one. This command expects a name (for referencing this controller) and a cloud to use. The LXD ‘cloud’ is known as ‘localhost’ to Juju. For our LXD localhost cloud, we will create a controller called ‘lxd-ctr’:

$: juju bootstrap lxd lxd-ctr --debug
Creating Juju controller "lxd-ctr" on localhost/localhost
Bootstrapping model "controller"
Starting new instance for initial controller
Launching instance
 - juju-507b62-0d      
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.1.8 release installation
Waiting for address
Attempting to connect to fd1b:8791:9376:4cdd:216:3eff:fe3f:cd79:22
Attempting to connect to 10.215.221.176:22
sudo: unable to resolve host juju-507b62-0
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: cloud-image-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.0-beta12/juju-2.0-beta12-xenial-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Bootstrap complete, lxd-test now available.

In case we receive an error like this:

12:37:11 ERROR cmd supercommand.go:458 failed to bootstrap model: subprocess encountered error code 1

The original post is reported here:

http://askubuntu.com/questions/847593/error-on-lxd-container-while-we-install-juju-gui-ver-2-0

Juju takes the wrong IP address for the LXD node. When the container is RUNNING you can go inside it:

$: lxc list
+---------------+---------+--------------------+------+------------+-----------+
|     NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+------+------------+-----------+
| juju-1de061-0 | RUNNING | 10.20.40.26 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-maas   | RUNNING | 10.20.40.39 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
$:  lxc exec juju-1de061-0 bash

then run this:

$: iptables -t nat -A OUTPUT -d 10.20.40.254 -p tcp --dport 8443 -j DNAT --to-destination 10.20.40.27:8443

where 10.20.40.27 is the IP of our virtual host. The following command shows us both containers:

$: lxc list
+---------------+---------+--------------------+------+------------+-----------+
|     NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+------+------------+-----------+
| juju-1de061-0 | RUNNING | 10.20.40.26 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+
| xenial-maas   | RUNNING | 10.20.40.39 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+--------------------+------+------------+-----------+

To change the name of that container we need to run the following commands:

$: sudo nano /etc/hostname

and change the hostname from juju-1de061-0 to xenial-juju. Then:

$: exit
$: lxc stop juju-1de061-0
$: lxc move juju-1de061-0 xenial-juju

Once the process has completed we can check that the controller has been created:

$: juju list-controllers 

This will return a list of the controllers known to Juju, which at the moment is the one we just created:

CONTROLLER        MODEL    USER         CLOUD/REGION
lxd-ctr*         default  admin@local  localhost/localhost

We can check on how far Juju has got by running the command:

$: juju status
Model    Controller   Cloud/Region         Version
default  lxd-ctr  localhost/localhost  2.0.1
App  Version  Status  Scale  Charm  Store  Rev  OS  Notes
Unit  Workload  Agent  Machine  Public address  Ports  Message
Machine  State  DNS  Inst id  Series  AZ

3 STEP – RUN JUJU GUI

Once the container is running, we need to add the following lines to the hosts file of the VM host to resolve host names:

$: sudo nano /etc/hosts

add these lines

#Ubuntu 16.04Lts LXD-ZFS with MAAS & JUJU - Bridge
10.20.40.27     lxd

#Container for Ubuntu MAAS      
10.20.40.39     xenial-maas
#Container for Ubuntu JUJU      
10.20.40.26     xenial-juju

Run the following command to activate the GUI:

$: juju gui
Opening the Juju GUI in your browser.
Couldn't find a suitable web browser!
Set the BROWSER environment variable to your desired browser.
If it does not open, open this URL:
https://10.20.81.115:17070/gui/6ac598d4-0f9d-47e9-870a-53854b2f9b6a/

then

$: juju gui --show-credentials
Opening the Juju GUI in your browser.
If it does not open, open this URL:
https://10.20.40.42:17070/gui/28a15508-29a9-4ad9-80b0-d846710b6214/
Username: admin
Password: xxxxxxxxx
Couldn't find a suitable web browser!
Set the BROWSER environment variable to your desired browser.

Use those credentials to log in to the Juju GUI.

 

This how-to is also complete for the moment.

<- part 1/4 . part 3/4 ->

