“LXD on Ubuntu 18.04 LTS Server Edition”

LXD

What is LXD (Pure-container hypervisor)?

LXD is a container hypervisor created and supported by the Ubuntu team. Put simply, LXD is a daemon that provides a REST API to drive LXC containers. Its main goal is to provide a user experience similar to that of virtual machines, but using Linux containers rather than hardware virtualization. To obtain more info, read this link

https://www.ubuntu.com/containers/lxd

and this one

https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

1 STEP – INSTALL LXD

Type the following apt-get or apt command to install LXD:

$: sudo sh -c 'apt update && apt upgrade'
$: sudo apt install lxd

1.2 STEP – ADD USER TO LXD GROUP

There is no need to be the root user to manage the LXD daemon. To manage the LXD server, add your username to the lxd group (the first line shows the syntax, the second our lab user):

$: sudo adduser {USERNameHere} lxd
$: sudo adduser richardsith lxd
$: newgrp lxd

Verify it with the id command:

$: id
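
If the group change worked, lxd should appear among your groups. As a quick check (id -nG prints just the group names):

$: id -nG | grep -w lxd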

2 STEP – SETUP ZFS

I suggest you use ZFS as the storage backend; have a look at this guide:

“ZFS on Ubuntu 18.04 LTS Server Edition”

3 STEP – CONFIGURE LXD STORAGE AND NETWORK

It is time to set up the LXD server. We must configure networking and a storage option such as directory, zfs, btrfs and more:

$: sudo lxd init

You must answer a series of questions on how to configure the LXD server.
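
As a rough guide, this is the kind of dialog lxd init presents with LXD 3.0 as shipped on Ubuntu 18.04; the exact questions and defaults vary by LXD version, so treat this as illustrative rather than authoritative:

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd preseed" to be printed? (yes/no) [default=no]:

Once the configuration is done, we can verify the setup by listing the containers: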

$: lxc list
$: lxc list | more

4 STEP – CREATE A CONTAINER

Let's create our first Linux container. First, list the available images:

$: lxc image list images:
$: lxc image list images: | grep -i ubuntu

To create and start a container from an image, use the launch command:

$: lxc launch images:ubuntu/bionic/amd64 ubuntu-svr

List the existing containers:

$: lxc list --fast

5 STEP – USEFUL COMMANDS

To run or execute a command in a container, use the exec command:

$: lxc exec containerName -- command

For example:

$: lxc exec ubuntu-svr -- ip r

To log in and gain shell access to our container named ubuntu-svr, enter:

$: lxc exec ubuntu-svr -- bash

Start containers using the following command:

$: lxc start ubuntu-svr

Stop containers using the following syntax:

$: lxc stop ubuntu-svr

Want to restart your containers for any reason? Try:

$: lxc restart ubuntu-svr

To delete a container immediately, stop it first and then delete it:

$: lxc stop ubuntu-svr
$: lxc delete ubuntu-svr

Type the following command to get some info about the container:

$: lxc info ubuntu-svr
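
Since lxc info also lists a container's snapshots, the snapshot commands are worth mentioning here; this is a hedged extra beyond the original steps (snap1 is an arbitrary snapshot name):

$: lxc snapshot ubuntu-svr snap1
$: lxc restore ubuntu-svr snap1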

“cya to the next 1…. Njoy !”
bye dakj


“ZFS on Ubuntu 18.04 LTS Server Edition”

ZFS

What is ZFS?

The Z File System (ZFS) was originally designed at Sun Microsystems. It is an advanced file system and logical volume manager. It works on Solaris, FreeBSD, Linux and many other operating systems. The features of ZFS include protection against data corruption, compression, volume management, snapshots, data integrity, software RAID, caching and much more.

1 STEP – INSTALL ZFS

The native OpenZFS management utilities for Linux are in the zfsutils-linux package. You can also use the meta package called zfs. Simply type the following command:

$: sudo apt install zfs

Sample outputs:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'zfsutils-linux' instead of 'zfs'
The following additional packages will be installed:
  libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
  default-mta | mail-transport-agent samba-common-bin nfs-kernel-server zfs-initramfs
The following NEW packages will be installed:
  libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 19 not upgraded.
Need to get 884 kB of archives.
After this operation, 2,822 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 zfs-doc all 0.6.5.6-0ubuntu10 [49.4 kB]
Get:2 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 libuutil1linux amd64 0.6.5.6-0ubuntu10 [27.4 kB]
Get:3 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 libnvpair1linux amd64 0.6.5.6-0ubuntu10 [23.5 kB]
Get:4 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 libzpool2linux amd64 0.6.5.6-0ubuntu10 [385 kB]
Get:5 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 libzfs2linux amd64 0.6.5.6-0ubuntu10 [106 kB]
Get:6 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 zfsutils-linux amd64 0.6.5.6-0ubuntu10 [263 kB]
Get:7 http://mirrors.service.networklayer.com/ubuntu xenial-updates/main amd64 zfs-zed amd64 0.6.5.6-0ubuntu10 [29.8 kB]
Fetched 884 kB in 1s (651 kB/s)    
Selecting previously unselected package zfs-doc.
(Reading database ... 91925 files and directories currently installed.)
Preparing to unpack .../zfs-doc_0.6.5.6-0ubuntu10_all.deb ...
Unpacking zfs-doc (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package libuutil1linux.
Preparing to unpack .../libuutil1linux_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking libuutil1linux (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package libnvpair1linux.
Preparing to unpack .../libnvpair1linux_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking libnvpair1linux (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package libzpool2linux.
Preparing to unpack .../libzpool2linux_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking libzpool2linux (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package libzfs2linux.
Preparing to unpack .../libzfs2linux_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking libzfs2linux (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package zfsutils-linux.
Preparing to unpack .../zfsutils-linux_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking zfsutils-linux (0.6.5.6-0ubuntu10) ...
Selecting previously unselected package zfs-zed.
Preparing to unpack .../zfs-zed_0.6.5.6-0ubuntu10_amd64.deb ...
Unpacking zfs-zed (0.6.5.6-0ubuntu10) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-28-generic
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up zfs-doc (0.6.5.6-0ubuntu10) ...
Setting up libuutil1linux (0.6.5.6-0ubuntu10) ...
Setting up libnvpair1linux (0.6.5.6-0ubuntu10) ...
Setting up libzpool2linux (0.6.5.6-0ubuntu10) ...
Setting up libzfs2linux (0.6.5.6-0ubuntu10) ...
Setting up zfsutils-linux (0.6.5.6-0ubuntu10) ...
zfs-import-cache.service is a disabled or a static unit, not starting it.
zfs-import-scan.service is a disabled or a static unit, not starting it.
zfs-mount.service is a disabled or a static unit, not starting it.
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-28-generic
Setting up zfs-zed (0.6.5.6-0ubuntu10) ...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...

1.2 STEP – WHAT IS A ZFS VIRTUAL DEVICE (VDEV)?

A VDEV is nothing but a physical disk, a file image, a partition, a ZFS software RAID device, or a hot spare for ZFS RAID. Examples are (see the sketch after this list for a file-based test pool):

  1. /dev/sdb – a physical disk
  2. /images/200G.img – a file image
  3. /dev/sdc1 – A partition
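
If you have no spare disks, you can experiment with file-image VDEVs. A minimal sketch, assuming a hypothetical /images directory with enough free space (testpool is just a throwaway name):

$: sudo mkdir -p /images
$: sudo truncate -s 1G /images/disk1.img
$: sudo truncate -s 1G /images/disk2.img
$: sudo zpool create testpool mirror /images/disk1.img /images/disk2.img
$: zpool status testpool
$: sudo zpool destroy testpool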

1.3 STEP – WHAT IS A ZFS POOL (ZPOOL)?

A zpool is a storage pool made of one or more VDEVs. We can combine two or more physical disks, files, or a combination of both.

2 STEP – CREATE A RAID1 MIRROR

Use the following syntax:

$: zpool create NAME mirror VDEV1 VDEV2

To create a mirrored zpool called RaidLab, enter:

$: sudo zpool create RaidLab mirror /dev/sdb /dev/sdc

Simply type the following command to see the current health status for ZPools:

$: zpool status
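
For a healthy two-disk mirror, the output looks roughly like this (illustrative only; device names and layout will match your own setup):

  pool: RaidLab
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	RaidLab     ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0

errors: No known data errors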

Type the following command to check the size and usage of ZPools:

$: zpool list
$: df

Type the following command to find out the I/O statistics:

$: zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
RaidLab     2.64M   888G      0      2      0  6.38K

You can now start copying or storing data in /RaidLab, for example:

$: cd /RaidLab
$: ls
$: cp -r /foo .

However, ZFS also allows us to create file systems within the pool, for example data and containers file systems in the pool called RaidLab:

$: sudo zfs create RaidLab/data
$: sudo zfs create RaidLab/containers
$: zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
RaidLab             2.67M   860G  2.59M  /RaidLab
RaidLab/containers    19K   860G    19K  /RaidLab/containers
RaidLab/data          19K   860G    19K  /RaidLab/data
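
The intro mentioned compression among ZFS features; as a hedged example, this is how you would enable lz4 compression on the data file system and verify it (these are standard ZFS properties, but check zfs(8) on your release):

$: sudo zfs set compression=lz4 RaidLab/data
$: zfs get compression RaidLab/data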

2.1 STEP – REMOVE ZPOOL

The command used is the following:

$: sudo zpool destroy zpoolNameHere

For our lab:

$: sudo zpool destroy RaidLab
$: zpool status

“cya to the next 1…. Njoy !”
bye dakj


“LXD and ZFS in Ubuntu 18.04 LTS Server Edition”

LXD & ZFS

What is LXD?

LXD is a container hypervisor created and supported by the Ubuntu team. To obtain more info about this project, look at these links

https://www.ubuntu.com/containers/lxd

and this one

https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

What is ZFS?

To answer this question, read this link:

https://wiki.ubuntu.com/ZFS

Once we have reviewed all the information about LXD and ZFS, we can proceed with our how-to.

In order to set up our environment, we need to take a few steps and meet a few requirements:

  • Create a bridge network (see this howto);
  • Install LXD and ZFS;
  • Create a Container profile for our lab;

Our VM for this lab is configured with a second, empty disk (/dev/sdb) that we will dedicate to the ZFS pool.

01 STEP – PREPARE HDD FOR ZFS

The first thing to do is to create a new partition on /dev/sdb. Log in and run fdisk:

$: ssh rs@10.20.40.22
$: sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (1-128, default 1): 1
First sector (34-7804682206, default 2048): 2048 
Last sector, +sectors or +size{K,M,G,T,P} (2048-7804682206, default 7804682206):7804682206 

Created a new partition 1 of type 'Linux filesystem' and of size 3.6 TiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Synching disks.

The result is a new partition, /dev/sdb1, ready to be handed to ZFS.
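
In place of the original screenshot, you can verify the new layout by listing the block devices (lsblk is part of util-linux on Ubuntu):

$: lsblk /dev/sdb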

02 STEP – INSTALL LXD

Now we can proceed with installing LXD and the ZFS utilities. First, update the system:

$: sudo sh -c 'apt update && apt upgrade'

Create the “lxd” group and add yourself to it:

$: sudo groupadd --system lxd
$: sudo usermod -G lxd -a <username>

Install LXD with:

$: sudo apt install lxd

The host's network interfaces have to be configured with a bridge, as follows:

$: sudo more /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto ens33
iface ens33 inet manual
auto br0
iface br0 inet dhcp
        bridge_ports ens33
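
Note that this is the legacy ifupdown style. A stock Ubuntu 18.04 install uses netplan by default, so the equivalent bridge would live in a netplan file instead; a minimal sketch (the file name 01-netcfg.yaml is an assumption, adjust to whatever exists in your /etc/netplan):

$: cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
  bridges:
    br0:
      interfaces: [ens33]
      dhcp4: yes

$: sudo netplan apply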

03 STEP – INSTALL ZFS

After LXD, it is time to install the ZFS utilities and load the module:

$: sudo apt-get install zfsutils-linux 
$: sudo modprobe zfs
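
You can verify that the kernel module loaded correctly before rebooting:

$: lsmod | grep zfs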

Reboot the VM:

$: sudo reboot

04 STEP – INITIALISE LXD

In order for networking to connect the LXC containers to the host, we need the bridge device set up. Check your group membership, then initialise LXD:

$: groups
$: sudo lxd init

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sdb1
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.

05 STEP – CREATE A LXC PROFILE

Let's create a container profile by copying the default one:

$: lxc profile copy default svrlab
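
To inspect the new profile, and, if desired, apply it at launch time, use the -p flag (a hedged example; profile-test is a hypothetical container name, and the next step launches with the default profile instead):

$: lxc profile show svrlab
$: lxc launch ubuntu:xenial profile-test -p svrlab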

06 STEP – LAUNCH LXD CONTAINER

Once the profile has been created, we can now launch the LXC container:

$: lxc launch ubuntu:xenial xenial-svr
Creating xenial-svr
Retrieving image: 100%
Starting xenial-svr

Wait a few seconds and then check the container list with this command:

$: lxc list
+-------------+---------+----------------------+------+------------+-----------+
| NAME        |  STATE  |        IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+------+------------+-----------+
| xenial-svr  | RUNNING | 10.20.40.39 (eth0)   |      | PERSISTENT | 0         |
+-------------+---------+----------------------+------+------------+-----------+

Then, to connect to its bash shell, run this command:

$: lxc exec xenial-svr -- bash

07 STEP – HOW TO LIST CONTAINERS

List the existing containers:

$: lxc list --fast
$: lxc list | grep RUNNING
$: lxc list | grep STOPPED
$: lxc list

08 STEP – HOW TO GET BASH SHELL

To log in and gain shell access to the container named xenial-svr, enter:

$: lxc exec xenial-svr -- bash

09 STEP – HOW TO STOP THE CONTAINERS

Stop containers using the following syntax:

$: lxc stop xenial-svr

10 STEP – HOW TO RESTART THE CONTAINERS

Want to restart your containers for any reason? Try:

$: lxc restart xenial-svr

11 STEP – HOW TO DELETE THE CONTAINERS

The command is as follows (be careful: LXD containers are deleted immediately, without any confirmation prompt, so keep backups):

$: lxc delete xenial-svr

This how-to ends here.


“cya to the next 1…. Njoy !”
bye dakj

Disclaimer: All the tutorials included on this site are performed in a lab environment to simulate real-world production scenarios. Although every effort is made to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.

“We learn from our mistakes”