Install and set up Xen on Debian 4.0 Etch and 5.0 Lenny
Xen is a GPLed virtualization solution available in the Debian repositories. This article helps you build a coherent Xen installation, with backups and minimal security enhancements. This guide reflects my vision of a Xen setup. If it does not fit yours, feel free to adapt it to your needs, or not to use it at all.
For information
In this guide, I use the term "virtual host". Be aware that Xen uses its own terminology. In a Xen setup, instantiated (i.e. started) virtual hosts are called "domains".
Once your computer is configured for Xen, all the operating systems run in "domains". The original operating system, where Xen's control tools are installed, is called "dom0" (for domain number zero). All the guest operating systems are said to be installed in a "domU" (for user domain).
As I'm a little lazy (well, yeah... very lazy), I don't want to endure this mind twisting, and I will mostly use the term "virtual host" in this guide. So don't be angry at me because of the terms I use.
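For illustration only, here is roughly what the domain list looks like once Xen is installed and running (the names, memory sizes and times below are made up and will differ on your system). The dom0 always appears as "Domain-0", and each guest appears under the hostname you gave it:
/usr/sbin/xm list
Name                                      ID   Mem VCPUs      State   Time(s)
Domain-0                                   0   256     1     r-----     97.3
xen-vm1                                    1   128     1     -b----     12.5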
Prerequisite
This guide uses the capabilities of LVM. If you don't know what LVM is, or if you want to install it on your computer, I encourage you to read this article:
A simple introduction to working with LVM
Source: Thanks to Steve Kemp for his article on Debian Administration.
Installing on Debian 5.0 Lenny
First, we install the Xen hypervisor, the Xen kernel for the Dom0, and some more tools:
/usr/bin/apt-get install xen-hypervisor-i386 linux-image-2.6-xen-686 linux-headers-2.6-xen-686 \
    xen-utils-3.2-1 xen-tools bridge-utils
Important: If you use a 64-bit version of Debian, use this command line:
/usr/bin/apt-get install xen-hypervisor-amd64 linux-image-2.6-xen-amd64 linux-headers-2.6-xen-amd64 \
    xen-utils-3.2-1 xen-tools bridge-utils
Once this is done, we reboot in order to use the new kernel:
reboot
Installing on Debian 4.0 Etch
First, we install the Linux kernel needed by Xen and a patched C library for Xen to work properly:
/usr/bin/apt-get install linux-image-2.6-xen-vserver-686 linux-headers-2.6-xen-vserver-686 libc6-xen \
    xen-hypervisor-3.0.3-1-i386-pae xen-tools bridge-utils
Important: If you use a 64-bit version of Debian Etch, use the following command line:
/usr/bin/apt-get install linux-image-2.6-xen-vserver-amd64 linux-headers-2.6-xen-vserver-amd64 \
    xen-hypervisor-3.0.3-1-amd64 xen-tools bridge-utils
Note: If your CPU has the needed instruction set (hardware virtualization support), and if you want to install unmodified operating systems (such as Windows XP), you need to install some more packages:
/usr/bin/apt-get install xen-ioemu-3.0.3-1
Once this is done, we restart the computer in order to use the new kernel:
reboot
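Note: Whichever release you installed, you can check after the reboot that the machine actually booted on the Xen-enabled kernel. The exact version string differs from one system to another, but it should contain "xen":
/bin/uname -r
You can also run /usr/sbin/xm list: it should show the dom0, named "Domain-0", as in the illustration at the beginning of this guide.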
Virtualized Ubuntu 8.04 Hardy Heron and Debian 5.0 Lenny operating systems on Debian 4.0 Etch
To be able to install Ubuntu 8.04 Hardy Heron and Debian 5.0 Lenny in a domU from a Debian 4.0 Etch dom0, we need to add some more tools (basically, a debootstrap backport):
if [ "$(/bin/cat /etc/debian_version)" = "4.0" ]; then /bin/ln -s ubuntu.d /usr/lib/xen-tools/hardy.d /bin/ln -s debian.d /usr/lib/xen-tools/lenny.d /usr/bin/wget http://falcon.landure.fr/pool/etch/debootstrap/debootstrap_1.0.10_all.deb \ --output-document=/tmp/debootstrap_1.0.10_all.deb /usr/bin/dpkg -i /tmp/debootstrap_1.0.10_all.deb fi
Xen related errors
Kernel panic during domain creation
In order to avoid a kernel panic of the host computer (dom0) when starting a virtual host (domU), we need to limit the number of CPUs available to the dom0. This is done with this command line:
/bin/sed -i -e 's/^[# ]*\((dom0-cpus\).*\().*\)$/\1 1\2/' \
    /etc/xen/xend-config.sxp
Source: Thanks to Gerald Hochegger for his e-mail Bug#410807: workaround: linux-image-2.6.18-4-xen-amd64: Kernel BUG at drivers/xen/core/evtchn.c:481
Connection problems to the virtual hosts' network
After a while, once a certain number of virtual hosts have been created, a network bug appears. One of the symptoms of this bug is the presence of the following message in the syslog of the host computer (dom0):
xen_net: Memory squeeze in netback driver.
Correcting this problem requires a restart of the computer. First, you need to edit the Grub configuration file in order to limit the amount of RAM allocated to the dom0 at boot time:
/bin/sed -i -e 's/\(xenhopt=.*\)/\1dom0_mem=256M/' \
    /boot/grub/menu.lst
Apply this change to all the kernels listed by Grub:
/usr/sbin/update-grub
And set the same amount of RAM as the minimum memory available to the dom0:
/bin/sed -i -e 's/^[# ]*\((dom0-min-mem\).*\().*\)$/\1 256\2/' \
    /etc/xen/xend-config.sxp
To make these changes effective, you need to restart your computer:
reboot
Source: Thanks to Crucial Paradigm for their article Xen Bug: xen_net: Memory squeeze in netback driver (32bit PAE).
"/dev/mem: mmap: Bad address" errors in the virtual hosts (domU)
This error is thrown by dmidecode, which tries to access low-level hardware information in order to guess an adapted default configuration. This cannot work in Xen guest computers, and the error can safely be ignored.
Source: Thanks to MingsVille for the reply to the question "When doing an apt-get or aptitude update I see an error like /dev/mem: mmap: Bad address; is this a cause for concern?", found in the technical information for BitFolk VPS customers.
Network configuration
Xen offers several network modes. This guide supports two of them. Here is a short summary:
- Bridge mode: the virtual hosts are visible on the local network of the Xen host. They are reachable from computers other than the Xen host. This is the configuration to use if your host computer (dom0) is part of a local area network.
- NAT mode: the virtual hosts are on a virtual local network. They are only reachable from the Xen host, unless a NAT rule is set. This is the configuration to use if your computer (dom0) is not part of a local network (dedicated server, ...).
Note: This guide is now bridge ready. It was not in its previous versions.
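For reference, these two modes correspond to the following directives in /etc/xen/xend-config.sxp. Only one pair should be active (uncommented) at a time; the commands further down in this guide toggle them for you:
# Bridge mode:
(network-script network-bridge)
(vif-script vif-bridge)
# NAT mode:
(network-script network-nat)
(vif-script vif-nat)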
Choosing the Xen network mode
If you want your virtual hosts to be protected by a NAT firewall, which I recommend unless you know what you are doing, choose the NAT network mode:
XEN_NETWORK_MODE=nat
Note: If you want to use the bridge mode (not recommended if your dom0 is directly reachable from the internet), set this environment variable instead:
XEN_NETWORK_MODE=bridge
Configuring the Xen network
The first step is to disable the previous network configuration:
/bin/sed -i -e 's/^(network-script .*).*$/# \0/' \
    -e 's/^(vif-script .*).*$/# \0/' \
    /etc/xen/xend-config.sxp
We now enable the network configuration we chose before:
/bin/sed -i -e "s/^#[ ]*\\((network-script.*network-${XEN_NETWORK_MODE}).*\\)\$/\\1/" \ -e "s/^#[ ]*\\((vif-script.*vif-${XEN_NETWORK_MODE}).*\\)\$/\\1/" \ /etc/xen/xend-config.sxp
We restart the Xen daemon to enable the new network configuration:
/etc/init.d/xend restart
Detecting Dom0 IP address
In order for this guide to work properly, we need to know the IP address of the dom0. If you are using NAT networking, the IP is 10.0.0.128, but if you are using bridge networking, we need the IP address of the physical interface (here, we assume the physical interface is eth0):
MAINDOM_IP=10.0.0.128
if [ $(/bin/grep "^[^#]*bridge" /etc/xen/xend-config.sxp | /usr/bin/wc --lines) -gt 0 ]; then
  MAINDOM_IP=$(/sbin/ifconfig eth0 | /bin/grep "inet " \
      | /bin/sed -e 's/^.*inet [^:]*:\([^ ]*\) .*$/\1/')
fi
You can check the detected IP address with this command:
/bin/echo "The IP address of this server for the DomU is: ${MAINDOM_IP}"
Note: If the guessed address is not correct, you can use this command line to correct it:
MAINDOM_IP=192.168.1.5
Configuring the host computer (dom0) SMTP server
Warning: The goal of this part of the guide is to configure the SMTP server of the host computer (dom0). If your SMTP server is already configured, the following command lines will replace your configuration. If this is a problem for you, and if you chose NAT mode, please make your SMTP server listen on the 10.0.0.128 IP, and accept to relay mail from 10.0.0.0/24.
It is very important to watch the logs and the security updates of your servers. Many tools such as cron-apt, logwatch and rkhunter do this job for you. They send daily reports by e-mail, but a working SMTP server is needed for these tools to be useful. If you are used to configuring SMTP servers, I think this part of the guide will be of little interest to you. If not, we will set up an SMTP server on the host computer (dom0) that will handle the e-mails from the guest computers (domU) created with this guide.
First, we set up the mail domain of our SMTP server. It is often the host name of the server. On an OVH server, it is quite easy to find:
/bin/echo $HOSTNAME | /usr/bin/tee /etc/mailname
Note: This mail domain must be a valid and working hostname (i.e. you must be able to ping this mail domain from any computer on the internet).
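You can quickly check this requirement from the server itself (or better, from another computer on the internet). If the name does not resolve or answer, fix your DNS or /etc/hosts before going further:
/bin/ping -c 3 "$(/bin/cat /etc/mailname)"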
We now set the e-mail address to which the guest computers' root mails are sent:
ADMIN_EMAIL=my-email@yahoo.fr
Note: You should use your usual e-mail address.
We apply this configuration:
/bin/sed -i -e '/^root/d' /etc/aliases
echo "root: ${ADMIN_EMAIL}" | /usr/bin/tee -a /etc/aliases
Install the Exim 4 configuration file with the needed configuration for this guide:
/bin/mkdir --parent /etc/exim4
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-xen-sur-debian-4-0-etch/update-exim4-conf.conf \
    --output-document=/etc/exim4/update-exim4.conf.conf
Note: This configuration matches a dedicated server from a provider like OVH. If you host your server at home, you should adapt this configuration to your Internet Service Provider. I will not give an example of a configuration that fits this need, since I have no use for it. But if you want to contribute to this guide, you can send me an email with the needed configuration by using the "contact" link at the top right of this site XD.
Update the Exim 4 configuration in order to accept incoming emails from the domU:
MAINDOM_RANGE=$(/bin/echo ${MAINDOM_IP} | /bin/sed -e 's/\(.*\)\.[0-9]*/\1/')
/bin/sed -i -e "s|dc_local_interfaces=.*\$|dc_local_interfaces='127.0.0.1;${MAINDOM_IP}'|" \
    -e "s|dc_relay_nets=.*\$|dc_relay_nets='${MAINDOM_RANGE}.0/24'|" \
    /etc/exim4/update-exim4.conf.conf
Install Exim 4:
/usr/bin/apt-get install exim4
Note: if Exim 4 is already installed on your server, please update its configuration with these commands:
/usr/sbin/update-exim4.conf
/etc/init.d/exim4 restart
Test the configuration. In order to do this, install mailx:
/usr/bin/apt-get install mailx
And use this command line to send an email to the root user:
/bin/echo "test" | /usr/bin/mail -s "test" root
Solving problems
If Exim 4 is configured correctly, you should receive an email with "test" as subject. If after some time you have not received this email, check the result of this command:
/usr/bin/mailq
and the content of the Exim log file:
tail -n 250 /var/log/exim4/mainlog
If the error encountered is something like:
root@my-hostname: Unrouteable address
The error comes from an inconsistency between the environment variable ${HOSTNAME} and the content of the /etc/hosts file. To solve this problem, the /etc/mailname file should contain the fully qualified host name, for example "my-hostname.my-domain.lan". This command line attempts to do this (but provides no guarantee of success):
/bin/grep "${HOSTNAME}" /etc/hosts | /usr/bin/cut --field=2 | /usr/bin/tee /etc/mailname
Update the Exim 4 configuration:
/usr/sbin/update-exim4.conf
/etc/init.d/exim4 restart
Delete the frozen messages from the Exim 4 mail queue:
mailq | grep frozen | sed -e 's/.* \(.\{6\}-.\{6\}-.\{2\}\) .*/\1/' | xargs exim4 -Mrm
Again, test your Exim 4 configuration by sending a "test" e-mail:
/bin/echo "test" | /usr/bin/mail -s "test" root
With luck, this new configuration works. To make sure of it, check that this command shows an empty mail queue:
mailq
Configuring backups
This guide sets up a fairly complex backup infrastructure based on the Backup Manager tool. Your guest computers (domU) will centralize their backups in the /var/archives folder of the host computer (dom0). We need to create this folder:
/bin/mkdir /var/archives
/bin/chown root:backup /var/archives
/bin/chmod ug+rwx /var/archives
Once this is done, your host computer (dom0) can store the backups of the guest operating systems (domU).
Installing Backup Manager
Your backup configuration is not complete yet, since for the moment only the guest OSes (domU) are backed up. You also need to back up the host computer (dom0). To do this, I encourage you to follow my guide:
Install and configure Backup Manager on Debian 4.0 Etch
Once Backup Manager is installed, we need to configure it so that the /var/archives folder can be written to by the guest OSes (domU), and so that domU archives are purged at the same time as dom0 archives:
/bin/sed -i -e 's/[#]*\(.*BM_REPOSITORY_GROUP=\).*$/\1"backup"/' \
    -e 's/[#]*\(.*BM_REPOSITORY_CHMOD=\).*$/\1"770"/' \
    -e 's/[#]*\(.*BM_ARCHIVE_STRICTPURGE=\).*$/\1"false"/' \
    /etc/backup-manager.conf
I strongly encourage you to copy your backups to another server with the help of Backup Manager. If you use an OVH server, this company provides you with an FTP space the size of your server's hard drives.
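As an example, here is a hedged sketch of how such a copy could be configured in /etc/backup-manager.conf, in the same sed-based style as the rest of this guide. It assumes the standard Backup Manager FTP upload variables (check their exact names in your own /etc/backup-manager.conf); ftpback.example.net, my-login and my-password are placeholders for the credentials of your FTP space:
/bin/sed -i -e 's/[#]*\(.*BM_UPLOAD_METHOD=\).*$/\1"ftp"/' \
    -e 's/[#]*\(.*BM_UPLOAD_FTP_HOSTS=\).*$/\1"ftpback.example.net"/' \
    -e 's/[#]*\(.*BM_UPLOAD_FTP_USER=\).*$/\1"my-login"/' \
    -e 's/[#]*\(.*BM_UPLOAD_FTP_PASSWORD=\).*$/\1"my-password"/' \
    /etc/backup-manager.conf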
Configuring virtual guest creation by Xen Tools
Default configuration
By default, Xen Tools is configured to create Debian 3.1 Sarge based guest computers. We will change this. Use this command line to configure Xen Tools to create Debian 4.0 Etch or 5.0 Lenny guest computers by default (depending on the dom0 distribution):
if [ "$(/bin/cat /etc/debian_version)" = "4.0" ]; then /bin/sed -i -e 's/^\(dist[ ]*=[ ]*\)[^ ]*\([ ]*#.*\)$/\1etch\2/' /etc/xen-tools/xen-tools.conf else /bin/sed -i -e 's/^\(dist[ ]*=[ ]*\)[^ ]*\([ ]*#.*\)$/\1lenny\2/' /etc/xen-tools/xen-tools.conf fi
Similarly, change the default swap size from 128 MB to 1 GB (for safety reasons, even if most of the time it is not used):
/bin/sed -i -e 's/^\(swap[ ]*=[ ]*\)[^ ]*\([ ]*#.*\)$/\11Gb\2/' /etc/xen-tools/xen-tools.conf
Declare "debootstrap" as the default virtual computer OS installation method:
/bin/sed -i -e 's/^[# ]*\(debootstrap[ ]*=\).*$/\1 1/' /etc/xen-tools/xen-tools.conf
If Xen is configured to use NAT mode, set up the default domU network parameters for creating domUs with a fixed IP:
if [ $(/bin/grep "^[^#]*nat" /etc/xen/xend-config.sxp | /usr/bin/wc --lines) -gt 0 ]; then /bin/sed -i -e 's/^[# ]*\(gateway[ ]*=\).*$/\1 10.0.0.128/' \ -e 's/^[# ]*\(netmask[ ]*=\).*$/\1 255.255.255.0/' \ /etc/xen-tools/xen-tools.conf fi
Note: If Xen is configured to use bridge mode, you can configure these settings manually by editing the /etc/xen-tools/xen-tools.conf file, for example as shown below.
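For example, a minimal sketch for a typical home LAN whose router is 192.168.1.1 (these values are only placeholders; use your own gateway and netmask):
/bin/sed -i -e 's/^[# ]*\(gateway[ ]*=\).*$/\1 192.168.1.1/' \
    -e 's/^[# ]*\(netmask[ ]*=\).*$/\1 255.255.255.0/' \
    /etc/xen-tools/xen-tools.conf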
Using LVM volume group for virtual computer partitions
We will now configure Xen to use an LVM volume group. First, we detect the name of the first LVM volume group of the system:
LVM_VG=$(/sbin/vgdisplay \
    | /bin/grep "VG Name" \
    | /usr/bin/head --lines=1 \
    | /usr/bin/cut --characters=25-)
Note: If your server has more than one volume group, you can manually set the volume group to use:
LVM_VG=vhd3
Update the Xen tools configuration file:
/bin/sed -i -e "s/^# \(lvm = \).*/\1$LVM_VG/" /etc/xen-tools/xen-tools.conf
Once this is done, we configure Xen Tools to use the right kernel when creating virtual computers:
/bin/sed -i -e "s/^\(kernel =\).*\$/\1 \/boot\/vmlinuz-$(uname -r)/" \ -e "s/^\(initrd =\).*\$/\1 \/boot\/initrd.img-$(uname -r)/" \ /etc/xen-tools/xen-tools.conf
Configuring the Debian repository mirror
By default, the APT mirror used is located in the United States. It is best to use a mirror closer to your location:
APT_MIRROR=http://ftp.fr.debian.org/debian/
Note: If your server is hosted by OVH, you should use the following mirror:
APT_MIRROR=ftp://mir1.ovh.net/debian/
Once the mirror is chosen, update the Xen Tools configuration:
CLEAN_APT_MIRROR=$(/bin/echo $APT_MIRROR | sed -e 's/\//\\\//g')
/bin/sed -i -e "s/^\(mirror = \).*/\1$CLEAN_APT_MIRROR/" /etc/xen-tools/xen-tools.conf
Creating a Xen Tools role for virtual computer creation
In order to be able to correctly manage our guest operating systems (domU), we now create a Xen Tools role that will do the following on each virtual computer (domU) we are going to create:
- Install the following packages: locales, sudo, ntp, cron-apt, logwatch, rkhunter, chkrootkit, vim, screen, backup-manager and fail2ban.
- Create the system user "xen-upgrade" with its RSA key for SSH authentication, in order to ease upgrading the guest operating systems (this is what we need sudo for).
- Configure Backup Manager to back up the guest operating systems (domU), and copy the backups into the /var/archives folder of the host computer (dom0).
First, we download the configuration file of our new role named "automatic":
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-xen-sur-debian-4-0-etch/automatic \
    --output-document=/etc/xen-tools/role.d/automatic
/bin/chmod +x /etc/xen-tools/role.d/automatic
We create the "xen-backup" system user on the host computer (dom0):
/usr/sbin/adduser --system --shell /bin/sh --home /var/lib/xen-backup --disabled-password xen-backup
And we add it to the "backup" group:
/usr/sbin/adduser xen-backup backup
We create the RSA keys allowing SSH authentication without a password. To do this, create the folder that we will use to store these keys:
/bin/mkdir /etc/xen-tools/ssh-keys
And create the keys for the automatic authentication of the "xen-upgrade" account (created by the "automatic" role on the guest OSes (domU)) and the "xen-backup" account (created earlier on the dom0):
/usr/bin/ssh-keygen -t rsa -C "upgrade account key" -N "" -f /etc/xen-tools/ssh-keys/xen-upgrade-rsa /usr/bin/ssh-keygen -t rsa -C "backup account key" -N "" -f /etc/xen-tools/ssh-keys/xen-backup-rsa
We setup the "xen-backup" user so that we can use the RSA key we just created to authenticated as it:
/bin/mkdir /var/lib/xen-backup/.ssh
/bin/cat /etc/xen-tools/ssh-keys/xen-backup-rsa.pub \
    | /usr/bin/tee -a /var/lib/xen-backup/.ssh/authorized_keys
/bin/chown -R xen-backup:nogroup /var/lib/xen-backup/.ssh
We also install the script allowing us to upgrade the guest operating systems (domU) from the host computer (dom0):
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-xen-sur-debian-4-0-etch/xen-apt-get \
    --output-document=/usr/bin/xen-apt-get
/bin/chmod +x /usr/bin/xen-apt-get
This script allows you to upgrade all of the guest operating systems with these two commands:
xen-apt-get update
xen-apt-get upgrade
You can also upgrade a specific virtual computer (domU). You will learn more about its usage by reading the text displayed by:
xen-apt-get
Easing access to the guest operating systems' (domU) shell (optional)
If you don't want to use password authentication for the root accounts of the guest operating systems, you can create an RSA key for authenticating as root on all guest operating systems. The "automatic" Xen Tools role will install it automatically if it finds it. Please protect this RSA key with a strong passphrase:
if [ ! -e ${HOME}/.ssh/id_rsa ]; then
  /usr/bin/ssh-keygen -t rsa -f ${HOME}/.ssh/id_rsa
fi
Configuring virtual computers skeleton
We now create a skeleton with some default configurations (Exim 4, locales, timezone, bash and vim mainly) in order to reduce the configuration tasks needed by newly created virtual computers (domU).
First, we create the folder tree that will store the skeleton files (it follows the standard Linux layout: the skeleton is copied as-is into the virtual operating system):
/bin/mkdir --parent /etc/xen-tools/skel/root
/bin/mkdir --parent /etc/xen-tools/skel/root/.ssh
/bin/mkdir --parent /etc/xen-tools/skel/etc/default
/bin/mkdir --parent /etc/xen-tools/skel/etc/exim4
/bin/mkdir --parent /etc/xen-tools/skel/etc/cron.d
/bin/mkdir --parent /etc/xen-tools/skel/etc/apt/apt.conf.d
/bin/mkdir --parent /etc/xen-tools/skel/etc/xen-data
/bin/mkdir --parent /etc/xen-tools/skel/usr/bin
We copy the configuration files of the host computer (dom0) into the skeleton, so that the virtual operating systems have a configuration very close to that of our main host:
/bin/cp /root/.bashrc /etc/xen-tools/skel/root/
if [ -e /root/.vimrc ]; then
  /bin/cp /root/.vimrc /etc/xen-tools/skel/root/
fi
/bin/cp /etc/timezone /etc/xen-tools/skel/etc/
/bin/cp /etc/localtime /etc/xen-tools/skel/etc/
/bin/cp /etc/locale.gen /etc/xen-tools/skel/etc/
/bin/cp /etc/environment /etc/xen-tools/skel/etc/
/bin/cp /etc/default/locale /etc/xen-tools/skel/etc/default/
Note: Part of this is only valid for Debian 4.0 Etch and 5.0 Lenny guest operating systems (but it works quite well on Ubuntu XD).
Add the Exim 4 configuration file in order for the root mails of guest operating systems (domU) to be processed by the SMTP server of the host computer (dom0) (we configured it earlier in this guide):
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-xen-sur-debian-4-0-etch/update-exim4-conf-conf.xen \
    --output-document=/etc/xen-tools/skel/etc/exim4/update-exim4.conf.conf
/bin/sed -i -e "s/dc_smarthost=.*\$/dc_smarthost='${MAINDOM_IP}'/" \
    /etc/xen-tools/skel/etc/exim4/update-exim4.conf.conf
echo "root@$(/bin/cat /etc/mailname)" > /etc/xen-tools/skel/root/.email
Add the SSH host key of the host computer (dom0) to the known hosts of the domU SSH client, in order for the backups to work correctly:
/usr/bin/ssh-keyscan -H -t rsa ${MAINDOM_IP} | /usr/bin/tee /etc/xen-tools/skel/root/.ssh/known_hosts
Also add the public RSA key of the Dom0 root account to the authorized keys for the domU root account:
if [ -e ${HOME}/.ssh/id_rsa.pub ]; then
  /bin/cat ${HOME}/.ssh/id_rsa.pub \
      | /usr/bin/tee -a /etc/xen-tools/skel/root/.ssh/authorized_keys
fi
Configuring guest operating systems backups
The "automatic" Xen Tools role install the Backup Manager tool. Most of the guides available on this site will be updated with the needed informations to manage backups with this tool. We will here setup the backup of our virtual computers data:
First, we install into the virtual operating systems' skeleton the update-bm-folders script introduced in my guide Install and configure Backup Manager on Debian 4.0 Etch:
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-backup-manager-sur-debian-4-0-etch/update-bm-folders \
    --output-document=/etc/xen-tools/skel/usr/bin/update-bm-folders
/bin/chmod +x /etc/xen-tools/skel/usr/bin/update-bm-folders
We download the default Backup Manager configuration for the guest operating systems:
/usr/bin/wget http://howto.landure.fr/gnu-linux/debian-4-0-etch/installer-et-configurer-xen-sur-debian-4-0-etch/backup-manager.conf \
    --output-document=/etc/xen-tools/skel/etc/backup-manager.conf
/bin/sed -i -e "s|[#]*\(.*BM_UPLOAD_SSH_HOSTS=\).*$|\1\"${MAINDOM_IP}\"|" \
    /etc/xen-tools/skel/etc/backup-manager.conf
And we add the "xen-backup" RSA private key of the host computer (dom0) to the guest operating systems (domU) skeleton:
/bin/cp /etc/xen-tools/ssh-keys/xen-backup-rsa /etc/xen-tools/skel/etc/xen-data/
Creating a Xen virtual computer
Base system installation
We now create our first Xen virtual computer. First, we set its hostname:
XEN_HOSTNAME=xen-vm1
And we choose an IP address for the guest computer (change the last digit of the example IP):
IP_ADDRESS=10.0.0.3
Now, create the virtual computer by using the command lines below.
Important: Do not lose the password you enter here; it is the root password of your virtual computer.
- For the default distribution (Debian 4.0 Etch or 5.0 Lenny):
/usr/bin/xen-create-image --hostname $XEN_HOSTNAME \
    --ip $IP_ADDRESS --passwd --role automatic
Note: To create a domU obtaining its IP address by DHCP (you may want this if you are using bridge networking), you can use this command:
/usr/bin/xen-create-image --hostname $XEN_HOSTNAME \
    --dhcp --passwd --role automatic
- For Ubuntu 8.04 Hardy Heron:
/usr/bin/xen-create-image --hostname $XEN_HOSTNAME \
    --ip $IP_ADDRESS --passwd --role automatic \
    --dist=hardy --mirror=http://archive.ubuntu.com/ubuntu/
Note: To create a domU obtaining its IP address by DHCP (you may want this if you are using bridge networking), you can use this command:
/usr/bin/xen-create-image --hostname $XEN_HOSTNAME \
    --dhcp --passwd --role automatic \
    --dist=hardy --mirror=http://archive.ubuntu.com/ubuntu/
Note: Ubuntu-based domUs sometimes have problems with the locales configuration. To correct this, you need to generate the locales that fit your language:
locale-gen fr_FR.UTF-8
and edit the file /etc/environment so that it contains "LANG=fr_FR.UTF-8". Once these two settings are applied, restart the domU for them to take effect (there are probably other methods, but this is the simplest way I've found).
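For the /etc/environment part, a possible pair of commands (run them inside the domU; fr_FR.UTF-8 is just the example locale used above, replace it with yours):
/bin/sed -i -e '/^LANG=/d' /etc/environment
/bin/echo "LANG=fr_FR.UTF-8" | /usr/bin/tee -a /etc/environment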
Creating a /var partition for the domU
Once the virtual computer is created, we add a partition for /var. Here, I chose to create a 10 GB partition:
VAR_SIZE=10G
LVM_VG=$(/bin/grep lvm /etc/xen-tools/xen-tools.conf | /bin/sed -e 's/lvm = //')
/sbin/lvcreate -n $XEN_HOSTNAME-var -L ${VAR_SIZE} $LVM_VG
/sbin/mkfs.ext3 /dev/$LVM_VG/$XEN_HOSTNAME-var
We update the new virtual computer configuration to use this partition:
if [ $(/bin/grep "disk[\t ]*=.*\]" /etc/xen/$XEN_HOSTNAME.cfg | /usr/bin/wc --lines) -eq 0 ]; then /bin/sed -i -e "/^disk[\t ]*=.*/a\\ 'phy:$LVM_VG\/$XEN_HOSTNAME-var,sda3,w'," \ /etc/xen/$XEN_HOSTNAME.cfg else /bin/sed -i -e "s/^\(disk[\t ]*=.*\)\]/\1, 'phy:$LVM_VG\/$XEN_HOSTNAME-var,sda3,w' ]/" /etc/xen/$XEN_HOSTNAME.cfg fi
And we edit the fstab file of the virtual computer:
/bin/umount /dev/$LVM_VG/$XEN_HOSTNAME-disk
MOUNT_POINT=$(/bin/mktemp -d)
/bin/mount /dev/${LVM_VG}/${XEN_HOSTNAME}-disk ${MOUNT_POINT}
/bin/echo "/dev/sda3 /var ext3 defaults 0 1" | /usr/bin/tee -a ${MOUNT_POINT}/etc/fstab
VAR_MOUNT_POINT=$(/bin/mktemp -d)
/bin/mount /dev/${LVM_VG}/${XEN_HOSTNAME}-var ${VAR_MOUNT_POINT}
/bin/cp -a ${MOUNT_POINT}/var/* ${VAR_MOUNT_POINT}
/bin/umount ${VAR_MOUNT_POINT}
/bin/umount ${MOUNT_POINT}
Starting the virtual computer (domU)
We can now start our new virtual computer (domU):
/usr/sbin/xm create /etc/xen/$XEN_HOSTNAME.cfg
You can fetch the list of running virtual computers with the command:
/usr/sbin/xm list
We now use SSH to connect to our virtual computer:
/usr/bin/ssh root@$XEN_HOSTNAME
Note: If you have created an RSA key for root authentication, you'll be asked for the RSA key passphrase, not for the Xen domU root password.
"Hardware" access to the virtual computer
If you encounter problems connecting by SSH to your virtual computer, you can access the console (tty) of your newly created domain with:
/usr/sbin/xm console $XEN_HOSTNAME
Warning: The Xen console acts as a serial console. This can be a problem if you use Vim.
To exit this virtual console, use the shortcut: <Ctrl>+<]>
Automatic startup of virtual computers
If you want some of your virtual computers to start automatically when your main system starts (this is needed for servers), you need to create an "auto" folder:
/bin/mkdir /etc/xen/auto
Once this is done, place the configuration files of the virtual computers that need to be started automatically into this folder. For example, to start the "apache2" virtual computer automatically, use this command:
/bin/mv /etc/xen/apache2.cfg /etc/xen/auto/
Thanks
- Thanks to Michele Petrazzo for his article Debian Etch And Xen From The Debian Repository.
- Thanks to TMS for his article Sécuriser l’accès au serveur openssh avec une authentification par clés on Génération Libre.
- Thanks to Backup Manager tool developers.
- Thanks to Nyarla on irc.freenode.net#ubuntu-fr for answering my questions on Backup Manager.
- Thanks to the author of Backup Manager guide on documentation Ubuntu francophone.
- Thanks to Falko for his guide The Perfect Xen 3.1.0 Setup For Debian Etch.
- Thanks to the author of Xen Networking on Xen dedicated Wiki.
- Thanks to GaB for his article Manage SSH known_hosts entries.