Cédric Bosdonnat - Virtualizationhttps://bosdonnat.fr/2017-09-08T09:24:00+02:00A geek's perspectiveVirt-bootstrap 1.0.0 released2017-09-08T09:24:00+02:002017-09-08T09:24:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2017-09-08:/virt-bootstrap-100.html<p>Yesterday, <a href="https://github.com/virt-manager/virt-bootstrap">virt-bootstrap</a> came to life. This tool aims at
simplifying the creation of root file systems for use with libvirt's
LXC container driver. I started prototyping it a few months ago and
<strong>Radostin Stoyanov</strong> wonderfully took it over during this year's
<a href="https://www.redhat.com/archives/virt-tools-list/2017-August/msg00248.html">Google Summer of Code</a>.</p>
<p>For most users, this tool …</p><p>Yesterday, <a href="https://github.com/virt-manager/virt-bootstrap">virt-bootstrap</a> came to life. This tool aims at
simplifying the creation of root file systems for use with libvirt's
LXC container driver. I started prototyping it a few months ago and
<strong>Radostin Stoyanov</strong> wonderfully took it over during this year's
<a href="https://www.redhat.com/archives/virt-tools-list/2017-August/msg00248.html">Google Summer of Code</a>.</p>
<p>For most users, this tool will just be used by <a href="https://virt-manager.org/">virt-manager</a>
(since version 1.4.2), but it can also be used directly from any script or
the command line.</p>
<p>The nice thing about virt-bootstrap is that it allows you to create
a root file system out of existing docker images, tarballs or <a href="http://libguestfs.org/virt-builder.1.html">virt-builder</a>
templates. For example, the following command will get and unpack the official
openSUSE docker image into <code>/tmp/foo</code>.</p>
<div class="highlight"><pre><span></span><code><span class="gp">$</span> virt-bootstrap docker://opensuse /tmp/foo
</code></pre></div>
<p>Virt-bootstrap offers options to:</p>
<ul>
<li>generate a qcow2 image with a backing chain instead of a plain folder</li>
<li>apply user / group ID mapping</li>
<li>set the root password in the container</li>
</ul>
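<p>As a sketch of those options (the exact flag spellings may differ between versions; check <code>virt-bootstrap --help</code>), the commands below are printed rather than run so the snippet is safe to execute anywhere:</p>

```shell
#!/bin/sh
# Hypothetical virt-bootstrap invocations illustrating the options above.
# Shown as dry-run echoes; drop the "echo" to actually run them.

# 1. Root file system as a qcow2 image with a backing chain:
echo virt-bootstrap --format qcow2 docker://opensuse /tmp/foo

# 2. Map container UIDs/GIDs to an unprivileged range on the host:
echo virt-bootstrap --idmap 0:1000:10 docker://opensuse /tmp/foo

# 3. Set the root password inside the new root file system:
echo virt-bootstrap --root-password secret docker://opensuse /tmp/foo
```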
<p>Enjoy easy container creation with the libvirt ecosystem, and have fun!</p>System container images2017-03-08T16:37:00+01:002017-03-08T16:37:00+01:00Cédric Bosdonnattag:bosdonnat.fr,2017-03-08:/system-container-images.html<p>As of today, creating a root file system for a libvirt LXC system container is a pain.
Docker's fun comes from its image-sharing idea... why couldn't we do the same
for libvirt containers? I will present here an attempt at this.</p>
<p>To achieve such a goal we need:</p>
<ul>
<li>container images</li>
<li>something …</li></ul><p>As of today, creating a root file system for a libvirt LXC system container is a pain.
Docker's fun comes from its image-sharing idea... why couldn't we do the same
for libvirt containers? I will present here an attempt at this.</p>
<p>To achieve such a goal we need:</p>
<ul>
<li>container images</li>
<li>something to share them</li>
<li>a tool to pull and use them</li>
</ul>
<h2>Container images</h2>
<p>The <a href="http://openbuildservice.org/">OpenBuildService</a>, thanks to <a href="https://opensuse.github.io/kiwi/">kiwi</a>, knows how to create images,
even container images. There are even <a href="https://build.opensuse.org/project/subprojects/Virtualization:containers:images">openSUSE Docker images</a>.
To use them as system container images, some more packages need to be added
to them. I thus forked the project on GitHub and branched the OBS projects
to get system container images for <a href="http://download.opensuse.org/repositories/home:/cbosdonnat:/branches:/Virtualization:/containers:/images:/openSUSE-42.1/images/">42.1</a>, <a href="http://download.opensuse.org/repositories/home:/cbosdonnat:/branches:/Virtualization:/containers:/images:/openSUSE-42.2/images/">42.2</a> and <a href="http://download.opensuse.org/repositories/home:/cbosdonnat:/branches:/Virtualization:/containers:/images:/openSUSE-Tumbleweed/images/">Tumbleweed</a>.</p>
<p>Using them is as simple as downloading them, unpacking them, and using them as a
container's root file system. However, sharing them would be even more fun!</p>
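<p>Concretely, bootstrapping a container this way is only a download and an unpack. The sketch below uses a tiny stand-in tarball so it can run anywhere; with the real image, substitute the downloaded <code>openSUSE-42.2-syscontainer-guest-docker.x86_64.tar.xz</code>:</p>

```shell
#!/bin/sh
# Build a tiny stand-in image tarball, then unpack it as a container
# root file system -- the same steps you would do with a real image.
workdir=$(mktemp -d)
rootfs=$(mktemp -d)

mkdir -p "$workdir/etc"
echo opensuse > "$workdir/etc/hostname"
tar -cJf "$workdir.tar.xz" -C "$workdir" .

# Unpacking is the only step needed to get a usable root file system:
tar -xJf "$workdir.tar.xz" -C "$rootfs"
cat "$rootfs/etc/hostname"    # prints "opensuse"
```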
<h2>Sharing images</h2>
<p>There is no need to reinvent the wheel to share the images. We can just
treat them like any other docker image. With the following commands we can
import the image and push it to a remote registry.</p>
<div class="highlight"><pre><span></span><code>docker import openSUSE-42.2-syscontainer-guest-docker.x86_64.tar.xz system/opensuse-42.2
docker tag system/opensuse-42.2 myregistry:5000/system/opensuse-42.2
docker login myregistry:5000
docker push myregistry:5000/system/opensuse-42.2
</code></pre></div>
<p>The good thing with this is that we can even use the <code>docker build</code> and
<code>Dockerfile</code> magic to create customized images and push them to the remote
repository.</p>
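<p>For instance, a minimal <code>Dockerfile</code> customizing the system container image could look like this (the added package is only an illustration):</p>

```dockerfile
# Base the customized image on the imported system container image.
FROM myregistry:5000/system/opensuse-42.2

# Example customization: add a package to the image
# ("vim" is just an illustrative choice).
RUN zypper --non-interactive install vim
```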
<h2>Instantiating containers</h2>
<p>Now we need a tool to get the images from the remote docker registry. Fortunately,
there is a tool that helps a lot with this: <a href="https://github.com/projectatomic/skopeo">skopeo</a>. I wrote a small
<a href="https://github.com/cbosdo/virt-bootstrap">virt-bootstrap</a> tool using it to instantiate the images as
root file systems.</p>
<p>Here is what instantiating a container looks like with it:</p>
<div class="highlight"><pre><span></span><code>virt-bootstrap.py --username myuser <span class="se">\</span>
--root-password <span class="nb">test</span> <span class="se">\</span>
docker://myregistry:5000/system/opensuse-42.2 /path/to/my/container
virt-install --connect lxc:/// -n <span class="m">422</span> --memory <span class="m">250</span> --vcpus <span class="m">1</span> <span class="se">\</span>
--filesystem /path/to/my/container,/ <span class="se">\</span>
--filesystem /etc/resolv.conf,/etc/resolv.conf <span class="se">\</span>
--network <span class="nv">network</span><span class="o">=</span>default
</code></pre></div>
<p>And voilà! Creating an openSUSE 42.2 system container and running it with libvirt
is now super easy!</p>Easy sandboxed apps2015-08-25T09:54:00+02:002015-08-25T09:54:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2015-08-25:/easy-sandboxed-apps.html<p>These days, I wanted to build some SUSE documentation, but <a href="http://opensuse.github.io/daps/">daps</a> drags quite a
few dependencies. Thus I decided to use that occasion to move it to a sandbox.
The idea is very similar to what was detailed in my post on <a href="https://bosdonnat.fr/libvirt-container-for-guiapps.html">containers for
GUI apps</a>, but will be made …</p><p>These days, I wanted to build some SUSE documentation, but <a href="http://opensuse.github.io/daps/">daps</a> drags quite a
few dependencies. Thus I decided to use that occasion to move it to a sandbox.
The idea is very similar to what was detailed in my post on <a href="https://bosdonnat.fr/libvirt-container-for-guiapps.html">containers for
GUI apps</a>, but will be made much easier thanks to the recent progress
in <a href="http://sandbox.libvirt.org">virt-sandbox</a>.</p>
<p><strong>Note that this is only possible with recent virt-sandbox. To make sure you
have virt-sandbox with all features, install it from the <a href="https://software.opensuse.org/ymp/Virtualization/openSUSE_13.2/virt-sandbox.ymp?base=openSUSE%3A13.2&query=virt-sandbox">OBS Virtualization
repository</a>.</strong></p>
<h2>Creating the disk image</h2>
<p>Most of this part will be simplified, as I reused the <code>test-base.qcow2</code> image
created in the <a href="https://bosdonnat.fr/libvirt-container-for-guiapps.html">previous post</a>. I only added a non-root user to it. Doing this
is pretty straightforward thanks to qemu-img's commit feature.</p>
<p>First create a working overlay image based on <code>test-base.qcow2</code>:</p>
<div class="highlight"><pre><span></span><code><span class="err">qemu-img create -f qcow2 \</span>
<span class="err"> -o backing_file=$PWD/test-base.qcow2 \</span>
<span class="err"> daps.qcow2</span>
</code></pre></div>
<p>As a non-privileged user, boot a sandbox running this disk image:</p>
<div class="highlight"><pre><span></span><code><span class="err">virt-sandbox -n daps \</span>
<span class="err"> --privileged \</span>
<span class="err"> -m host-image:/=$PWD/daps.qcow2,format=qcow2 \</span>
<span class="err"> -- \</span>
<span class="err"> /bin/sh</span>
</code></pre></div>
<p>Note that the <code>--privileged</code> parameter keeps you as root in the sandbox.
Otherwise you would be logged in as a user with the same UID as the one you ran
<code>virt-sandbox</code> with. You can make the changes you need in the base image. In our
case, I will add an unprivileged user.</p>
<div class="highlight"><pre><span></span><code><span class="err">useradd -m myuser</span>
</code></pre></div>
<p>As we want this change to propagate to the base image, please refrain from
installing <code>daps</code> or doing other things that you don't want to see in the base
image. Exit the shell to exit the sandbox and get back to your host command.</p>
<p>In order to have a working network later in the sandboxes, we have to install
<code>util-linux-systemd</code> and <code>dhcp-client</code>, as this wasn't done when creating the
base image. For this we need to switch to root and mount the image with the
libguestfs tools, since zypper can't get any network access at this point.</p>
<div class="highlight"><pre><span></span><code><span class="err">sudo guestmount -a $PWD/daps.qcow2 -m /dev/sda:/ /mnt</span>
<span class="err">sudo zypper --root /mnt in dhcp-client util-linux-systemd</span>
<span class="err">sudo guestunmount /mnt</span>
</code></pre></div>
<p>We will now commit all the changes made in our overlay image to the base image:</p>
<div class="highlight"><pre><span></span><code><span class="err">qemu-img commit $PWD/daps.qcow2</span>
</code></pre></div>
<p>Note that you may have to get permissions to write on <code>test-base.qcow2</code> as we
created it as root in the previous post.</p>
<p>Now, we only have to install <code>daps</code> in our now-empty overlay image. To do so,
run the following command:</p>
<div class="highlight"><pre><span></span><code><span class="err">virt-sandbox -n daps \</span>
<span class="err"> --privileged \</span>
<span class="err"> -m host-image:/=$PWD/daps.qcow2,format=qcow2 \</span>
<span class="err"> -N dhcp \</span>
<span class="err"> -- \</span>
<span class="err"> /bin/sh</span>
</code></pre></div>
<p>Note the <code>-N dhcp</code> argument, which runs <code>dhclient</code> and provides you with
network access in the sandbox. This network is limited, since it is user networking.
For more details on it, refer to <a href="http://wiki.qemu.org/Documentation/Networking#User_Networking_.28SLIRP.29">this page</a>.</p>
<p>In the sandbox, we can now install <code>daps</code> normally:</p>
<div class="highlight"><pre><span></span><code><span class="err">zypper ar http://download.opensuse.org/repositories/Documentation:/Tools/openSUSE_13.2/Documentation:Tools.repo</span>
<span class="err">zypper in daps</span>
</code></pre></div>
<p>Exit the shell to end the sandbox: your disk image is ready.</p>
<h2>Running daps</h2>
<p>In order to have a smooth user experience with daps, it is better to create a script to
run virt-sandbox for you. Create an executable <code>~/bin/daps</code> with content similar to
this one:</p>
<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4
5
6
7
8
9</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="ch">#!/bin/sh</span>
virt-sandbox <span class="se">\</span>
-n daps <span class="se">\</span>
-m host-image:/<span class="o">=</span>/path/to/daps.qcow2,format<span class="o">=</span>qcow2 <span class="se">\</span>
-m host-bind:/home/myuser<span class="o">=</span>/home/myuser <span class="se">\</span>
-m ram:/tmp<span class="o">=</span>100MiB <span class="se">\</span>
-m ram:/run<span class="o">=</span>100MiB <span class="se">\</span>
-- <span class="se">\</span>
/usr/bin/daps <span class="s2">&quot;$@&quot;</span>
</code></pre></div>
</td></tr></table>
<p>You can add options to mount host folders in your sandbox. For example, the
following line (already present in the script above) mounts <code>/home/myuser</code> at the same place in the sandbox:</p>
<div class="highlight"><pre><span></span><code><span class="err">-m host-bind:/home/myuser=/home/myuser</span>
</code></pre></div>
<p>Make sure that your documentation sources will be mounted in the sandbox.</p>
<p>When running the <code>daps</code> command on your machine, you will run the daps command
within a super tiny KVM machine running with your UID. Note that I didn't add
the <code>-N dhcp</code> option in the script since daps doesn't need it, but you may need
it for other applications or to update your packages.</p>Libvirt container for GUI apps2015-05-06T10:13:00+02:002015-05-06T10:13:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2015-05-06:/libvirt-container-for-guiapps.html<p>I recently tried to get the <a href="https://en.opensuse.org/openSUSE:Openfate">openFATE</a> client working in a container. I know
it may sound stupid, but I didn't want to pollute my Gnome machine with KDE
libraries (end of troll) and wanted to use this to seriously play with
application containers. My constraints were:</p>
<ul>
<li>minimize the duplication …</li></ul><p>I recently tried to get the <a href="https://en.opensuse.org/openSUSE:Openfate">openFATE</a> client working in a container. I know
it may sound stupid, but I didn't want to pollute my Gnome machine with KDE
libraries (end of troll) and wanted to use this to seriously play with
application containers. My constraints were:</p>
<ul>
<li>minimize the duplication of the root file system. I know docker already does
this, but I'm a libvirt hacker after all.</li>
<li>get the application to show up on the host display for smooth use.</li>
</ul>
<h2>Creating the root file system</h2>
<p>To minimize the file system duplication, I went with a qcow2 disk image with a
backing file. The first step is to create the base image with the file system to be
reused by other containers.</p>
<p>Create the disk image:</p>
<div class="highlight"><pre><span></span><code><span class="err">qemu-img create -f qcow2 test-base.qcow2 15G</span>
</code></pre></div>
<p>Create the nbd device from the disk image, format it and mount it. Depending on
your Linux distribution, you may first need to load the nbd kernel module, hence
the first line.</p>
<div class="highlight"><pre><span></span><code><span class="err">modprobe nbd</span>
<span class="err">/usr/bin/qemu-nbd --format qcow2 -n -c /dev/nbd0 $PWD/test-base.qcow2</span>
<span class="err">mkfs.ext3 /dev/nbd0</span>
<span class="err">mount /dev/nbd0 /mnt</span>
</code></pre></div>
<p>Populate the image with the openSUSE 13.2 Minimal_base pattern:</p>
<div class="highlight"><pre><span></span><code><span class="err">zypper --root /mnt ar http://download.opensuse.org/distribution/13.2/repo/oss/ main</span>
<span class="err">zypper --root /mnt ar http://download.opensuse.org/update/13.2/ updates</span>
<span class="err">zypper --root /mnt in -t pattern Minimal_base</span>
</code></pre></div>
<p>Unmount and clean up the nbd device:</p>
<div class="highlight"><pre><span></span><code><span class="err">umount /mnt</span>
<span class="err">pkill qemu-nbd</span>
</code></pre></div>
<p>Now, we will create an overlay image on top of the one we just created. In this
image, we will only install fate, and create an unprivileged user to run it.</p>
<p>Create the image. Note the <code>backing_file</code> and <code>backing_fmt</code> options as they
will actually setup the backing chain of qcow2 images.</p>
<div class="highlight"><pre><span></span><code><span class="err">qemu-img create -f qcow2 \</span>
<span class="err"> -o backing_fmt=qcow2,backing_file=$PWD/test-base.qcow2 \</span>
<span class="err"> myapp.qcow2</span>
</code></pre></div>
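<p>At this point you can sanity-check the backing chain with <code>qemu-img info</code> (the exact output wording varies slightly between qemu versions; the snippet below skips gracefully when qemu-img is not installed):</p>

```shell
#!/bin/sh
# Recreate a minimal base + overlay pair in a scratch directory and
# verify that the overlay records its backing file.
command -v qemu-img >/dev/null || { echo "qemu-img not installed, skipping"; exit 0; }
cd "$(mktemp -d)" || exit 1
qemu-img create -f qcow2 base.qcow2 1G >/dev/null
qemu-img create -f qcow2 \
    -o backing_fmt=qcow2,backing_file="$PWD/base.qcow2" \
    overlay.qcow2 >/dev/null
qemu-img info overlay.qcow2 | grep -i "backing file"
```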
<p>Mount the new image via <code>qemu-nbd</code> again. Note that there is no need to format
that new image: it is really only a diff with the <code>test-base.qcow2</code> file.</p>
<div class="highlight"><pre><span></span><code><span class="err">/usr/bin/qemu-nbd --format qcow2 -n -c /dev/nbd0 $PWD/myapp.qcow2</span>
<span class="err">mount /dev/nbd0 /mnt</span>
</code></pre></div>
<p>Install the application (fate in this example):</p>
<div class="highlight"><pre><span></span><code><span class="err">zypper --root /mnt ar http://download.opensuse.org/repositories/FATE/openSUSE_13.2/ FATE</span>
<span class="err">zypper --root /mnt in fate</span>
</code></pre></div>
<p>Add an unprivileged user:</p>
<div class="highlight"><pre><span></span><code><span class="err">useradd -R /mnt -m myuser</span>
</code></pre></div>
<p>For the application to find the X server display, set the DISPLAY in the user's
profile. The <code>localhost:0</code> value here can be adjusted depending on the network
settings of the container. In this case, I will use a container without the
netns namespace to simplify, but with the default libvirt network the value
would be <code>192.168.122.1:0</code>.</p>
<div class="highlight"><pre><span></span><code><span class="err">echo 'export DISPLAY=localhost:0' > /mnt/home/myuser/.profile</span>
</code></pre></div>
<p>Unmount and clean up the nbd device:</p>
<div class="highlight"><pre><span></span><code><span class="err">umount /mnt</span>
<span class="err">pkill qemu-nbd</span>
</code></pre></div>
<h2>Setting up the container</h2>
<p>Sadly, there is no way to create the container other than manually
feeding the XML definition to libvirt. Here is a template for the definition:</p>
<div class="highlight"><pre><span></span><code><span class="nt"><domain</span> <span class="na">type=</span><span class="s">'lxc'</span><span class="nt">></span>
<span class="nt"><name></span>myapp<span class="nt"></name></span>
<span class="nt"><memory</span> <span class="na">unit=</span><span class="s">'MiB'</span><span class="nt">></span>256<span class="nt"></memory></span>
<span class="nt"><vcpu</span> <span class="na">placement=</span><span class="s">'static'</span><span class="nt">></span>1<span class="nt"></vcpu></span>
<span class="nt"><resource></span>
<span class="nt"><partition></span>/machine<span class="nt"></partition></span>
<span class="nt"></resource></span>
<span class="nt"><os></span>
<span class="nt"><type</span> <span class="na">arch=</span><span class="s">'x86_64'</span><span class="nt">></span>exe<span class="nt"></type></span>
<span class="nt"><init></span>/usr/bin/su<span class="nt"></init></span>
<span class="nt"><initarg></span>-<span class="nt"></initarg></span>
<span class="nt"><initarg></span>myuser<span class="nt"></initarg></span>
<span class="nt"><initarg></span>-c<span class="nt"></initarg></span>
<span class="nt"><initarg></span>/usr/bin/fate<span class="nt"></initarg></span>
<span class="nt"></os></span>
<span class="nt"><clock</span> <span class="na">offset=</span><span class="s">'utc'</span><span class="nt">/></span>
<span class="nt"><on_poweroff></span>destroy<span class="nt"></on_poweroff></span>
<span class="nt"><on_reboot></span>restart<span class="nt"></on_reboot></span>
<span class="nt"><on_crash></span>destroy<span class="nt"></on_crash></span>
<span class="nt"><devices></span>
<span class="nt"><controller</span> <span class="na">type=</span><span class="s">'ide'</span> <span class="na">index=</span><span class="s">'0'</span><span class="nt">/></span>
<span class="nt"><filesystem</span> <span class="na">type=</span><span class="s">'file'</span><span class="nt">></span>
<span class="nt"><driver</span> <span class="na">type=</span><span class="s">'nbd'</span> <span class="na">format=</span><span class="s">'qcow2'</span><span class="nt">/></span>
<span class="nt"><source</span> <span class="na">file=</span><span class="s">'/path/to/myapp.qcow2'</span><span class="nt">/></span>
<span class="nt"><target</span> <span class="na">dir=</span><span class="s">'/'</span><span class="nt">/></span>
<span class="nt"></filesystem></span>
<span class="nt"><filesystem</span> <span class="na">type=</span><span class="s">'mount'</span><span class="nt">></span>
<span class="nt"><source</span> <span class="na">dir=</span><span class="s">'/home/myuser/.Xauthority'</span><span class="nt">/></span>
<span class="nt"><target</span> <span class="na">dir=</span><span class="s">'/home/myuser/.Xauthority'</span><span class="nt">/></span>
<span class="nt"></filesystem></span>
<span class="nt"><filesystem</span> <span class="na">type=</span><span class="s">'ram'</span><span class="nt">></span>
<span class="nt"><source</span> <span class="na">usage=</span><span class="s">'10240'</span> <span class="na">units=</span><span class="s">'KiB'</span><span class="nt">/></span>
<span class="nt"><target</span> <span class="na">dir=</span><span class="s">'/run'</span><span class="nt">/></span>
<span class="nt"></filesystem></span>
<span class="nt"><filesystem</span> <span class="na">type=</span><span class="s">'ram'</span><span class="nt">></span>
<span class="nt"><source</span> <span class="na">usage=</span><span class="s">'102400'</span> <span class="na">units=</span><span class="s">'KiB'</span><span class="nt">/></span>
<span class="nt"><target</span> <span class="na">dir=</span><span class="s">'/tmp'</span><span class="nt">/></span>
<span class="nt"></filesystem></span>
<span class="nt"><console</span> <span class="na">type=</span><span class="s">'pty'</span><span class="nt">/></span>
<span class="nt"></devices></span>
<span class="nt"><seclabel</span> <span class="na">type=</span><span class="s">'dynamic'</span> <span class="na">model=</span><span class="s">'apparmor'</span> <span class="na">relabel=</span><span class="s">'yes'</span><span class="nt">/></span>
<span class="nt"></domain></span>
</code></pre></div>
<p>In this template, the memory, vcpu and paths to images need to be adapted to
your setup. Note that the qcow2 image is mounted in the container using the
filesystem nbd driver.</p>
<p>The host user's .Xauthority file is mounted in the container's user home. This
is needed for the X applications to connect to the host display.</p>
<p>For the application to be automatically launched as the unprivileged user, the
definition init command is set to</p>
<div class="highlight"><pre><span></span><code><span class="err">/usr/bin/su - myuser -c /usr/bin/fate</span>
</code></pre></div>
<h2>Running the application</h2>
<p>Before being able to connect to the host Xorg server, we need to have it listen
for TCP connections. In openSUSE, this can be achieved by changing
<code>DISPLAYMANAGER_XSERVER_TCP_PORT_6000_OPEN</code> to <code>yes</code> in
<code>/etc/sysconfig/displaymanager</code>.</p>
<p>The application needs to be able to authenticate on the host Xorg server. For
my tests I manually crafted the .Xauthority using xauth. Just remember that
the cookie will change, so either regenerate it before each start or mount
<code>$XAUTHORITY</code> directly to the container's <code>.Xauthority</code>.</p>
<div class="highlight"><pre><span></span><code><span class="err">xauth extract - $DISPLAY | xauth -f $HOME/.Xauthority merge -</span>
</code></pre></div>
<p>Running the application is as easy as starting the container:</p>
<div class="highlight"><pre><span></span><code><span class="err">virsh -c lxc:/// start --console myapp</span>
</code></pre></div>
<p>Of course it would be much more convenient to have it wrapped in a script with
proper sudo configuration to let normal users run the container without needing
to become root.</p>
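<p>Such a wrapper could be as small as the following sketch (the sudoers rule in the comment is a hypothetical example; adjust paths and the container name to your setup):</p>

```shell
#!/bin/sh
# Generate the wrapper script and check its syntax. Pair it with a
# sudoers rule (hypothetical example) such as:
#   myuser ALL=(root) NOPASSWD: /usr/bin/virsh -c lxc\:/// start --console myapp
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
exec sudo virsh -c lxc:/// start --console myapp
EOF
chmod +x "$wrapper"
sh -n "$wrapper" && echo "wrapper syntax OK"
```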
<h2>Limitations</h2>
<p>This approach is still pretty complex and not that easy for newcomers.</p>
<ul>
<li>The image creation process will be simplified as part of
<a href="http://qemu-project.org/Google_Summer_of_Code_2015#Running_docker_containers_using_virt-sandbox">Google Summer of Code 2015 with libvirt</a>.</li>
<li>The container definition and start could be done with <a href="http://sandbox.libvirt.org/">virt-sandbox</a>, but the
host-image mount parameter should first be able to take qcow2 images.</li>
<li>libvirt doesn't stop the <code>qemu-nbd</code> process when the container is stopped, but
hey, that's a bug I can work on!</li>
</ul>Hackweek 122015-04-17T13:50:00+02:002015-04-17T13:50:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2015-04-17:/hackweek-12.html<p>This week was <a href="https://hackweek.suse.com">hackweek 12</a> at SUSE. I decided to do whatever was needed to get
the <a href="http://openbuildservice.org">open build service (aka OBS)</a> able to create <a href="http://libguestfs.org/virt-builder.1.html">virt-builder</a> image repositories.
OBS is already able to create VM images using kiwi, but it is not able to generate the
signed index file and …</p><p>This week was <a href="https://hackweek.suse.com">hackweek 12</a> at SUSE. I decided to do whatever was needed to get
the <a href="http://openbuildservice.org">open build service (aka OBS)</a> able to create <a href="http://libguestfs.org/virt-builder.1.html">virt-builder</a> image repositories.
OBS is already able to create VM images using kiwi, but it is not able to generate the
signed index file and publish the compressed images.</p>
<p>I owe many thanks to <a href="http://www.adrian-schroeter.de">Adrian</a> for setting me on the right track to get this done.
In short, there are two pieces needed to reach the goal:</p>
<ul>
<li>a kiwi hook script running after the kiwi build. How to do this is largely described
in the <a href="https://github.com/openSUSE/containment-rpm/blob/master/README.rst">containment-rpm README</a>.</li>
<li>a patch for the bs_publisher script to create the index from the parts generated for
each image, sign it and publish it together with the compressed images.</li>
</ul>
<p>So far a project is ready in openSUSE's build service in the <a href="https://build.opensuse.org/project/show/home:cbosdonnat:Builder">home:cbosdonnat:Builder</a>
project. It is only waiting for <a href="https://github.com/openSUSE/open-build-service/pull/909">PR#909</a> to land on this instance of OBS.</p>
<p>With this, I hope we will soon be able to provide official openSUSE images for our
openSUSE virt-builder users.</p>Hackweek 112014-10-25T18:03:00+02:002014-10-25T18:03:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2014-10-25:/hackweek-11.html<p>Last week was <a href="https://hackweek.suse.com">hackweek 11</a> at SUSE. I chose to work on starting an Android
client application for libvirt. I know there is already something like this on
Google Play, but it's closed source and seems unmaintained... and anyway it
helped me dive into the Android NDK and native code …</p><p>Last week was <a href="https://hackweek.suse.com">hackweek 11</a> at SUSE. I chose to work on starting an Android
client application for libvirt. I know there is already something like this on
Google Play, but it's closed source and seems unmaintained... and anyway it
helped me dive into the Android NDK and native code building.</p>
<p>As of today, the application can connect to a remote server using libssh2 and list
the domains on it. The good thing is that the hard work to get libvirt.so, JNA
and the Java code running together is solved.</p>
<p>The code can be found on <a href="https://github.com/cbosdo/libvirt-droid">github.com/cbosdo/libvirt-droid</a> even if it still
needs a huge bunch of love!</p>Migrating an LXC container to libvirt lxc2014-07-17T15:41:00+02:002014-07-17T15:41:00+02:00Cédric Bosdonnattag:bosdonnat.fr,2014-07-17:/migrate-lxc-container-to-libvirt.html<p>I have now been working on libvirt and the tools gravitating around it for almost a
year and still haven't blogged about anything related to it. As a first post on
virtualization, I'll tell you how to migrate your LXC container to use the
libvirt goodness.</p>
<p>In order to achieve the …</p><p>I have now been working on libvirt and the tools gravitating around it for almost a
year and still haven't blogged about anything related to it. As a first post on
virtualization, I'll tell you how to migrate your LXC container to use the
libvirt goodness.</p>
<p>In order to achieve the migration, the host machine needs to be upgraded to
openSUSE 13.1 or later. Quite a few of the required features are only in
Factory, so those of you running 13.1 will need to add the Virtualization
repository like this:</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.1/Virtualization.repo</span>
</code></pre></div>
<p>The first thing to do is to install <strong>libvirt-daemon-lxc</strong>, which pulls in all the
needed packages to run containers on libvirt. Of course, installing
<strong>virt-manager</strong> will also provide you with a convenient GUI to handle them.</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# zypper in libvirt-daemon-lxc</span>
</code></pre></div>
<p>The LXC container files usually live in the <strong>/var/lib/lxc/</strong> folder.
Let's assume for this example that we have an LXC container named
<strong>mycontainer</strong> to migrate.</p>
<p>To avoid the boring task of creating a similar configuration for the libvirt
container, use <strong>virt-lxc-convert</strong> to generate an equivalent libvirt domain
configuration for the container.</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# virt-lxc-convert /var/lib/lxc/mycontainer/config >mycontainer.xml</span>
</code></pre></div>
<p>The dumped configuration file needs to be reviewed before feeding it to
libvirt. Most of the configuration will be OK, but the network configuration may
need some adjustments. For example, for now, libvirt doesn't have any equivalent
to <strong>lxc.network.ipv*</strong>.</p>
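<p>One possible workaround (a sketch only; file names and syntax depend on the distribution inside the container, and the address is a hypothetical example) is to configure the address statically in the container's root file system, e.g. in an openSUSE-style ifcfg file:</p>

```
# /var/lib/lxc/mycontainer/rootfs/etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
IPADDR='192.168.122.10/24'
STARTMODE='auto'
```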
<p>When the configuration is ready, define the libvirt container using the
following command:</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# virsh -c lxc:/// define mycontainer.xml</span>
</code></pre></div>
<p>To be able to connect as root on the console, <strong>/dev/pts/0</strong> needs to be added
to the container's <strong>securetty</strong> file:</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# echo "pts/0" >> /var/lib/lxc/mycontainer/rootfs/etc/securetty</span>
</code></pre></div>
<p>Due to <a href="https://bugzilla.redhat.com/show_bug.cgi?id=966807">rhbz#966807</a>, kernel 3.14 or later is required to be able to
login on the container.</p>
<p>After this you should be able to start and connect to the container console
using <strong>virsh</strong> or <strong>virt-manager</strong>.</p>
<h2>Updating the openSUSE in the container</h2>
<p>This is just normal zypper manipulation, except that the --root parameter needs to
be added to tell zypper to work on the container's root file system. In the
following commands this will just be aliased as <em>zypper-mycont</em>.</p>
<p>The following example commands will just replace the container repositories with
openSUSE 13.1 repositories, refresh them and run a dist upgrade.</p>
<div class="highlight"><pre><span></span><code><span class="err">host ~# alias zypper-mycont="zypper --root=/var/lib/lxc/mycontainer/rootfs"</span>
<span class="err">host ~# zypper-mycont lr</span>
<span class="err"># | Alias | Name | Enabled | Refresh</span>
<span class="err">--+----------+----------+---------+--------</span>
<span class="err">1 | repo-oss | repo-oss | Yes | No</span>
<span class="err">2 | update | update | Yes | No</span>
<span class="err">host ~# zypper-mycont rr repo-oss</span>
<span class="err">host ~# zypper-mycont rr update</span>
<span class="err">host ~# zypper-mycont ar http://download.opensuse.org/distribution/13.1/repo/oss repo-oss</span>
<span class="err">host ~# zypper-mycont ar http://download.opensuse.org/update/13.1/ update</span>
<span class="err">host ~# zypper-mycont ref</span>
<span class="err">host ~# zypper-mycont dup</span>
</code></pre></div>