diff --git a/README.md b/README.md
index ded9bee7d412..b004af9e941f 100644
--- a/README.md
+++ b/README.md
@@ -1,27 +1,18 @@
# Table of contents
* [ Introduction ](#intro)
-* [ SLES-15 ](#sles-15)
- * [ Prepare Host OS ](#sles-15-host)
- * [ Prepare VM ](#sles-15-prep-vm)
- * [ Launch SEV VM ](#sles-15-launch-vm)
-* [ Fedora-28 ](#fc-28)
- * [ Prepare Host OS ](#fc-28-host)
- * [ Prepare VM ](#fc-28-prep-vm)
- * [ Launch SEV VM ](#fc-28-launch-vm)
-* [ Ubuntu-18.04 ](#ubuntu18)
- * [ Prepare Host OS ](#ubuntu18-host)
- * [ Prepare VM ](#ubuntu18-prep-vm)
- * [ Launch SEV VM ](#ubuntu18-launch-vm)
-* [ openSuse-Tumbleweed](#tumbleweed)
- * [ Prepare Host OS ](#tumbleweed-host)
- * [ Launch SEV VM ](#tumbleweed-launch-vm)
-* [ Additional resources ](#resources)
+* [ Kata Containers with SEV ](#kata-sev)
+ * [ External Dependencies ](#kata-deps)
+ * [ Ubuntu-18.04 ](#ubuntu18)
+ * [ Prepare Host OS ](#ubuntu18-kata-host)
+ * [ Install Kata ](#ubuntu18-kata-install)
+ * [ Launch SEV Containers ](#ubuntu18-kata-launch)
+* [ Additional Resources ](#resources)
* [ FAQ ](#faq)
- * [ How do I know if Hypervisor supports SEV ](#faq-1)
- * [ How do I know if SEV is enabled in the guest](#faq-2)
- * [ Can I use virt-manager to launch SEV guest](#faq-3)
- * [ How to increase SWIOTLB limit](#faq-4)
- * [ virtio-blk fails with out-of-dma-buffer error](#faq-5)
+ * [ How do I know if my hypervisor supports SEV? ](#faq-1)
+ * [ How do I know if SEV is enabled in the guest? ](#faq-2)
+ * [ Can I use virt-manager to launch SEV guests? ](#faq-3)
+ * [ virtio-blk devices fail with an out-of-dma-buffer error! ](#faq-4)
+ * [ How do I increase the SWIOTLB limit? ](#faq-5)
# Secure Encrypted Virtualization (SEV)
@@ -30,343 +21,149 @@ SEV is an extension to the AMD-V architecture which supports running encrypted
virtual machines (VMs) under the control of KVM. Encrypted VMs have their pages
(code and data) secured such that only the guest itself has access to the
unencrypted version. Each encrypted VM is associated with a unique encryption
-key; if its data is accessed to a different entity using a different key the
-encrypted guests data will be incorrectly decrypted, leading to unintelligible
-data.
+key; if the guest's data is accessed by a different entity using a different
+key, it will be incorrectly decrypted, yielding unintelligible plaintext.
SEV support has been accepted in upstream projects. This repository provides
scripts to build various components to enable SEV support until the distros
-pick the newer version of components.
+include the newer versions.
-To enable the SEV support we need the following versions.
+
+# Kata Containers with SEV
-| Project | Version |
-| ------------- |:------------------------------------:|
-| kernel | >= 4.16 |
-| libvirt | >= 4.5 |
-| qemu | >= 2.12 |
-| ovmf | >= commit (75b7aa9528bd 2018-07-06 ) |
-
-> * Installing newer libvirt may conflict with existing setups hence script does
-> not install the newer version of libvirt. If you are interested in launching
-> SEV guest through the virsh commands then build and install libvirt 4.5 or
-> higher. Use LaunchSecurity tag https://libvirt.org/formatdomain.html#sev for
-> creating the SEV enabled guest.
->
-> * SEV support is not available in SeaBIOS. Guest must use OVMF.
-
-
-
-## SLES-15
-
-SUSE Linux Enterprise Server 15 GA includes the SEV support; we do not need
-to compile the sources.
-
-> SLES-15 does not contain the updated libvirt packages yet hence we will
-use QEMU command line interface to launch VMs.
-
-
-### Prepare Host OS
-
-SEV is not enabled by default, lets enable it through kernel command line:
-
-Append the following in /etc/defaults/grub
-
-```
-GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"
-```
-
-Regenerate grub.cfg and reboot the host
-
-```
-# grub2-mkconfig -o /boot/efi/EFI/sles/grub.cfg
-# reboot
-```
-
-Install the qemu launch script. The launch script can be obtained from this project.
-
-```
-# git clone https://github.com/AMDESE/AMDSEV.git
-# cd AMDSEV/distros/sles-15
-# ./build.sh
-```
-
-### Prepare VM image
-
-Create empty virtual disk image
-
-```
-# qemu-img create -f qcow2 sles-15.qcow2 30G
-```
-
-Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used
-to emulate persistent NVRAM storage. Each VM needs a private, writable
-copy of VARS.fd.
-
-```
-#cp /usr/share/qemu/ovmf-x86_64-suse-4m-vars.bin OVMF_VARS.fd
-```
-
-Download and install sles-15 guest
-
-```
-# launch-qemu.sh -hda sles-15.qcow2 -cdrom SLE-15-Installer-DVD-x86_64-GM-DVD1.iso -nosev
-```
-Follow the screen to complete the guest installation.
-
-
-### Launch VM
-
-Use the following command to launch SEV guest
-
-```
-# launch-qemu.sh -hda sles-15.qcow2
-```
-NOTE: when guest is booting, CTRL-C is mapped to CTRL-], use CTRL-] to stop the guest
-
-
-## Fedora-28
-
-Fedora-28 includes newer kernel and ovmf packages but has older qemu. We will need to update the QEMU to launch SEV guest.
-
-
-### Prepare Host OS
-
-SEV is not enabled by default, lets enable it through kernel command line:
-
-Append the following in /etc/defaults/grub
-
-```
-GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"
-```
-
-Regenerate grub.cfg and reboot the host
-
-```
-# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
-# reboot
-```
-
-Build and install newer qemu
-
-```
-# cd distros/fedora-28
-# ./build.sh
-```
-
-
-### Prepare VM image
-
-Create empty virtual disk image
-
-```
-# qemu-img create -f qcow2 fedora-28.qcow2 30G
-```
-
-Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used
-to emulate persistent NVRAM storage. Each VM needs a private, writable
-copy of VARS.fd.
-
-```
-# cp /usr/share/OVMF/OVMF_VARS.fd OVMF_VARS.fd
-```
+[ Kata Containers ](https://katacontainers.io) is an OpenStack Foundation project designed to leverage hardware virtualization technology to provide maximum isolation for container workloads in cloud environments. On AMD systems, SEV can be applied to further protect the confidentiality of container workloads from the host and other tenant containers.
-Download and install fedora-28 guest
+
+## External Dependencies
-```
-# launch-qemu.sh -hda fedora-28.qcow2 -cdrom Fedora-Workstation-netinst-x86_64-28-1.1.iso
-```
-Follow the screen to complete the guest installation.
+To enable SEV support with Kata Containers, the following component versions are required:
-
-### Launch VM
+| Project | Version |
+|---------------|--------------------------------------|
+| kernel | >= 4.17 |
+| qemu | >= 3.0 |
+| ovmf          | >= commit 75b7aa9528bd (2018-07-06)  |
-Use the following command to launch SEV guest
+> NOTE: SEV support is not available in SeaBIOS. Guests must use OVMF.
-```
-# launch-qemu.sh -hda fedora-28.qcow2
-```
-
-NOTE: when guest is booting, CTRL-C is mapped to CTRL-], use CTRL-] to stop the guest
+The [ Prepare Host OS ](#ubuntu18-kata-host) section contains instructions for satisfying these dependencies.
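If you are unsure whether an existing host already meets the kernel requirement, a quick check can be sketched in shell. This is an illustration, not part of this repository; `version_ge` is a helper introduced here, and the 4.17 floor comes from the table above:

```shell
#!/bin/sh
# version_ge returns success when $1 >= $2, using sort -V for version ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the running kernel (numeric part only) against the 4.17 minimum.
kernel="$(uname -r | cut -d- -f1)"
if version_ge "$kernel" "4.17"; then
    echo "kernel $kernel meets the >= 4.17 requirement"
else
    echo "kernel $kernel is too old; build the SEV kernel as described below"
fi
```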
## Ubuntu 18.04
-Ubuntu 18.04 does not includes the newer version of components to be used as SEV
-hypervisor hence we will build and install newer kernel, qemu, ovmf.
+The packaged versions of the Linux kernel, qemu, and OVMF in Ubuntu 18.04 do not yet support SEV, so it is necessary to build them from source.
-
+
### Prepare Host OS
-* Enable source repositories [See](https://askubuntu.com/questions/158871/how-do-i-enable-the-source-code-repositories)
+The **build.sh** script in the distros/ubuntu-18.04 directory will build and install SEV-capable versions of the host kernel, qemu, and OVMF:
-* Build and install newer components
+> NOTE: build.sh will use 'sudo' as necessary to gain privileges to install files, so build.sh should be run as a normal user.
```
-# cd distros/ubuntu-18.04
-# ./build.sh
+$ cd distros/ubuntu-18.04
+$ ./build.sh
```
-
-### Prepare VM image
-
-Create empty virtual disk image
+Once the kernel has been installed, reboot and choose the SEV kernel:
```
-# qemu-img create -f qcow2 ubuntu-18.04.qcow2 30G
+$ sudo reboot
```
-Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used
-to emulate persistent NVRAM storage. Each VM needs a private, writable
-copy of VARS.fd.
+At this point, the host is ready to act as an SEV-capable hypervisor. For more information about running SEV guests, see [README.md](https://github.com/AMDESE/AMDSEV/blob/master/README.md) in the master branch.
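To confirm that the module-level switch took effect after the reboot, inspect kvm_amd's sev parameter. The helper below is an illustration (not part of build.sh) that interprets the values the kernel exposes ("1", or "Y" on some kernels, when SEV is enabled):

```shell
#!/bin/sh
# sev_enabled interprets the contents of /sys/module/kvm_amd/parameters/sev.
sev_enabled() {
    case "$1" in
        1|Y|y) return 0 ;;
        *)     return 1 ;;
    esac
}

# On a real host:
#   sev_enabled "$(cat /sys/module/kvm_amd/parameters/sev)" && echo "SEV enabled"
```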
-```
-# cp /usr/local/share/qemu/OVMF_VARS.fd OVMF_VARS.fd
-```
-
-Install ubuntu-18.04 guest
-
-```
-# launch-qemu.sh -hda ubuntu-18.04.qcow2 -cdrom ubuntu-18.04-desktop-amd64.iso
-```
-Follow the screen to complete the guest installation.
+
+### Install Kata
-
-### Launch VM
+Once the host is running an SEV-capable kernel, execute **build-kata.sh** in distros/ubuntu-18.04 to build, install, and configure the Kata Containers system along with the latest version of Docker CE:
-Use the following command to launch SEV guest
+> NOTE: build-kata.sh will use 'sudo' as necessary to gain privileges to install files, so build-kata.sh should be run as a normal user.
```
-# launch-qemu.sh -hda ubuntu-18.04.qcow2
+$ cd distros/ubuntu-18.04
+$ ./build-kata.sh
```
-NOTE: when guest is booting, CTRL-C is mapped to CTRL-], use CTRL-] to stop the guest
-
-## openSUSE-Tumbleweed
+At this point, docker is installed and configured to use the SEV-capable kata-runtime as the default runtime for containers. In addition, kata-runtime is configured to use SEV for all containers by default.
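The configured runtimes can be verified with `docker info`, which should list the sev-runtime name added by configure_kata_runtime (output abbreviated and illustrative):

```
$ sudo docker info | grep -i runtime
Runtimes: runc kata-runtime sev-runtime
Default Runtime: sev-runtime
```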
-Latest version of openSUSE Tumbleweed distro contains all the pre-requisite packages to launch an SEV guest. But the SEV feature is not enabled by default, this section documents how to enable the SEV feature.
+
+### Launch SEV Containers
-
-### Prepare Host OS
-
-* Add new udev rule for the /dev/sev device
-
- ```
- # cat /etc/udev/rules.d/71-sev.rules
- KERNEL=="sev", MODE="0660", GROUP="kvm"
- ```
-* Clean libvirt caches so that on restart libvirt re-generates the capabilities
-
- ```
- # rm -rf /var/cache/libvirt/qemu/capabilities/
- # systemctl restart libvirtd
- ```
-* SEV feature is not enabled in kernel by default, lets enable it through kernel command line:
-
- Append the following in /etc/defaults/grub
- ```
- GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"
- ```
- Regenerate grub.cfg and reboot the host
-
- ```
- # grub2-mkconfig -o /boot/efi/EFI/opensuse/grub.cfg
- # reboot
- ```
-
-
-### Launch SEV VM
-
-Since virt-manager does not support SEV yet hence we need to use 'virsh' command to launch the SEV guest. See xmls/sample.xml on how to add SEV specific information in existing xml. Use the following command to launch SEV guest
+Use the following command to launch a busybox container protected by SEV:
```
-# virsh create sample.xml
+$ sudo docker run -it busybox sh
```
-> The sample xml was generated through virt-manager and then edited with SEV specific information. The main changes are:
->
->* For virtio devices we need to enable DMA APIs. The DMA APIs are enable through (aka iommu_platform=on) tag
-
-```
-
-
-
-
-
- ```
-> * Add LaunchSecurity tag to tell libvirt to enable memory-encryption
+To verify that SEV is active in the guest, look for messages in the kernel logs containing "SEV":
```
-
- 0x0001
- 47
- 1
-
+# dmesg | grep SEV
+ [ 0.001000] AMD Secure Encrypted Virtualization (SEV) active
+ [ 0.219196] SEV is active and system is using DMA bounce buffers
```
-> * QEMU pins the guest memory during the SEV guest launch hence we need to set the domain specific memory parameters to raise the memlock rlimits. e.g the below memtune tags raise the memlock limit to 5GB.
-
-```
-
- 5
- 5
-
-```
-
# Additional Resources
-[SME/SEV white paper](http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf)
+[SME/SEV White Paper](http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf)
-[SEV API Spec](http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf)
+[SEV Key Management API Spec](http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf)
[APM Section 15.34](http://support.amd.com/TechDocs/24593.pdf)
-[KVM forum slides](http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf)
+[KVM Forum Slides](http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf)
-[KVM forum videos](https://www.youtube.com/watch?v=RcvQ1xN55Ew)
+[KVM Forum Videos](https://www.youtube.com/watch?v=RcvQ1xN55Ew)
-[Linux kernel](https://elixir.bootlin.com/linux/latest/source/Documentation/virtual/kvm/amd-memory-encryption.rst)
+[Linux Kernel Memory Encryption Documentation (RST)](https://elixir.bootlin.com/linux/latest/source/Documentation/virtual/kvm/amd-memory-encryption.rst)
-[Linux kernel](https://elixir.bootlin.com/linux/latest/source/Documentation/x86/amd-memory-encryption.txt)
+[Linux Kernel Memory Encryption Documentation (TXT)](https://elixir.bootlin.com/linux/latest/source/Documentation/x86/amd-memory-encryption.txt)
-[Libvirt LaunchSecurity tag](https://libvirt.org/formatdomain.html#sev)
+[Libvirt LaunchSecurity Tag](https://libvirt.org/formatdomain.html#sev)
[Libvirt SEV domainCap](https://libvirt.org/formatdomaincaps.html#elementsSEV)
-[Qemu doc](https://git.qemu.org/?p=qemu.git;a=blob;f=docs/amd-memory-encryption.txt;h=f483795eaafed8409b1e96806ca743354338c9dc;hb=HEAD)
+[Qemu Memory Encryption Documentation](https://git.qemu.org/?p=qemu.git;a=blob;f=docs/amd-memory-encryption.txt;h=f483795eaafed8409b1e96806ca743354338c9dc;hb=HEAD)
+
+[Kata Architecture](https://github.com/kata-containers/documentation/blob/master/architecture.md)
+
+[Kata Developer Guide](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md)
# FAQ
- * How do I know if hypervisor supports SEV feature ?
-
- a) When using libvirt >= 4.15 run the following command
-
+ * **How do I know if my hypervisor supports the SEV feature?**
+
+   a) When using libvirt >= 4.5, run the following command as root:
+
```
# virsh domcapabilities
```
- If hypervisor supports SEV feature then sev tag will be present.
-
- >See [Libvirt DomainCapabilities feature](https://libvirt.org/formatdomaincaps.html#elementsSEV)
-for additional information.
-
- b) Use qemu QMP 'query-sev-capabilities' command to check the SEV support. If SEV is supported then command will return the full SEV capabilities (which includes host PDH, cert-chain, cbitpos and reduced-phys-bits).
-
- > See [QMP doc](https://github.com/qemu/qemu/blob/master/docs/devel/writing-qmp-commands.txt) for details on how to interact with QMP shell.
-
+
+ If the hypervisor supports the SEV feature, then the **sev** tag will be present.
+
+ > See [Libvirt DomainCapabilities feature](https://libvirt.org/formatdomaincaps.html#elementsSEV) for additional information.
+
+ b) Use the QMP 'query-sev-capabilities' command to check for SEV support. If SEV is supported, then the command will return the full SEV capabilities (which includes the host PDH, cert-chain, cbitpos and reduced-phys-bits).
+
+ > See [QMP doc](https://github.com/qemu/qemu/blob/master/docs/devel/writing-qmp-commands.txt) for details on how to interact with QMP shell.
+
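   For reference, the QMP exchange looks roughly like the following (the returned values are illustrative; cbitpos and reduced-phys-bits vary by platform):

```
-> { "execute": "qmp_capabilities" }
<- { "return": {} }
-> { "execute": "query-sev-capabilities" }
<- { "return": { "pdh": "...", "cert-chain": "...", "cbitpos": 47, "reduced-phys-bits": 1 } }
```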
- * How do I know if SEV is enabled in the guest ?
+ * **How do I know if SEV is enabled in the guest?**
- a) Check the kernel log buffer for the following message
+ a) Check the kernel log buffer for the following message:
+
```
# dmesg | grep -i sev
AMD Secure Encrypted Virtualization (SEV) active
```
-
- b) MSR 0xc0010131 (MSR_AMD64_SEV) can be used to determine if SEV is active
-
+
+ b) MSR 0xc0010131 (MSR_AMD64_SEV) can be used to determine if SEV is active:
+
```
# rdmsr -a 0xc0010131
```
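   Bit 0 of MSR_AMD64_SEV indicates whether SEV is active (bit 1 reports SEV-ES). A minimal sketch of decoding a reading, with a stand-in value in place of real rdmsr output:

```shell
#!/bin/sh
# sev_active decodes bit 0 (SEV active) of an MSR_AMD64_SEV value.
sev_active() {
    [ $(( $1 & 1 )) -eq 1 ]
}

# Example with a stand-in value; on a real guest, pass the rdmsr output.
sev_active 0x1 && echo "SEV active"
```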
@@ -376,26 +173,36 @@ for additional information.
- * Can I use virt-manager to launch SEV guest?
+ * **Can I use virt-manager to launch SEV guests?**
- virt-manager uses libvirt to manage VMs, SEV support has been added in libvirt but virt-manager does use the newly introduced [LaunchSecurity](https://libvirt.org/formatdomain.html#sev) tags yet hence we will not able to launch SEV guest through the virt-manager.
- > If your system is using libvirt >= 4.15 then you can manually edit the xml file to use [LaunchSecurity](https://libvirt.org/formatdomain.html#sev) to enable the SEV support in the guest.
+   virt-manager uses libvirt to manage VMs. SEV support has been added in libvirt, but virt-manager does not use the newly introduced [LaunchSecurity](https://libvirt.org/formatdomain.html#sev) tags yet. Hence, we will not be able to launch SEV guests through virt-manager.
+   > If your system is using libvirt >= 4.5, then you can manually edit the xml file to use [LaunchSecurity](https://libvirt.org/formatdomain.html#sev) to enable SEV support in the guest.
- * How to increase SWIOTLB limit ?
-
- When SEV is enabled, all the DMA operations inside the guest are performed on the shared memory. Linux kernel uses SWIOTLB bounce buffer for DMA operations inside SEV guest. A guest panic will occur if kernel runs out of the SWIOTLB pool. Linux kernel default to 64MB SWIOTLB pool. It is recommended to increase the swiotlb pool size to 512MB. The swiotlb pool size can be increased in guest by appending the following in the grub.cfg file
-
- Append the following in /etc/defaults/grub
-
-```
-GRUB_CMDLINE_LINUX_DEFAULT=".... swiotlb=262144"
-```
+ * **virtio-blk devices fail with an out-of-dma-buffer error!**
-And regenerate the grub.cfg.
+   To support multiqueue mode, the virtio-blk driver inside the guest allocates a large number of DMA buffers. SEV guests use SWIOTLB for DMA buffer allocation/mapping, so the kernel quickly exhausts the SWIOTLB pool and triggers an out-of-memory error. In those cases, consider [ increasing the SWIOTLB pool size ](#faq-5) or using a virtio-scsi device.
+ > NOTE: If the device containing the container rootfs image is changed from virtio-blk to virtio-scsi, then the kernel_params variable in /etc/kata-containers/configuration.toml must be updated with root=/dev/sda1 (instead of /dev/vda1). Otherwise, the container will appear to hang during startup.
+
+ The root device can be changed from the command line using sed:
+ ```
+ sudo sed -i -e "s/vda1/sda1/g" /etc/kata-containers/configuration.toml
+ ```
- * virtio-blk device runs out-of-dma-buffer error
-
- To support the multiqueue mode, virtio-blk drivers inside the guest allocates large number of DMA buffer. SEV guest uses SWIOTLB for the DMA buffer allocation or mapping hence kernel runs of the SWIOTLB pool quickly and triggers the out-of-memory error. In those cases consider increasing the SWIOTLB pool size or use virtio-scsi device.
-
+ * **How do I increase the SWIOTLB limit?**
+
+ When SEV is enabled, all DMA operations inside the guest must be performed on shared (i.e. unencrypted) memory. The Linux kernel uses SWIOTLB bounce buffers to meet this requirement. A guest panic will occur if the kernel exhausts the SWIOTLB pool. The Linux kernel defaults to a 64MB SWIOTLB pool. It is recommended to increase the SWIOTLB pool size to 512MB. The SWIOTLB pool size can be increased in the guest by appending the "swiotlb=" parameter to the Linux kernel command line in the configuration.toml file.
+
+ Append the "swiotlb=" parameter to the kernel_params variable in /etc/kata-containers/configuration.toml:
+
+ ```
+ kernel_params = " ... swiotlb=262144"
+ ```
+
+ Alternatively, this can be done from the command line using sed:
+
+ ```
+ sudo sed -i -e "s/^kernel_params = \"\(.*\)\"/kernel_params = \"\1 swiotlb=262144\"/g" /etc/kata-containers/configuration.toml
+ ```
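   The 262144 value is a slab count, not a byte count: the SWIOTLB pool is carved into 2KB slabs, so a 512MB pool needs 512MB / 2KB = 262144 slabs. The arithmetic can be checked in shell:

```shell
#!/bin/sh
# swiotlb= takes a slab count; each SWIOTLB slab is 2KB (1 << 11 bytes).
slabs=$(( 512 * 1024 * 1024 / 2048 ))
echo "swiotlb=${slabs}"
```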
+
diff --git a/distros/common.sh b/distros/common.sh
index 7b0d53908da0..091271f942ab 100755
--- a/distros/common.sh
+++ b/distros/common.sh
@@ -21,6 +21,8 @@ build_kernel()
if [ ! -d $BUILD_DIR/linux ]; then
run_cmd "mkdir -p ${BUILD_DIR}/linux"
run_cmd "git clone --single-branch -b ${KERNEL_COMMIT} ${KERNEL_GIT_URL} ${BUILD_DIR}/linux"
+	else
+		run_cmd "git -C ${BUILD_DIR}/linux checkout ${KERNEL_COMMIT}"
fi
pushd $BUILD_DIR/linux
@@ -42,7 +44,7 @@ build_kernel()
install_kernel()
{
pushd $BUILD_DIR
- run_cmd "dpkg -i *.deb"
+ run_cmd "sudo dpkg -i *.deb"
popd
}
@@ -57,7 +59,7 @@ build_install_ovmf()
fi
pushd $BUILD_DIR/edk2
- #run_cmd "make -C BaseTools"
+ run_cmd "make -C BaseTools"
. ./edksetup.sh --reconfig
run_cmd "nice build --cmd-len=64436 \
-DDEBUG_ON_SERIAL_PORT=TRUE \
@@ -68,9 +70,34 @@ build_install_ovmf()
-DSMM_REQUIRE \
-DSECURE_BOOT_ENABLE=TRUE \
-p OvmfPkg/OvmfPkgIa32X64.dsc"
- run_cmd "mkdir -p /usr/local/share/qemu"
- run_cmd "cp Build/Ovmf3264/DEBUG_GCC5/FV/OVMF_CODE.fd $*"
- run_cmd "cp Build/Ovmf3264/DEBUG_GCC5/FV/OVMF_VARS.fd $*"
+ run_cmd "sudo mkdir -p /usr/local/share/qemu"
+ run_cmd "sudo cp Build/Ovmf3264/DEBUG_GCC5/FV/OVMF_CODE.fd $*"
+ run_cmd "sudo cp Build/Ovmf3264/DEBUG_GCC5/FV/OVMF_VARS.fd $*"
+ popd
+}
+
+build_install_kata_ovmf()
+{
+ if [ ! -d $BUILD_DIR/edk2-kata ]; then
+ run_cmd "mkdir -p ${BUILD_DIR}/edk2-kata"
+ run_cmd "git clone ${EDK2_GIT_URL} ${BUILD_DIR}/edk2-kata"
+ pushd $BUILD_DIR/edk2-kata
+ run_cmd "git submodule update --init --recursive"
+ popd
+ fi
+
+ pushd $BUILD_DIR/edk2-kata
+ run_cmd "make -C BaseTools"
+ . ./edksetup.sh --reconfig
+ run_cmd "nice build --cmd-len=64436 \
+ -DDEBUG_ON_SERIAL_PORT=TRUE \
+ -n $(getconf _NPROCESSORS_ONLN) \
+ -a X64 \
+ -t GCC5 \
+ -p OvmfPkg/OvmfPkgX64.dsc"
+ run_cmd "sudo mkdir -p /usr/local/share/qemu"
+ run_cmd "sudo cp Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd $*/OVMF_CODE.fd.kata"
+ run_cmd "sudo cp Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd $*/OVMF_VARS.fd.kata"
popd
}
@@ -84,6 +111,229 @@ build_install_qemu()
pushd $BUILD_DIR/qemu
run_cmd "./configure --target-list=x86_64-softmmu --prefix=$*"
run_cmd "make -j$(getconf _NPROCESSORS_ONLN)"
- run_cmd "make -j$(getconf _NPROCESSORS_ONLN) install"
+ run_cmd "sudo make -j$(getconf _NPROCESSORS_ONLN) install"
+ popd
+}
+
+build_install_kata_qemu()
+{
+ # Remove 'https://' from the repo url to be able to clone the repo using 'go get'
+ QEMU_REPO=${QEMU_GIT_URL/https:\/\//}
+ PACKAGING_REPO="github.com/kata-containers/packaging"
+ QEMU_CONFIG_SCRIPT="${BUILD_DIR}/packaging/scripts/configure-hypervisor.sh"
+ PREFIX=/usr/local
+
+ if [ ! -d $BUILD_DIR/packaging ]; then
+ run_cmd "git clone https://${PACKAGING_REPO}.git $BUILD_DIR/packaging"
+ fi
+
+ if [ ! -d ${BUILD_DIR}/qemu ]; then
+ run_cmd "mkdir -p ${BUILD_DIR}/qemu"
+ run_cmd "git clone --single-branch -b ${QEMU_COMMIT} ${QEMU_GIT_URL} ${BUILD_DIR}/qemu"
+ fi
+
+ pushd "${BUILD_DIR}/qemu"
+ [ -d "capstone" ] || run_cmd "git clone https://github.com/qemu/capstone.git capstone"
+ [ -d "ui/keycodemapdb" ] || run_cmd "git clone https://github.com/qemu/keycodemapdb.git ui/keycodemapdb"
+
+ # Apply required patches
+ QEMU_PATCHES_PATH="${BUILD_DIR}/packaging/obs-packaging/qemu-lite/patches"
+ run_cmd "git am -3 ${QEMU_PATCHES_PATH}/*.patch"
+
+ echo "Build Qemu"
+ run_cmd "make clean"
+ PREFIX=${PREFIX} "${QEMU_CONFIG_SCRIPT}" "qemu" | xargs ./configure
+ run_cmd "make -j $(getconf _NPROCESSORS_ONLN)"
+
+ echo "Install Qemu"
+ run_cmd "sudo -E make install"
+ popd
+}
+
+install_go()
+{
+ go_version="${1:-"1.8"}"
+ install_dir="${2:-"/usr/local"}"
+
+ if which go; then
+ # A version of Go is installed already,
+ # so we can run the install script from the kata test repo
+ repo="github.com/kata-containers/tests"
+ run_cmd go get -d ${repo}
+ GOPATH="${GOPATH:-"${HOME}/go"}"
+ PATH=${PATH}:${GOPATH}/src/${repo}/.ci
+ run_cmd "sudo env GOPATH=${GOPATH} PATH=${PATH} install_go.sh -d ${install_dir} ${go_version}"
+ else
+ # Install Go manually
+ go_file="go${go_version}.linux-amd64.tar.gz"
+ run_cmd curl -Lo "${BUILD_DIR}/${go_file}" "https://storage.googleapis.com/golang/${go_file}"
+ run_cmd mkdir -p "${install_dir}"
+ run_cmd sudo tar -C "${install_dir}" -xzf "${BUILD_DIR}/${go_file}"
+ echo "Go ${go_version} successfully installed to ${install_dir}"
+ fi
+}
+
+install_kata()
+{
+ # If a kata config file exists, back it up
+ default_config=/usr/share/defaults/kata-containers/configuration.toml
+ config_file=/etc/kata-containers/configuration.toml
+ [ -f ${config_file} ] && run_cmd "sudo mv ${config_file} ${config_file}.orig"
+
+ # Install Go
+ go_dir=/usr/local
+ install_go 1.8 ${go_dir}
+ GOPATH=${HOME}/go
+
+ # Install the packaged kata binaries using kata-manager
+ repo="github.com/kata-containers/tests"
+ PATH=${PATH}:${GOPATH}/bin:${go_dir}/go/bin:${GOPATH}/src/${repo}/cmd/kata-manager
+ run_cmd "go get -d $repo"
+ run_cmd "sudo env PATH=${PATH} kata-manager.sh install-packages"
+ run_cmd "sudo mkdir -p /etc/kata-containers"
+ run_cmd "sudo cp ${default_config} ${config_file}"
+ kata_initrd=/usr/share/kata-containers/kata-containers-initrd.img
+ run_cmd "sudo env PATH=${PATH} kata-manager.sh configure-initrd ${kata_initrd}"
+ run_cmd "sudo env PATH=${PATH} kata-manager.sh enable-debug"
+ run_cmd "sudo env PATH=${PATH} kata-manager.sh install-docker-system"
+
+ # Build the kata-runtime with SEV support
+	curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
+ go get -d github.com/AMDESE/runtime
+ pushd $GOPATH/src/github.com/AMDESE/runtime
+ BRANCH="sev-v1.1.0"
+ if git branch | grep ${BRANCH}; then
+ run_cmd "git checkout ${BRANCH}"
+ run_cmd "git checkout Gopkg.toml"
+ else
+ run_cmd "git checkout -b ${BRANCH} origin/${BRANCH}"
+ fi
+ cat >> Gopkg.toml <<- EOF
+
+ [[override]]
+ name = "github.com/kata-containers/runtime"
+ source = "github.com/AMDESE/runtime"
+ branch = "sev-v1.1.0"
+
+ [[override]]
+ name = "github.com/intel/govmm"
+ source = "github.com/AMDESE/govmm"
+ branch = "sev-v1.1.0"
+
+ [[override]]
+ name = "github.com/intel-go/cpuid"
+ source = "github.com/AMDESE/cpuid"
+ branch = "sev"
+ EOF
+ run_cmd "tail -15 Gopkg.toml"
+ run_cmd "dep ensure"
+ run_cmd "make -j$(getconf _NPROCESSORS_ONLN)"
+ run_cmd "sudo -E PATH=$PATH make install"
+ popd
+}
+
+build_kata_kernel()
+{
+ if [ ! -d $BUILD_DIR/packaging ]; then
+ run_cmd "git clone https://github.com/kata-containers/packaging.git $BUILD_DIR/packaging"
+ fi
+
+ if [ ! -d $BUILD_DIR/linux/ ]; then
+ build_kernel
+ fi
+
+ pushd $BUILD_DIR/linux
+
+ if ! git branch -r | grep ${KATA_KERNEL_COMMIT}; then
+ run_cmd "git remote add -f -t ${KATA_KERNEL_COMMIT} kata ${KATA_KERNEL_GIT_URL}"
+ run_cmd "git checkout -b ${KATA_KERNEL_COMMIT} kata/${KATA_KERNEL_COMMIT}"
+ else
+ run_cmd "git checkout kata/${KATA_KERNEL_COMMIT}"
+ fi
+
+ run_cmd "cp $BUILD_DIR/packaging/kernel/configs/${KATA_KERNEL_CONFIG} .config"
+ ./scripts/config --enable CONFIG_AMD_MEM_ENCRYPT
+ ./scripts/config --enable AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
+ ./scripts/config --enable CONFIG_KVM_AMD_SEV
+ ./scripts/config --disable CONFIG_DEBUG_INFO
+ ./scripts/config --disable CRYPTO_DEV_SP_PSP # The PSP is not currently exposed to guests
+ ./scripts/config --disable CRYPTO_DEV_CCP_DD # Ditto for the CCP
+ ./scripts/config --disable CONFIG_CRYPTO_DEV_CCP # Same here
+ ./scripts/config --disable CONFIG_LOCALVERSION_AUTO
+ ./scripts/config --enable CONFIG_X86_PAT
+ ./scripts/config --disable CONFIG_CPU_SUP_INTEL
+ ./scripts/config --enable CONFIG_CPU_SUP_AMD
+ yes "" | make olddefconfig
+ run_cmd "make ARCH=x86_64 -j `getconf _NPROCESSORS_ONLN` LOCALVERSION=-${KATA_KERNEL_COMMIT}.container"
+ run_cmd "sudo cp vmlinux /usr/share/kata-containers/vmlinux-${KATA_KERNEL_COMMIT}.container"
+ run_cmd "sudo cp arch/x86_64/boot/bzImage /usr/share/kata-containers/vmlinuz-${KATA_KERNEL_COMMIT}.container"
popd
}
+
+configure_kata_runtime()
+{
+ config_file=/etc/systemd/system/docker.service.d/kata-containers.conf
+ runtime="\/usr\/local\/bin\/kata-runtime"
+
+ # Configure docker to use the SEV runtime
+ if [ -f ${config_file} ]; then
+ echo -n "Configuring ${config_file} for SEV..."
+ sudo sed -i -e \
+ "s/\(--add-runtime kata-runtime=[^ ]*kata-runtime\)/\1 --add-runtime sev-runtime=${runtime}/" ${config_file}
+ sudo sed -i -e "s/--default-runtime=kata-runtime/--default-runtime=sev-runtime/" ${config_file}
+ echo "Done."
+ run_cmd "sudo systemctl daemon-reload"
+ run_cmd "sudo systemctl restart docker"
+ fi
+
+ default_config=/usr/share/defaults/kata-containers/configuration.toml
+ config_file=/etc/kata-containers/configuration.toml
+ sev_qemu="\/usr\/local\/bin\/qemu-system-x86_64"
+ sev_machine="q35"
+ sev_kernel="\/usr\/share\/kata-containers\/vmlinuz-sev.container"
+ sev_kernel_params="root=\/dev\/vda1 rootflags=data=ordered,errors=remount\-ro"
+ sev_firmware="\/usr\/local\/share\/qemu\/OVMF_CODE\.fd.kata"
+ sev_blk_dev_drv="virtio-blk"
+
+ # Copy the default config to /etc
+ run_cmd "sudo cp ${default_config} ${config_file}"
+
+ echo -n "Configuring ${config_file} for SEV..."
+
+ # Pass the container rootfs via initrd
+ sudo sed -i "s/^\(image =.*\)/# \1/g" ${config_file}
+
+ # Set the SEV qemu
+ sudo sed -i "s/^path *=.*qemu.*\$/path = \"${sev_qemu}\"/g" $config_file
+
+ # Set the SEV machine type
+ sudo sed -i "s/^machine_type *=.*\$/machine_type = \"${sev_machine}\"/g" $config_file
+
+ # Set the SEV kernel
+ sudo sed -i "s/^kernel *=.*\$/kernel = \"${sev_kernel}\"/g" $config_file
+
+ # Set the SEV OVMF firmware
+ sudo sed -i "s/^firmware *=.*\$/firmware = \"${sev_firmware}\"/g" $config_file
+
+ # Set the default block device driver
+ sudo sed -i "s/^block_device_driver *=.*\$/block_device_driver = \"${sev_blk_dev_drv}\"/g" $config_file
+
+ # Enable memory encryption
+ sudo sed -i -e "s/^# *\(enable_mem_encryption\).*=.*$/\1 = true/g" $config_file
+
+ # When booting from the rootfs image, the rootfs is on the vda device
+ sudo sed -i -e "s/^kernel_params = \"\(.*\)\"/kernel_params = \"\1 ${sev_kernel_params[*]}\"/g" $config_file
+
+ # Enable all debug options
+ sudo sed -i -e "s/^# *\(enable_debug\).*=.*$/\1 = true/g" ${config_file}
+ sudo sed -i -e "s/^kernel_params = \"\(.*\)\"/kernel_params = \"\1 agent.log=debug initcall_debug\"/g" ${config_file}
+
+	# Remove any "//" occurrences
+ sudo sed -i -e "s/\/\//\//g" ${config_file}
+
+ echo "Done."
+}
+
diff --git a/distros/stable-commits b/distros/stable-commits
index a133f5a98487..2ecd2538b6a8 100644
--- a/distros/stable-commits
+++ b/distros/stable-commits
@@ -8,7 +8,7 @@ KERNEL_COMMIT=v4.17
# qemu commit
QEMU_GIT_URL=http://git.qemu.org/git/qemu.git
-QEMU_COMMIT=v2.12.0
+QEMU_COMMIT=v3.0.0
# guest bios
EDK2_GIT_URL=https://github.com/tianocore/edk2.git
@@ -16,3 +16,8 @@ EDK2_GIT_URL=https://github.com/tianocore/edk2.git
# libvirt commit
LIBVIRT_GIT_URL=https://libvirt.org/git/libvirt.git
LIBVIRT_COMMIT=v4.5.0
+
+# kata forks
+KATA_KERNEL_GIT_URL=https://github.com/AMDESE/linux.git
+KATA_KERNEL_COMMIT=sev
+KATA_KERNEL_CONFIG=x86_64_kata_kvm_4.14.x
diff --git a/distros/ubuntu-18.04/build-kata.sh b/distros/ubuntu-18.04/build-kata.sh
new file mode 100755
index 000000000000..24f820c5c85a
--- /dev/null
+++ b/distros/ubuntu-18.04/build-kata.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+. ../common.sh
+
+qemu_share=/usr/local/share/qemu
+
+# Install additional tools
+run_cmd "sudo apt-get -y install sudo curl systemd gnupg libelf-dev"
+
+# install kata containers
+install_kata
+build_kata_kernel
+build_install_kata_ovmf ${qemu_share}
+build_install_kata_qemu
+configure_kata_runtime
+
+cat << EOM
+***********************************************************************
+Kata Containers are installed and configured to use AMD SEV!
+
+As a test, start a busybox container like so:
+
+ $ sudo docker run -it busybox sh
+
+ / # dmesg | grep SEV
+ [ 0.001000] AMD Secure Encrypted Virtualization (SEV) active
+ [ 0.219196] SEV is active and system is using DMA bounce buffers
+
+Enjoy!
+
+EOM
diff --git a/distros/ubuntu-18.04/build.sh b/distros/ubuntu-18.04/build.sh
index 7da884873c3c..e1bc35b00fa1 100755
--- a/distros/ubuntu-18.04/build.sh
+++ b/distros/ubuntu-18.04/build.sh
@@ -2,9 +2,17 @@
. ../common.sh
+# host setup
+SRC_LIST=/etc/apt/sources.list.d/amdsev.list
+run_cmd "mkdir -p ${BUILD_DIR}"
+grep deb-src /etc/apt/sources.list | grep -v "arch=amd64" | sudo tee ${SRC_LIST}
+sudo sed -i -e "s/^\#\ *deb-src/deb-src/g" ${SRC_LIST}
+run_cmd "sudo apt-get update"
+
# build linux kernel image
-run_cmd "apt-get build-dep linux-image-$(uname -r)"
-run_cmd "apt-get install flex"
+UNAME=$(ls /lib/modules | grep generic | head -1)
+run_cmd "sudo apt-get -y build-dep linux-image-${UNAME}"
+run_cmd "sudo apt-get -y install flex bison fakeroot bc kmod cpio libssl-dev"
build_kernel
# install newly built kernel
@@ -12,10 +20,11 @@ install_kernel
# install qemu build deps
# build and install QEMU 3.0
-run_cmd "apt-get build-dep qemu"
+run_cmd "sudo apt-get -y build-dep qemu"
build_install_qemu "/usr/local"
-run_cmd "apt-get build-dep ovmf"
+run_cmd "sudo apt-get -y build-dep ovmf"
build_install_ovmf "/usr/local/share/qemu"
-run_cmd "cp ../launch-qemu.sh /usr/local/bin"
+run_cmd "sudo cp ../launch-qemu.sh /usr/local/bin"
+[ -f ${SRC_LIST} ] && run_cmd "sudo rm -f ${SRC_LIST}"