Kernel Development Environment Setup

Preface

I have recently been looking into LKM (Loadable Kernel Module) development, and tried generating kernel and rootfs images with both Yocto and Buildroot, running them under QEMU. Compared with Buildroot, Yocto offers more ready-to-use tools, but after setting it up I found I could not compile LKMs against it, so I switched to Buildroot. Buildroot is lightweight and fast, and running it inside Docker means there is no risk of breaking the host environment; there are some differences between kernel versions, but once those are resolved the result runs fine under QEMU. LKM code can be compiled on the host machine and transferred into the VM for installation. This post records the kernel development environment setup; since the notes were taken on Linux, the rest was written directly in English.

Yocto+QEMU

Information

  • Yocto version: 3.1.2
  • host system: Ubuntu 20.04
  • host architecture: x86-64
  • target architecture: x86-64
  • workdir: any_dir/qemu/

Installing QEMU

Use apt to install QEMU on Ubuntu.

sudo apt install qemu

Configuring Yocto

Install the toolchain and download the images.

  1. toolchain
    For quick development, we choose not to compile Yocto from source but to use pre-built images instead.
    Thus, we can install the stand-alone cross-development toolchain for simplicity.
    wget http://downloads.yoctoproject.org/releases/yocto/yocto-3.1.2/toolchain/x86_64/poky-glibc-x86_64-core-image-sato-core2-64-qemux86-64-toolchain-3.1.2.sh
    chmod +x poky-glibc-x86_64-core-image-sato-core2-64-qemux86-64-toolchain-3.1.2.sh
    sudo ./poky-glibc-x86_64-core-image-sato-core2-64-qemux86-64-toolchain-3.1.2.sh

Poky should now be installed in /opt/poky/3.1.2.
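
As a quick sanity check, we can source the freshly installed environment script and query the cross compiler (a sketch; the script path matches the installation above, and $CC is set by the environment script):

source /opt/poky/3.1.2/environment-setup-core2-64-poky-linux
echo $CC
$CC --version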

The toolchain installation file name is composed of several parts. Here is the official explanation.

poky-glibc-host_system-image_type-arch-toolchain-release_version.sh
Where:
host_system is a string representing your development system:
i686 or x86_64.

image_type is a string representing the image you wish to
develop a Software Development Toolkit (SDK) for use against.
The Yocto Project builds toolchain installers using the
following BitBake command:
bitbake core-image-sato -c populate_sdk

arch is a string representing the tuned target architecture:
i586, x86_64, powerpc, mips, armv7a or armv5te

release_version is a string representing the release number of the
Yocto Project:
1.8, 1.8+snapshot

I could not find an exact match (x86_64) for arch, so I used core2 as the value of arch; it works for my case though.

  2. pre-built image files
    A Yocto Linux system needs a kernel and a filesystem to boot, both of which are distributed as image files. The kernel is the core of Linux, while the filesystem contains the files and tools needed during startup and for user development.

x86-64 pre-built images can be found in the http://downloads.yoctoproject.org/releases/yocto/yocto-3.1.2/machines/ web directory, which contains one kind of kernel image (a .bin file) and several kinds of filesystems: minimal, minimal-dev, sato, sato-dev, sato-sdk, sato-sdk-ptest. Each filesystem is packaged in three formats: ext4, wic, and tar.bz2. We choose ext4 because QEMU can use it without extraction.

Download kernel image into workdir.

wget http://downloads.yoctoproject.org/releases/yocto/yocto-3.1.2/machines/qemu/qemux86-64/bzImage-qemux86-64.bin

Download the filesystem image into the workdir. Here we use the sato-sdk filesystem because it provides GUI and SSH access, along with gdb, which is quite useful during kernel development.

wget http://downloads.yoctoproject.org/releases/yocto/yocto-3.1.2/machines/qemu/qemux86-64/core-image-sato-sdk-qemux86-64.ext4

Emulating with QEMU

  1. bind pre-built images with qemu
    Download the qemu config file into the workdir. Older releases did not contain this config file, but in newer versions it is required when running Yocto with qemu.
    wget http://downloads.yoctoproject.org/releases/yocto/yocto-3.1.2/machines/qemu/qemux86-64/core-image-sato-sdk-qemux86-64.qemuboot.conf

The qb_mem option in this file sets the memory allocated to the virtual machine, 512 MB by default. We can change it if needed.
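
For reference, the relevant line inside the .qemuboot.conf should look roughly like this (keys in this file are lowercase; the exact default value may differ between releases):

qb_mem = -m 512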

Now we have three files in the workdir.

houwd@Desktop:~/Desktop/qemu$ ls
bzImage-qemux86-64.bin
core-image-sato-sdk-qemux86-64.ext4
core-image-sato-sdk-qemux86-64.qemuboot.conf

  2. set up emulation environment
    source /opt/poky/3.1.2/environment-setup-core2-64-poky-linux
  3. run qemu
    The Poky environment script registers the runqemu command in the current shell. We can start the emulation using the following command.
    runqemu .

A QEMU window should appear, and we can now interact with the Yocto system GUI.

Basic usage

SSH is recommended in case the GUI does not work smoothly.
We can also transfer files using scp.

ssh root@192.168.7.2
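
For example, to copy a file into the VM (myfile is a placeholder here; /home/root is root's home directory on Yocto images):

scp ./myfile root@192.168.7.2:/home/root/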

The root password is disabled (empty) in this image, so no password is needed when logging in as root.
gcc, g++, make, and other compilation tools are pre-installed in the sato-sdk filesystem.
gdb, gdbserver, trace, and strace are also pre-installed in the sato-sdk filesystem.

LKM development

According to the official documentation, we can develop new kernel modules, called out-of-tree modules, after generating the kernel build scripts on the target:

cd /usr/src/kernel
make scripts
cd /lib/modules/`uname -r`/build
make scripts
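
In principle, a module in the current directory would then be built on the target with the standard kbuild out-of-tree invocation (a sketch of the usual pattern, not something that worked for me on this image):

make -C /lib/modules/`uname -r`/build M=$PWD modules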

However, I was still unable to build kernel modules this way.
Try Buildroot instead!

Conclusion

You can use other filesystems: the minimal filesystem does not have SSH installed, and only the sdk filesystems provide gdb.

Buildroot+QEMU

Information

  • Buildroot version: 2020.02.6
  • build environment: Docker container (ubuntu:20.04)
  • target architecture: x86-64
  • Linux kernel version: 5.4.61

Buildroot Compilation

I use a Docker container as the build environment. Here is the directory structure.

- linux-kernel
| - Dockerfile
| - docker-compose.yml
| - buildroot-2020.02.6
| ......

Dockerfile

FROM ubuntu:20.04

RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    apt update && \
    apt install -y bc curl gcc git libssl-dev libncurses5-dev lzop make u-boot-tools flex bison file g++ wget cpio unzip rsync python3

WORKDIR /mnt

docker-compose.yml

version: "3"
services:
  ubuntu:
    build: ./
    volumes:
      - "./buildroot-2020.02.6:/mnt/buildroot-2020.02.6"
      - "./linux-5.4.66.tar:/mnt/linux-5.4.66.tar"
    command: "tail -F /dev/null"
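
To build the image and get a shell inside the container, something like the following should work (the service name ubuntu comes from the compose file above):

docker-compose up -d --build
docker-compose exec ubuntu bash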

Enter the Docker container environment, cd into the Buildroot directory, and start configuring.

cd buildroot-2020.02.6
make list-defconfigs        # list supported platforms
make qemu_x86_64_defconfig  # configuration written to /mnt/buildroot-2020.02.6/.config
make menuconfig             # configure

Use BusyBox as the init system.

> System configuration > Init system > BusyBox

Enable make for later development.

> Target packages > Development tools > make

Disable the QEMU host package since we have already installed QEMU on the host machine.

> Host utilities > host qemu

Download the source code, then generate the kernel image and rootfs image.

make FORCE_UNSAFE_CONFIGURE=1  # start the build
# I'm running as root in the Docker container, so FORCE_UNSAFE_CONFIGURE=1 is needed.

Configure the Linux kernel details. Enable PCI support to use the virtio disk. Enable IA32 Emulation, otherwise the kernel cannot detect /dev/vda.

make linux-menuconfig
> Device Drivers > PCI support
> Binary Emulation > IA32 Emulation

If the kernel fails to mount the root fs in QEMU, check that the following configuration parameters are enabled.
Use / in menuconfig to search for parameters.

CONFIG_EXT4_FS=y
CONFIG_IA32_EMULATION=y
CONFIG_VIRTIO_PCI=y (Virtualization -> PCI driver for virtio devices)
CONFIG_VIRTIO_BALLOON=y (Virtualization -> Virtio balloon driver)
CONFIG_VIRTIO_BLK=y (Device Drivers -> Block -> Virtio block driver)
CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support -> Virtio network driver)
CONFIG_VIRTIO=y (automatically selected)
CONFIG_VIRTIO_RING=y (automatically selected)
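
A quick way to verify them is to grep the generated .config directly, run from the Buildroot top directory (assuming the kernel build directory used later in this post):

grep -E 'CONFIG_EXT4_FS|CONFIG_IA32_EMULATION|CONFIG_VIRTIO' output/build/linux-5.4.61/.config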

Emulating with QEMU

Change into the Buildroot image output directory on the host machine.

cd any/buildroot-2020.02.6/output/images && ls -lh
total 26M
-rw-r--r-- 1 root root 4.3M Sep 29 18:45 bzImage
-rw-r--r-- 1 root root  60M Sep 29 18:56 rootfs.ext2
lrwxrwxrwx 1 root root   11 Sep 29 17:20 rootfs.ext4 -> rootfs.ext2

Now we have bzImage as the kernel image, and rootfs.ext2 / rootfs.ext4 (a symlink to rootfs.ext2) as the root filesystem.

Start QEMU with TCP forwarding enabled. VM port 22 is forwarded to host port 5555 for SSH connections.

sudo qemu-system-x86_64 -kernel bzImage -drive file=rootfs.ext4,if=virtio,format=raw \
    -netdev user,id=net0,net=192.168.76.0/24,dhcpstart=192.168.76.9,hostfwd=tcp::5555-:22 \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 \
    -cpu core2duo -m 2048 \
    -append 'root=/dev/vda rw console=ttyS0' -enable-kvm

OpenSSH does not allow remote root login by default, so we need to change sshd_config and restart sshd in the QEMU window.

vi /etc/ssh/sshd_config
# add the following line
PermitRootLogin yes

/etc/init.d/S50sshd restart
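
Alternatively, the edit can be scripted, assuming the stock config still contains the commented-out default line:

sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
/etc/init.d/S50sshd restart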

SSH into the VM.

ssh root@localhost -p 5555

LKM Development

The Buildroot system does not have gcc or g++ installed, so we need to compile kernel modules on the host machine and scp the .ko file into the VM.
Kernel sources and headers are needed to compile kernel modules. Usually those files can be installed through a package manager (e.g., the linux-headers packages on apt-based distributions).
Our kernel source is placed in any/buildroot-2020.02.6/output/build/linux-5.4.61, which should be referenced in the LKM Makefile.

KDIR = ~/Desktop/linux-kernel/buildroot-2020.02.6/output/build/linux-5.4.61

all:
	make -C $(KDIR) M=`pwd` modules

clean:
	make -C $(KDIR) M=`pwd` clean

Kbuild file:

KVERSION = 5.4.61
obj-m = hello.o

hello.c

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int hello_init(void)
{
	printk(KERN_INFO "Hello from hello\n");
	return 0;
}

static void hello_exit(void)
{
	printk(KERN_INFO "Good bye\n");
}

module_init(hello_init);
module_exit(hello_exit);

Compile the module to generate the .ko file.

$ make
make -C ~/Desktop/linux-kernel/buildroot-2020.02.6/output/build/linux-5.4.61 M=`pwd` modules
make[1]: Entering directory '/home/houwd/Desktop/linux-kernel/buildroot-2020.02.6/output/build/linux-5.4.61'
  CC [M]  /home/houwd/Desktop/hello_test/hello.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC [M]  /home/houwd/Desktop/hello_test/hello.mod.o
  LD [M]  /home/houwd/Desktop/hello_test/hello.ko
make[1]: Leaving directory '/home/houwd/Desktop/linux-kernel/buildroot-2020.02.6/output/build/linux-5.4.61'
$ ls
hello.c   hello.mod    hello.mod.o  Kbuild    modules.order
hello.ko  hello.mod.c  hello.o      Makefile  Module.symvers

scp hello.ko into the VM's /root directory and install it in the VM.

$ scp -P 5555 hello.ko root@localhost:/root
root@localhost's password:
hello.ko 100% 3416 2.1MB/s 00:0

cd /root
insmod hello.ko && lsmod | grep hello
rmmod hello.ko

Check kernel messages generated by our LKM.

$ dmesg | tail -5
[51256.846327] tsc: Marking TSC unstable due to clocksource watchdog
[53923.247828] hello: loading out-of-tree module taints kernel.
[53923.250505] Hello from hello
[53945.360817] Good bye
[54098.636185] Hello from hello

Success!

I use a shell script for quick redeployment.

#!/bin/bash
# rmmod is allowed to fail on the first deployment, when the module is not yet loaded
make && \
scp -P 5555 hello.ko root@localhost:/root && \
ssh root@localhost -p 5555 "rmmod hello 2>/dev/null; insmod /root/hello.ko && lsmod | grep hello && dmesg | tail -10"

For reference, here is an alternative QEMU invocation that uses tap networking and a static guest IP instead of user-mode networking:

sudo qemu-system-x86_64 -kernel bzImage -drive file=rootfs.ext4,if=virtio,format=raw \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -show-cursor -usb -device usb-tablet \
    -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 \
    -cpu core2duo -m 2048 -serial mon:vc -serial null \
    -append 'root=/dev/vda rw mem=2048M ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1' -enable-kvm

We can use kernel modules to modify or extend functionality in kernel space, such as implementing character/block device drivers, hooking syscalls, or setting up network filters (using netfilter); a minimal sketch of the netfilter case follows below.
Character/block device drivers are created by implementing several file operation methods and registering the result as a device.
A syscall hook can be created with the following steps: find the address of the syscall table, clear the write-protect bit of the CR0 register to allow modification of the table, replace the target function in the table with your own, and set the write-protect bit again.
Network filters can be implemented easily with netfilter, which is built into the kernel by default; iptables also uses netfilter as its backend.
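
As an illustration of the netfilter case, here is a minimal sketch of a module that drops incoming ICMP packets; it builds with the same Makefile/Kbuild setup as hello.c above. All names here are illustrative, and the hook API shown is the one used by kernel 5.4.

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <net/net_namespace.h>

MODULE_LICENSE("GPL");

/* Hook callback: inspect each incoming IPv4 packet. */
static unsigned int drop_icmp(void *priv, struct sk_buff *skb,
                              const struct nf_hook_state *state)
{
	struct iphdr *iph = ip_hdr(skb);

	if (iph && iph->protocol == IPPROTO_ICMP) {
		printk(KERN_INFO "dropping ICMP packet\n");
		return NF_DROP;
	}
	return NF_ACCEPT;
}

static struct nf_hook_ops nfho = {
	.hook     = drop_icmp,
	.pf       = NFPROTO_IPV4,
	.hooknum  = NF_INET_PRE_ROUTING, /* before any routing decision */
	.priority = NF_IP_PRI_FIRST,
};

static int __init nf_example_init(void)
{
	return nf_register_net_hook(&init_net, &nfho);
}

static void __exit nf_example_exit(void)
{
	nf_unregister_net_hook(&init_net, &nfho);
}

module_init(nf_example_init);
module_exit(nf_example_exit);

After insmod, pinging the VM should fail; after rmmod, it should work again.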

GDB debugging

We can use QEMU together with GDB to debug the kernel.
Start QEMU with the -s option to start a gdbserver on localhost:1234, and with -S to freeze the CPU at startup.

sudo qemu-system-x86_64 -kernel bzImage -drive file=rootfs.ext4,if=virtio,format=raw \
    -netdev user,id=net0,net=192.168.76.0/24,dhcpstart=192.168.76.9,hostfwd=tcp::5555-:22 \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 \
    -cpu core2duo -m 2048 \
    -append 'root=/dev/vda rw console=ttyS0' \
    -s -S

In another terminal, start GDB with the vmlinux file.

gdb any/buildroot-2020.02.6/output/build/linux-5.4.61/vmlinux
(gdb) target remote localhost:1234  # connect to the QEMU gdbserver
(gdb) b start_kernel
(gdb) c

With KVM enabled, gdb is unable to stop at the start_kernel breakpoint.
Currently I don't need GDB as a debugger; if you do, run QEMU without -enable-kvm, and note that many BIOS firmwares also let you toggle the virtualization extensions that KVM relies on.
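
If you do want breakpoints while KVM is on, hardware breakpoints in gdb usually still work:

(gdb) hbreak start_kernel
(gdb) c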
