Create a Linux kernel module

Jun 20, 2024 by Thibault Debatty


https://cylab.be/blog/344/create-a-linux-kernel-module

In a previous blog post, I presented how to build your own Linux kernel. This time I will show how to create, compile and load a very simple kernel module...


Prerequisites

To compile your kernel module, you'll need a compile toolchain:

sudo apt install -y build-essential libncurses-dev flex bison libelf-dev libssl-dev

And you'll also need the headers of the target kernel. If you are using the stock Ubuntu kernel, and want to compile the module for yourself, you can install these headers with:

sudo apt install linux-headers-$(uname -r)

These headers are installed in /lib/modules/$(uname -r)/build/.
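As a quick sanity check, you can print the headers location for your running kernel and verify that the kernel's top-level Makefile is there (the `KDIR` variable name is just a convention, not part of the headers package):

```shell
# Path where kbuild expects the headers of the running kernel
KDIR="/lib/modules/$(uname -r)/build"
echo "$KDIR"

# If the headers package is installed, the kernel's top-level Makefile is present:
[ -f "$KDIR/Makefile" ] && echo "headers found" || echo "headers missing"
```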


Module source code

We can now create the source code of our simple module, testmod.c:

#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("Dual MIT/GPL");

static int __init testmod_init(void)
{
    printk(KERN_INFO "Hi there!\n");
    return 0;
}

static void __exit testmod_exit(void)
{
    printk(KERN_INFO "Exit!\n");
}

module_init(testmod_init);
module_exit(testmod_exit);

In this simple example:

  • the MODULE_LICENSE is required;
  • we use the module_init and module_exit macros to define the init and exit functions of our module;
  • the methods themselves are pretty simple and use printk to output a kernel message.

Building

The kernel source tree already contains a Makefile that can compile out-of-tree kernel modules (the make modules target). This Makefile is also shipped with the kernel headers (which we installed in the Prerequisites). So we will create a simple Makefile of our own that:

  • changes directory to the kernel headers directory (-C $(HEADERS));
  • points the kernel build system at our module's source directory (M=$(PWD));
  • sets the obj-m variable so that only our module is compiled.

Here is the content of our Makefile:

PWD := $(shell pwd)
HEADERS := /lib/modules/$(shell uname -r)/build/
obj-m := testmod.o

modules:
	$(MAKE) -C $(HEADERS) M=$(PWD) modules

clean:
	$(MAKE) -C $(HEADERS) M=$(PWD) clean

We can now compile the module with:

make modules
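Under the hood, the modules target simply delegates to the kernel's own build system. On a stock Ubuntu kernel, the expanded sub-make command looks like this (we just echo it here for illustration):

```shell
# Show the kbuild invocation that our Makefile's `modules` target expands to
echo "make -C /lib/modules/$(uname -r)/build M=$PWD modules"
```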


Testing

We can now try to load our module with

sudo insmod testmod.ko

If all went well, the "init" message of our module will appear at the end of dmesg:

sudo dmesg


Keep in mind that you can only load a kernel module if it was compiled against the headers of the exact same kernel (same version and same configuration options). If you wish to distribute an out-of-tree kernel module, you must either:

  • compile the module for every possible target kernel or
  • use the Dynamic Kernel Module Support (DKMS) framework to automatically compile the module on the target.
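As an illustration of the second option, a minimal dkms.conf for this module could look like the following sketch (the package name, version and destination are assumptions, not part of this post):

```shell
# dkms.conf -- hypothetical example for our testmod module
PACKAGE_NAME="testmod"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="testmod"
DEST_MODULE_LOCATION[0]="/updates"
MAKE[0]="make modules"
CLEAN="make clean"
AUTOINSTALL="yes"
```

With this file next to testmod.c and the Makefile, DKMS can rebuild the module automatically whenever the target machine boots a new kernel.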

You can also unload the module with

sudo rmmod testmod
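To check whether the module is currently loaded, you can query /proc/modules (the same file that lsmod reads). The small is_loaded helper below is a hypothetical convenience, not part of the original post:

```shell
# Report whether a given module is currently loaded, by name
is_loaded() {
    if grep -q "^$1 " /proc/modules; then
        echo "$1 is loaded"
    else
        echo "$1 is not loaded"
    fi
}

is_loaded testmod
```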

Final words

This is a very simple example, but I hope it helps you understand how the Linux kernel works. I will probably supplement this topic with other blog posts about module auto-loading, DKMS and more...

This blog post is licensed under CC BY-SA 4.0
