Common Sources of a Kernel Panic when Developing a Kernel Module

A common cause of errors like this one:

[896809] kernel panic - not syncing: Fatal exception

is bad memory allocation or usage. For instance:

  • You haven’t allocated enough memory and you try to access or store an element at a memory position that doesn’t exist; the result is a kernel panic. This is similar to what happens in userspace, except that instead of a kernel panic you would probably get a segmentation fault.
  • You have a pointer that is not currently pointing to any memory; it is NULL. If you try to dereference it with the ‘->’ operator, you will get a kernel panic as well (there is a minimal sketch of this case right after this list). In userspace, instead of a kernel panic, we would probably see a segmentation fault, or the program would simply freeze.
  • You have a fairly long loop that allocates kernel memory on every iteration, which may end up exhausting the available memory.
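
Just to illustrate the second case (the NULL pointer), here is a minimal, intentionally broken module sketch. The struct and all the names are made up by me, purely for illustration; loading something like this will typically oops the kernel and, depending on the configuration, end in a panic:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Hypothetical struct, just for illustration. */
struct item {
        int value;
};

static struct item *it; /* never allocated, so it stays NULL */

static int __init panic_demo_init(void)
{
        /* Dereferencing a NULL pointer in kernel space: this is the kind of
         * mistake that produces an oops/panic instead of the segmentation
         * fault we would get in userspace. */
        it->value = 42;
        return 0;
}

static void __exit panic_demo_exit(void)
{
}

module_init(panic_demo_init);
module_exit(panic_demo_exit);
MODULE_LICENSE("GPL");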

Hope this helps,

That’s all.

Kernel macros: likely() and unlikely()

The Linux kernel has an optimized way to check the validity of conditions: the likely(x) and unlikely(x) macros.

#define likely(x)      __builtin_expect(!!(x), 1)
#define unlikely(x)    __builtin_expect(!!(x), 0)

This may seem a little confusing, but the general idea I keep in mind is: likely(x) means “I expect x to be true”, and unlikely(x) means “I expect x to be false”. That helps me whenever I use these macros. Under the hood they expand to GCC’s __builtin_expect(), which tells the compiler which branch is expected so it can lay out the generated code accordingly. I’m not sure how much these built-in hints improve performance, but since they show up in so many pieces of the kernel’s code, I think they can help a lot. (I’m still quite a newbie in kernel programming :))
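
A typical usage pattern looks like the sketch below. Note that kmalloc(), kfree(), GFP_KERNEL and -ENOMEM are real kernel names, but store_value() itself is a hypothetical helper I made up just to show the pattern: error paths wrapped in unlikely(), the expected path in likely().

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/errno.h>

/* Hypothetical helper, just to illustrate the macros. */
static int store_value(int value)
{
        int *slot = kmalloc(sizeof(*slot), GFP_KERNEL);

        if (unlikely(!slot))        /* allocation failure should be rare */
                return -ENOMEM;

        if (likely(value >= 0))     /* callers almost always pass valid data */
                *slot = value;

        kfree(slot);
        return 0;
}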

Printing Integer Numbers with PRI* in C

If the processor is 64-bit and we want to print or read 32-bit (or 8-bit, etc.) integers, we can use the inttypes.h header, whose format macros explicitly tell printf() and scanf() which width we are using.

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint8_t  my_8bit_int  = 0x2A;   /* identifiers can't start with a digit */
    int32_t  my_32bit_int = -42;

    printf("Hexadecimal format %" PRIx8 "\n", my_8bit_int);
    printf("Unsigned format %" PRIu8 "\n", my_8bit_int);

    printf("Hexadecimal format %" PRIx32 "\n", (uint32_t)my_32bit_int);
    printf("Integer format %" PRIi32 "\n", my_32bit_int);

    return 0;
}

First three letters:
PRI for printf format
SCN for scanf format (not shown in the example above, but used in a similar way with scanf(); see the sketch below)

Fourth letter:
x for hexadecimal formatting
u for unsigned formatting
o for octal formatting
i for integer formatting
d for decimal formatting
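
For completeness, here is a small sketch of the SCN side, reading a 32-bit value back with scanf(); this is just my own example:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t value;

    /* SCNu32 expands to the correct length modifier for a 32-bit unsigned read */
    if (scanf("%" SCNu32, &value) == 1)
        printf("Read back %" PRIu32 "\n", value);

    return 0;
}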

GSO (Generic Segmentation Offload)

In this post, I’m just sharing the original text posted here.

GSO (“Generic Segmentation Offload”) is a performance optimization which is a generalisation of the concept of TSO.

It was added in Linux 2.6.18.

Taken from Herbert Xu’s posting on linux-netdev

Many people have observed that a lot of the savings in TSO come from traversing the networking stack once rather than many times for each super-packet. These savings can be obtained without hardware support. In fact, the concept can be applied to other protocols such as TCPv6, UDP, or even DCCP.

The key to minimising the cost in implementing this is to postpone the segmentation as late as possible. In the ideal world, the segmentation would occur inside each NIC driver where they would rip the super-packet apart and either produce SG (scatter/gather) lists which are directly fed to the hardware, or linearise each segment into pre-allocated memory to be fed to the NIC. This would eliminate segmented skb’s altogether.

Unfortunately this requires modifying each and every NIC driver so it would take quite some time. A much easier solution is to perform the segmentation just before the entry into the driver’s xmit routine. This concept is called GSO: Generic Segmentation Offload.

Herbert Xu has also posted some numbers on the performance gains by doing this:

The test was performed through the loopback device which is a fairly good approximation of an SG-capable NIC. GSO, like TSO, is only effective if the MTU is significantly less than the maximum value of 64K. So only the case where the MTU was set to 1500 is of interest. There we can see that the throughput improved by 17.5% (3061.05Mb/s => 3598.17Mb/s). The actual saving in transmission cost is in fact a lot more than that as the majority of the time here is spent on the RX side which still has to deal with 1500-byte packets.

The worst-case scenario is where the NIC does not support SG and the user uses write(2) which means that we have to copy the data twice. The files gso-off/gso-on provide data for this case (the test was carried out on e100). As you can see, the cost of the extra copy is mostly offset by the reduction in the cost of going through the networking stack.

For now GSO is off by default but can be enabled through ethtool. It is conceivable that with enough optimisation GSO could be a win in most cases and we could enable it by default.

However, even without enabling GSO explicitly it can still function on bridged and forwarded packets. As it is, passing TSO packets through a bridge only works if all constituents support TSO. With GSO, it provides a fallback so that we may enable TSO for a bridge even if some of its constituents do not support TSO.

This provides massive savings for Xen as it uses a bridge-based architecture and TSO/GSO produces a much larger effective MTU for internal traffic between domains.

Friendship in C++

Hi,

C++ allows users to define a “friendzone” made up of functions or classes. The purpose of a friend in C++ is to make the private and protected sections of a class accessible from code outside that class (outside its scope).

In other words:

“In principle, private and protected members of a class cannot be accessed from outside the same class in which they are declared. However, this rule does not apply to ‘friends’” (cplusplus.org)

“A friend class in C++ can access the ‘private’ and ‘protected’ members of the class in which it is declared as a friend” (Wikipedia)

Simple example (original here):

class B {
    friend class A; /* A is a friend of B */
 
private:
    int i;
protected:
    int j;
public:
    int p;
};
 
class A {
public:
    A(B b) {
        b.i = 0; /* legal access due to friendship */
        b.j = 0; /* legal access due to friendship */
        b.p = 0; /* legal access, but not because of friendship:
                    p is accessible because it is declared public in B */
    }
};

Class B has a private member named “i”, which would only be accessible from within B itself if A were not a friend of B. Since A is a friend of B, A can access everything in B.

That’s all,

Virtual Function or Method in C++

Hi,

As an object-oriented programming language, C++ supports polymorphism. How do we achieve that? Basically, we declare a virtual function in a base class (possibly without an implementation, i.e. a pure virtual function) and a derived class overrides it; the derived class is what actually gives meaning to the virtual function declared in the base class.

Here is a simple but good example of the use of the keyword virtual in C++ (originally posted here):

Without “virtual” you get “early binding”. Which implementation of the method is used gets decided at compile time based on the type of the pointer that you call through.

With “virtual” you get “late binding”. Which implementation of the method is used gets decided at run time based on the type of the pointed-to object – what it was originally constructed as. This is not necessarily what you’d think based on the type of the pointer that points to that object.

#include <iostream>

class Base
{
  public:
            void Method1 ()  {  std::cout << "Base::Method1" << std::endl;  }
    virtual void Method2 ()  {  std::cout << "Base::Method2" << std::endl;  }
};

class Derived : public Base
{
  public:
    void Method1 ()  {  std::cout << "Derived::Method1" << std::endl;  }
    void Method2 ()  {  std::cout << "Derived::Method2" << std::endl;  }
};

int main ()
{
    /* Note - constructed as Derived, but pointer stored as Base*  */
    Base* obj = new Derived ();

    obj->Method1 ();  //  Prints "Base::Method1"    (early binding, non-virtual)
    obj->Method2 ();  //  Prints "Derived::Method2" (late binding, virtual)

    return 0;
}

That’s all,

Coloring Standard Output from C Code

Hi, everyone, it’s been a while.

Today I’m posting my last post of this month. It’s nothing about configuration, but it might help you when debugging C code. I’m talking about coloring standard output text. You know, as we go through the debugging process, it is sometimes useful to print out the value of a variable just to make sure it holds what we expect, but when there are many printed lines we can lose readability.

Fortunately, we can color our printed lines, and there is a simple way to do it in C. It is much easier to spot what we are looking for when the print is colored. For that, we only need to use an ANSI escape sequence like “\x1b[31m” when calling, for instance, printf(). I know it is hard to remember such sequences of characters… it had been a long time since I last used them.

Recently, I needed them again. However, I couldn’t remember the special sequence of characters and, in order to recall it, I went to Google :D. And, without really looking for it, I found a cool way here to color my prints.

I’m putting some examples below.

#define ANSI_COLOR_RED     "\x1b[31m"
#define ANSI_COLOR_GREEN   "\x1b[32m"
#define ANSI_COLOR_YELLOW  "\x1b[33m"
#define ANSI_COLOR_BLUE    "\x1b[34m"
#define ANSI_COLOR_MAGENTA "\x1b[35m"
#define ANSI_COLOR_CYAN    "\x1b[36m"
#define ANSI_COLOR_RESET   "\x1b[0m"  

Once they are defined, we are free to use them.

printf(ANSI_COLOR_CYAN "Colored line" ANSI_COLOR_RESET);
printf(ANSI_COLOR_CYAN "Colored line with integer argument %d" ANSI_COLOR_RESET, 5);
printf(ANSI_COLOR_CYAN "Colored line with integer argument %d and new line" ANSI_COLOR_RESET "\n", 10);
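
Putting it all together, here is a minimal complete program; it assumes nothing beyond an ANSI-capable terminal, and the variable is just a made-up debug value:

#include <stdio.h>

#define ANSI_COLOR_RED     "\x1b[31m"
#define ANSI_COLOR_GREEN   "\x1b[32m"
#define ANSI_COLOR_RESET   "\x1b[0m"

int main(void)
{
    int sensor_value = 42; /* hypothetical value we want to watch while debugging */

    /* Color only the value so it stands out among many debug lines */
    printf("sensor reading: " ANSI_COLOR_RED "%d" ANSI_COLOR_RESET "\n", sensor_value);
    printf(ANSI_COLOR_GREEN "everything looks fine" ANSI_COLOR_RESET "\n");

    return 0;
}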

That’s all,