
The Linux Kernel and Linux Distributions

Whenever a new kernel comes out, there's a lag between when it's adopted by those who don't mind compiling it themselves and when it reaches those who are waiting to get it bundled in an already-tested package from the maintainers of their distributions. In part, the delay is just the result of the difference between those who live on the edge and those who stick with the tried-and-true, but could it be shortened by reducing the work that the distributions have to do to adopt the new kernel? In today's editorial, Jeff Garzik of MandrakeSoft describes the process of fitting the two together.

Update: Developers from Conectiva have written to share their thoughts on the subject.

Readying a distribution for a new release of the kernel at MandrakeSoft is, as at other open projects, a constant process of refinement, testing, and interaction with non-MandrakeSoft Open Source developers.

Mandrake's bleeding-edge development distribution, "cooker", is always available to outside testers on the Internet. When a new kernel appears on the horizon, the first step is to package the new development kernel in a special RPM called "hackkernel". (Mandrake uses the "hack" prefix to indicate a development/unstable version of a package.) Once hackkernel is packaged, we can begin testing the distribution with the newer kernel. Testing involves hardware testing at MandrakeSoft's labs, but, even more importantly, it involves the cooker users out on the Internet. They are our biggest resource; no amount of internal testing can replace beta testing on the Internet at large.
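
(As a concrete aside for cooker testers: the naming convention makes it easy to see which of the two kernel packages is on a box. The little script below is purely illustrative and not a MandrakeSoft tool; it only wraps a plain "rpm -q" query in Python.)

    # Illustrative only: ask the RPM database which kernel packages are present.
    import subprocess

    def installed(package):
        """Return True if `rpm -q` reports the package as installed."""
        return subprocess.call(["rpm", "-q", package],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    # The "hack" prefix marks the development/unstable build of a package.
    for name in ("kernel", "hackkernel"):
        print(name, "is installed" if installed(name) else "is not installed")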

Once the feedback starts arriving from cooker users and internal Q/A, the process of refining the distribution for the new kernel begins. This process involves the evaluation of each and every package to determine how each can best take advantage of the new kernel while still retaining full compatibility with the older kernels. The package changes which make the distribution better suited to the new kernel can come from many sources: internal developers, Open Source developers out on the Internet, and other distributions. These changes are all integrated, in patch form, into each RPM package. These updates are posted on cooker, and the process of refinement begins again.

The final step in readying a distribution for a new kernel is to update the installer. Since each distribution has its own installer (for the most part), this is typically a MandrakeSoft-only affair. (However, all our updates, including installer updates, are GPLed and freely available to all.) Eventually, based on cooker user and internal Q/A feedback and on schedules, the cooker distribution is frozen into a stable form and given a release code name like "oxygen" or "odyssey." After the freeze, no new additions are accepted, including kernel changes. Any final incompatibilities between the new kernel and other packages are smoothed out at this point. Finally, the CDs are pressed, and ISOs are uploaded to the Internet for all to enjoy.

The dialogue between MandrakeSoft and Open Source developers and users is constant. New changes to the kernel that appear on the linux-kernel mailing list and elsewhere on various project Web sites must be evaluated and tested. Bugfixes and features from MandrakeSoft must be fed back to the maintainers of the Open Source projects on the Internet. In the case of the kernel itself, this usually involves sending patches to Linus and/or the linux-kernel mailing list.

Linux-Mandrake is not unlike many established Open Source projects. The process of development, integration of new kernels, and testing is a constant cycle of refinement.

Editor's note:

I'd like to hear any of your thoughts about the relationship between the kernel and the distributions built around it. If you're a user, does the wait for a new version of your distribution with the new kernel and features you need frustrate you, or do you just compile a new kernel on your own and hope the distribution is ready to handle it? If you're a distribution maintainer, what work do you have to do to accommodate a new version of the kernel? Are the communication channels between you and the kernel developers good enough that you get all the information you need in a timely manner? Is there anything that could be done to make the job easier for you? Do you find yourself spending much time talking to third parties who haven't made the necessary changes to their software to work with the new kernel, and convincing them to do it? Is there anything else the community could do to shorten the time between a kernel release and its inclusion in the distributions? Do you have any thoughts on distributions that use patches that haven't made it into the official kernel yet (journaling filesystem foo or USB support bar)? What about proprietary binary-only modules (video card drivers, etc.)?

Thoughts from Conectiva Developers

Claudio Matsuoka (claudio@conectiva.com) and Marcelo Tosatti (marcelo@conectiva.com) of Conectiva share their thoughts on the subject:

The perfect integration of a new kernel in the distribution is, of course, one of the major concerns of every Linux distributor, and it involves expertise in kernel hackery as well as user feedback to find any problems not detected in internal testing. Internal testing just ensures that minimal quality standards are met; only the feedback of a larger circle of users can provide the real measurement of the distribution's quality.

Conectiva's kernel hackery expertise is supplied by a number of kernel developers, including MM guy Rik van Riel, Marcelo Tosatti (who actually builds the kernel packages), Arnaldo Melo, Aristeu Rozanski, and others. To extend its testing circles beyond Latin America, Conectiva is currently working to mirror its daily-updated snapshot distribution in as many places as possible and to gather as much community feedback as possible through the bug tracking system and mailing lists.

The kernel bundled in the distribution is heavily patched. How many patches it carries is a compromise between features and maintenance costs. Keeping our tree close to the official tree avoids duplicated effort within the community and also makes maintenance easier. However, there are some features which are interesting to us but not acceptable in the stable kernel tree for a variety of reasons. If these features are interesting enough for us to spend time testing and maintaining them, they will be adopted. Examples are the USB backport, raw I/O, bigmem, ReiserFS, and others. This is basically the way all commercial distribution vendors work.

The drawback is that once you provide USB backports, LVM, ReiserFS, or any other extra functionality, you can't remove it and tell users that it's not supported anymore. If something happens to the main development team, it's up to the distributor to keep maintaining that functionality. For the latest stable release, Conectiva included all these patches as well as many others, such as lm_sensors, I2C, DRBD, iBCS, IPVS, VM patches, supermount, raw I/O, fair scheduling, and dxr2. Locally developed patches are always sent upstream to the official maintainers.

Some of these add-ons require a set of userspace tools to work properly, and these tools also need maintenance. LVM, for example, has different, incompatible I/O protocol versions in different kernel releases, and the userspace tools must allow the user to boot the system using kernels with different IOP versions. In the snapshot release, Conectiva supports IOP6 (used in later 2.2 and early 2.4.0-test kernels) as well as IOP10 (used in the current 2.4). The use of wrappers and lvm-iop packages has been adopted by Conectiva and TurboLinux.
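
To make the wrapper idea concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the running kernel's LVM driver reports its IOP version somewhere under /proc/lvm (the exact location and format vary) and that the versioned tools are installed in hypothetical per-IOP directories such as /sbin/lvm-iop10/; the real lvm-iop packages may well lay things out differently.

    # Sketch of an LVM tool wrapper: detect the IOP version spoken by the
    # running kernel and hand over to the matching userspace binary.
    # The /proc/lvm/global probe and the /sbin/lvm-iopN layout are assumptions
    # made for this example, not the real package layout.
    import os
    import re
    import sys

    def kernel_lvm_iop(default=10):
        """Best-effort guess of the running kernel's LVM IOP version."""
        try:
            with open("/proc/lvm/global") as f:
                match = re.search(r"IOP[:\s]+(\d+)", f.read())
                if match:
                    return int(match.group(1))
        except OSError:
            pass
        return default

    # The wrapper is installed under the usual tool names (vgscan, vgchange,
    # and so on) and simply replaces itself with the matching per-IOP binary.
    tool = os.path.basename(sys.argv[0])
    real = "/sbin/lvm-iop%d/%s" % (kernel_lvm_iop(), tool)
    os.execv(real, [real] + sys.argv[1:])

With something like this in place, the administrator never has to care which IOP version the booted kernel speaks; the same command names work across kernel upgrades.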

Additionally, the system should still work with plain, non-patched kernels. An administrator must be able to compile her own stock kernel and still have the system working.
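
As a hypothetical illustration of that rule, a boot or mount script can probe /proc/filesystems for a distribution-patched feature such as supermount and quietly fall back to stock behaviour when the administrator has booted a plain kernel. The snippet below is only a sketch of the idea (the supermount filesystem type and the mount point are assumptions for the example), not part of any shipped package.

    # Sketch only: prefer a distribution-patched feature when the running
    # kernel has it, fall back to stock behaviour otherwise.
    def kernel_supports_fs(name):
        """True if /proc/filesystems lists the given filesystem type."""
        try:
            with open("/proc/filesystems") as f:
                return any(line.split()[-1] == name
                           for line in f if line.strip())
        except OSError:
            return False

    # With a patched kernel, removable media can use supermount; with a plain
    # stock kernel, fall back to an ordinary auto-detected mount.
    fs_type = "supermount" if kernel_supports_fs("supermount") else "auto"
    print("mounting /mnt/cdrom with -t", fs_type)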

Binary modules are not included in the kernel package. To be able to support the kernel code that ships in the official distribution, we cannot accept unfixable and unmaintainable code. There may be binary modules on the additional CDs that are shipped with the distribution for the user's convenience, but they are not supported, and the documentation is clear about that.

Recent comments

23 Feb 2001 19:02 albertfuller

Re: Where are the warriors today?
Off the bat, let me say that I am learning Linux and computing. A friend helped me put in RH 5.x some time ago, and I kept it as a second OS for a long time. I remember my first year of Linux: I spent most of my modest computer time configuring and checking things out.

I discovered MDK and wow! I came on board. Now I
run Linux as my OS-of-choice at home. I am still
learning; but often I put off the tinker gloves
and decide to USE my computer to surf, write, etc.

Nothing is a culture of one (except maybe
insanity). For the expert, and the character with
way too much time on his/her hands, for those
virtual individuals, I am sure spending the next
36 hrs finding out why some item did not fly is
the first choice in a worthwhile life--and that's
great. But the world is always more diverse than
some people are willing to admit.

(The existence of Mandrake itself speaks against a one-class society of warriors.)

I need you guys who are the warriors of Linux, but please don't despise the rest of the tribe: after all, you are defending the village from that evil invader who, at this moment, is about to lay siege to this village (ok, town):

Get ready, folks! The rape and pillaging of Linux has yet to begin... What, you think evilB will just donate his money to charity and disappear in an infinite smile?

11 Feb 2001 12:55 rsmith

DIY kernel upgrades
I'm used to installing a new kernel if it contains a bugfix or feature that I'm looking for. If you use LILO _and RTFM_, it's not that difficult.

Having said that, I think that the packaging systems of some distributions make it very difficult to install anything that's not in the distribution's preferred packaging format. One self-installed piece of software can invalidate your whole packaging system database. I don't think I need to elaborate on that.

That's why I favor distributions that do not have a packaging system that functions like a straitjacket (such as slackware or linuxfromscratch).

So my take on it is that DIY upgrades will be limited to people with at least an understanding of what constitutes a Linux system, and which pieces depend on each other.

Those that do not have that understanding have the option of learning that, or waiting for the distribution vendor to help them out.

09 Feb 2001 23:15 ice0

Re: Lack of kernel updates for released Linux distributions

> The main problem in doing such
> releases comes from the updates in the
> userspace utilities such as ReiserFS
> tools or LVM utilities.

In other words, what you're writing here confirms what I wrote earlier: I would need to maintain my own collection of required update packages just to try a new kernel. I'd better stick to a stable distribution. Not an old one, as you say, but the latest release.


> When you prepare
> a recent kernel and tools to
> work in an old distribution, you'll
> also need an extended test period to
> ensure that no (potentially harmful)
> bugs will be introduced.

I wouldn't mind if there were bugs in a ``hack''-prefixed kernel update series. I would try such packages at -my own risk-. But by trying them, I would be able to find and report bugs. Of course, if I need a complete beta distribution to support a new kernel series, that is an unacceptable condition. With a new distribution like Rawhide usually comes a high number of package upgrades which change the entire system and often introduce new and annoying bugs.

09 Feb 2001 16:30 cmatsuoka

Re: Lack of kernel updates for released Linux distributions

> I'd try out a new kernel version more
> often if there were kernel update
> packages for non-beta releases of my
> current Linux distribution. Means, I
> don't like to fetch packages from a beta
> or development version of my Linux
> distribution and mix these with my
> non-beta distribution. Actually I'd
> prefer ``hack''-prefixed kernel update
> packages which I could run with my
> current distribution without having to
> put my hands on packages from any beta
> distribution.


The main problem in doing such releases comes from the updates in the userspace utilities such as ReiserFS tools or LVM utilities. When you prepare a recent kernel and tools to work in an old distribution, you'll also need an extended test period to ensure that no (potentially harmful) bugs will be introduced. That's especially true for heavily patched kernels; for distributions shipping stock kernels it shouldn't be so problematic (problems in stock kernels can be readily detected by stock kernel users, who largely outnumber distro-specific kernel users).


In such a situation, experimental recent kernel packages for an old distribution are as risky as using beta packages. If they get sufficient testing, they won't be recent anymore. If you don't want to build your own recent stock kernel and want the new features from kernels shipped in recent distros, it's usually simpler just to apt-get dist-upgrade to a newer distribution version.

09 Feb 2001 15:12 dazk

Re: Where are the warriors today?

> If you're a user, does the wait for a
> new version of your distribution
> with the new kernel and features you
> need frustrate you, or do you just
> compile a new kernel on your own and
> hope the distribution is ready to handle
> it?
>
> Compile and let it fly, after screwing
> up your distribution with an
> incompatible kernel
> you can always run windows when you
> want to be productive again. :)
>
> In all seriousness, the GNU/Linux OS
> is NOT ready for people who would get
> frustrated and unwilling to download a
> newer stable kernel with the features
> they need. So if you are waiting around
> for your distro to include the kernel,
> just run windows to get the features you
> need.
>

Yeah, right, and to hell with it. Ever tried upgrading drivers? On Linux the worst thing that can happen is a kernel that doesn't do it. Make sure you include your old kernel in lilo.conf, as every kernel-compile howto tells you to, and you can boot your old kernel. Windows is not even capable of removing drivers completely. If you uninstall a driver, very often Windows finds and installs it again after a reboot without you having a chance to stop it; the driver just has to be low-level enough. Is that really better? Is it actually easier for someone to remove a driver by hand, deleting the appropriate inf and driver files, and maybe even cleaning the stuff that corrupts your system out of the registry mess?

I just realized you have not the slightest idea what you are talking about. I agree with you on one point: if you are not interested in getting to know what's going on on your computer, Windows might be the better choice, because you couldn't find out anyway. But if you spend a little bit of your brain power and time on reading and understanding howtos, and if you are prepared to accept the hints experienced people give you, it's not all that bad. As I said, you even have the fallback option.

I agree, upgrading to a new kernel major version is often more difficult, since it often involves library and tool upgrades as well. Then again, you can compare that procedure more to an install or upgrade of a new Windows version, which is something that doesn't work most of the time with Windows either. So where is your point?

It would be really nice if people who start flaming about Linux would at least know what they are talking about. Thanks.

Cheers,

Richard

