failure and diagnosis
This section explains what happens when the system crashes and (very briefly)
how to analyze crash dumps.
When the system crashes voluntarily it prints a message of the form
panic: why i gave up the ghost
on the console and enters the kernel debugger, ddb(4).
If you wish to report this panic, you should include the output of the
‘ps’ and ‘trace’ commands. Unless the ‘ddb.log’ sysctl has been disabled,
anything output to screen will be appended to the system message buffer,
from where it may be possible to retrieve it through the dmesg(8)
command after a warm
reboot. If the debugger command ‘boot dump’ is
entered, or if the debugger was not compiled into the kernel, or the debugger
was disabled with sysctl(8),
then the system dumps the contents of physical memory onto a mass storage
peripheral device. The particular device used is determined by the
‘dumps on’ directive in the config(8)
file used to build the kernel.
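For example, assuming the default setting of the ‘ddb.log’ sysctl, the panic
text can usually be recovered after the warm reboot like this (the output
shown is illustrative):
# sysctl ddb.log
ddb.log=1
# dmesg | grep ^panic
panic: why i gave up the ghost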
After the dump has been written, the system then invokes the automatic reboot
procedure as described in reboot(8). If auto-reboot is
disabled (in a machine dependent way) the system will simply halt at this
point.
Upon rebooting, and unless some unexpected inconsistency is encountered in the
state of the file systems due to hardware or software failure, the system will
copy the previously written dump into /var/crash using savecore(8), before
resuming multi-user operations.
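Assuming default settings, a listing of /var/crash after a first crash might
look like this (illustrative; bsd.0 and bsd.0.core reappear below, while
bounds and minfree are bookkeeping files kept by savecore(8)):
# ls /var/crash
bounds      bsd.0       bsd.0.core  minfree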
The system has a large number of internal consistency checks; if one of these
fails, then it will panic with a very short message indicating which one
failed. In many instances, this will be the name of the routine which detected
the error, or a two-word description of the inconsistency. A full
understanding of most panic messages requires perusal of the source code for
the system.
The most common cause of system failures is hardware failure (e.g., bad memory)
which can reflect itself in different ways. Here are the messages which are
most likely, with some hints as to causes. Left unstated in all cases is the
possibility that a hardware or software error produced the message in some
unexpected way.
- no init
- This panic message indicates filesystem problems, and
reboots are likely to be futile. Late in the bootstrap procedure, the
system was unable to locate and execute the initialization process,
init(8). The root filesystem
is incorrect or has been corrupted, or the mode or type of
/sbin/init forbids execution.
- trap type %d, code=%x, pc=%x
- An unexpected trap has occurred within the system; the trap
types are machine dependent and can be found listed in the machine-dependent
header file trap.h.
The code is the referenced address, and the pc is the program counter at the
time of the fault. Hardware flakiness will sometimes generate
this panic, but if the cause is a kernel bug, the kernel debugger
ddb(4) can be used to locate
the instruction and subroutine inside the kernel corresponding to the PC
value. If that is insufficient to suggest the nature of the problem, more
detailed examination of the system status at the time of the trap usually
can produce an explanation.
- init died
- The system initialization process has exited. This is bad
news, as no new users will then be able to log in. Rebooting is the only
fix, so the system just does it right away.
- out of mbufs: map full
- The network has exhausted its private page map for network
buffers. This usually indicates that buffers are being lost, and rather
than allow the system to slowly degrade, it reboots immediately. The map
may be made larger if necessary, as illustrated below.
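As a hedged illustration of enlarging the map: on systems that expose the
kern.maxclusters knob, the limit can be inspected and raised with sysctl(8);
the values shown are assumptions:
# sysctl kern.maxclusters
kern.maxclusters=6144
# sysctl kern.maxclusters=12288
kern.maxclusters: 6144 -> 12288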
That completes the list of panic types you are likely to see.
When the system crashes it writes (or at least attempts to write) an image of
memory, including the kernel image, onto the dump device. On reboot, the
kernel image and memory image are separated and preserved in the directory
/var/crash.
To analyze the kernel and memory images preserved as bsd.0 and
bsd.0.core, you should run gdb(1), loading in the images with
the following commands:
# gdb
GNU gdb 6.3
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-unknown-openbsd4.6".
(gdb) file /var/crash/bsd.0
Reading symbols from /var/crash/bsd.0...(no debugging symbols found)...done.
(gdb) target kvm /var/crash/bsd.0.core
[Note that the “kvm” target is currently only supported by gdb(1)
on some architectures.]
After this, you can use the where command to show a
trace of procedure calls that led to the crash.
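A sketch of what such a trace might look like; the addresses and frames
below are purely illustrative:
(gdb) where
#0  0xd0355d91 in sleep_finish (sls=0x0, do_sleep=0)
#1  0xd0356125 in tsleep (...)
...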
For custom-built kernels, you should use bsd.gdb
instead of bsd, thus allowing gdb(1) to show symbolic names for
addresses and line numbers from the source.
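A minimal sketch of loading such a kernel; the build directory path here is
an assumption:
(gdb) file /usr/src/sys/arch/i386/compile/GENERIC/bsd.gdb
(gdb) target kvm /var/crash/bsd.0.core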
Analyzing saved system images is sometimes called post-mortem debugging.
There is a class of analysis tools designed to work on both live systems and
saved images; most of them are linked with the kvm(3) library and share
option flags to specify the kernel and memory image. These tools typically
take the following options:
- -M core
- Normally this core is an image produced by savecore(8) but it can be
/dev/mem too, if you are looking at the live system (see the example after
this list).
- -N system
- Takes a kernel system image as an argument. This is where the symbolic
information is gotten from, which means the image cannot be stripped. In
some cases, using a bsd.gdb version of the kernel can assist even more.
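For instance, the running system can be examined by pointing a tool at the
installed kernel and at /dev/mem (a sketch):
# ps -N /bsd -M /dev/mem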
The following commands understand these options: ps(1), vmstat(8), and many
others. There are exceptions, however. For instance, ipcs(1) has renamed
the -M argument to be -C instead.
Examples of use:
# ps -N /var/crash/bsd.0 -M /var/crash/bsd.0.core -O paddr
The -O paddr option prints each process's struct proc address. This
is very useful information if you are analyzing process contexts in gdb(1).
# vmstat -N /var/crash/bsd.0 -M /var/crash/bsd.0.core -m
This analyzes memory allocations at the time of the crash. Perhaps some resource
was starving the system?
Like the tools mentioned above, gdb(1) can be used to analyze a
live system as well. This can be accomplished by not specifying a crash dump
when selecting the “kvm” target:
(gdb) target kvm
It is possible to inspect processes that entered the kernel by specifying a
process's struct proc address to the kvm proc command:
(gdb) kvm proc 0xd69dada0
#0 0xd0355d91 in sleep_finish (sls=0x0, do_sleep=0)
After this, the where
command will show a trace of
procedure calls, right back to where the selected process entered the kernel.
The following example should make it easier for a novice kernel developer to
find out where the kernel crashed.
First, in ddb(4)
find the function
that caused the crash. It is either the function at the top of the traceback
or the function under the call to panic().
The point of the crash usually looks something like “function+0x4711”.
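A sketch of such a traceback entry; the function names, arguments, and
offsets are illustrative:
ddb> trace
foo(0,4,d69dada0) at foo+0x4711
bar(d69dada0) at bar+0x42
...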
Find the function in the sources; let's say that the function is in foo.c.
Go to the kernel build directory, e.g.,
/usr/src/sys/arch/i386/compile/GENERIC, and do the following:
# objdump -S foo.o | less
Find the function in the output. The function will look something like this:
0:   17 47 11 42    foo %x, bar, %y
4:   47 11 42 17    foo bar allan %kaka
8:   42 17 47 11    XXXX boink %bloyt
The first number is the offset. Find the offset that you got in the ddb trace
(in this case it's 4711).
When reporting data collected in this way, include ~20 lines before and ~10
lines after the offset from the objdump output in the crash report, as well as
the output of the ddb(4) “show registers” command. It's important that the
output from objdump includes at least two or three lines of C code.
If you are sure you have found a reproducible software bug in the kernel, and
need help in further diagnosis, or already have a fix, use sendbug(1) to
send the developers a detailed description including the entire session from
gdb(1).