NAME
uvm_map, uvm_map_pageable, uvm_map_pageable_all, uvm_map_checkprot,
uvm_map_protect, uvmspace_alloc, uvmspace_exec, uvmspace_fork,
uvmspace_free, uvmspace_share, uvm_uarea_alloc, uvm_uarea_free,
UVM_MAPFLAG — virtual address space management interface
SYNOPSIS
#include <sys/param.h>
#include <uvm/uvm.h>

int
uvm_map(vm_map_t map, vaddr_t *startp, vsize_t size, struct uvm_object *uobj,
    voff_t uoffset, vsize_t alignment, unsigned int flags);

int
uvm_map_pageable(vm_map_t map, vaddr_t start, vaddr_t end,
    boolean_t new_pageable, int lockflags);

int
uvm_map_pageable_all(vm_map_t map, int flags, vsize_t limit);

boolean_t
uvm_map_checkprot(vm_map_t map, vaddr_t start, vaddr_t end,
    vm_prot_t protection);

int
uvm_map_protect(vm_map_t map, vaddr_t start, vaddr_t end, vm_prot_t new_prot,
    int et, boolean_t set_max, boolean_t checkimmutable);

struct vmspace *
uvmspace_alloc(vaddr_t min, vaddr_t max, boolean_t pageable,
    boolean_t remove_holes);

void
uvmspace_exec(struct proc *p, vaddr_t start, vaddr_t end);

struct vmspace *
uvmspace_fork(struct process *pr);

void
uvmspace_free(struct vmspace *vm);

struct vmspace *
uvmspace_share(struct process *pr);

vaddr_t
uvm_uarea_alloc(void);

void
uvm_uarea_free(struct proc *p);

unsigned int
UVM_MAPFLAG(vm_prot_t prot, vm_prot_t maxprot, vm_inherit_t inh, int advice,
    int flags);
DESCRIPTION
The uvm_map() function establishes a valid mapping in map map, which must be
unlocked. The new mapping has size size, which must be in PAGE_SIZE units.
If alignment is non-zero, it describes the required alignment of the mapping,
in power-of-two notation. The uobj and uoffset arguments can have four
meanings. When uobj is NULL and uoffset is UVM_UNKNOWN_OFFSET, uvm_map()
does not use the machine-dependent PMAP_PREFER function. If uoffset is any
other value, it is used as the hint to PMAP_PREFER. When uobj is not NULL
and uoffset is UVM_UNKNOWN_OFFSET, uvm_map() finds the offset based upon the
virtual address, passed as startp. If uoffset is any other value, a regular
mapping is established at that offset. The start address of the map will be
returned in startp.
The flags passed to uvm_map() are typically created using the UVM_MAPFLAG()
macro, which uses the following values. The prot and maxprot arguments can
take a mix of the following values:
#define PROT_MASK	0x07	/* protection mask */
#define PROT_NONE	0x00	/* protection none */
#define PROT_READ	0x01	/* read */
#define PROT_WRITE	0x02	/* write */
#define PROT_EXEC	0x04	/* exec */
The values that inh can take are:
#define MAP_INHERIT_MASK	0x30	/* inherit mask */
#define MAP_INHERIT_SHARE	0x00	/* "share" */
#define MAP_INHERIT_COPY	0x10	/* "copy" */
#define MAP_INHERIT_NONE	0x20	/* "none" */
#define MAP_INHERIT_ZERO	0x30	/* "zero" */
The values that advice can take are:
#define MADV_NORMAL	0x0	/* 'normal' */
#define MADV_RANDOM	0x1	/* 'random' */
#define MADV_SEQUENTIAL	0x2	/* 'sequential' */
#define MADV_MASK	0x7	/* mask */
The values that flags can take are:
#define UVM_FLAG_FIXED		0x0010000	/* find space */
#define UVM_FLAG_OVERLAY	0x0020000	/* establish overlay */
#define UVM_FLAG_NOMERGE	0x0040000	/* don't merge map entries */
#define UVM_FLAG_COPYONW	0x0080000	/* set copy_on_write flag */
#define UVM_FLAG_TRYLOCK	0x0100000	/* fail if we can not lock map */
#define UVM_FLAG_HOLE		0x0200000	/* no backend */
#define UVM_FLAG_QUERY		0x0400000	/* do everything, except actual execution */
#define UVM_FLAG_NOFAULT	0x0800000	/* don't fault */
#define UVM_FLAG_UNMAP		0x1000000	/* unmap to make space */
#define UVM_FLAG_STACK		0x2000000	/* page may contain a stack */
The UVM_MAPFLAG() macro arguments can be combined with a bitwise OR. There
are also some additional macros to extract bits from the flags. The
UVM_PROTECTION(), UVM_INHERIT(), UVM_MAXPROTECTION(), and UVM_ADVICE()
macros return the protection, inheritance, maximum protection, and advice,
respectively. uvm_map() returns a standard errno.
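As a minimal sketch, the following establishes an anonymous, readable and
writable mapping and lets uvm_map() choose the address; kernel_map, len, and
the particular flag choices are illustrative assumptions, not requirements of
this interface:

	vaddr_t va = 0;
	vsize_t size = round_page(len);	/* size must be in PAGE_SIZE units */
	int error;

	/* uobj == NULL and uoffset == UVM_UNKNOWN_OFFSET: anonymous memory,
	 * no PMAP_PREFER hint; alignment 0 imposes no extra alignment. */
	error = uvm_map(kernel_map, &va, size, NULL, UVM_UNKNOWN_OFFSET, 0,
	    UVM_MAPFLAG(PROT_READ | PROT_WRITE, PROT_READ | PROT_WRITE,
	    MAP_INHERIT_NONE, MADV_NORMAL, 0));
	if (error)
		return (error);
	/* on success, va holds the start address chosen by uvm_map() */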
The uvm_map_pageable() function changes the pageability of the pages in the
range from start to end in map map to new_pageable. The
uvm_map_pageable_all() function changes the pageability of all mapped
regions. If limit is non-zero and pmap_wired_count() is implemented, ENOMEM
is returned if the amount of wired pages exceeds limit. The map is locked on
entry if lockflags contains UVM_LK_ENTER, and locked on exit if lockflags
contains UVM_LK_EXIT. uvm_map_pageable() and uvm_map_pageable_all() return a
standard errno.
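As an illustrative sketch (the map pointer and range are assumptions), wiring
a range down and later releasing it could look like:

	/* wire the pages: new_pageable == FALSE, map not locked by caller */
	error = uvm_map_pageable(map, start, end, FALSE, 0);
	if (error)
		return (error);

	/* ... use the wired range ... */

	/* make the pages pageable again */
	error = uvm_map_pageable(map, start, end, TRUE, 0);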
The uvm_map_checkprot() function checks the protection of the range from
start to end in map map against protection, returning TRUE if the entire
range has at least the requested protection and FALSE otherwise.
The uvm_map_protect() function changes the protection of the range from
start to end in map map to new_prot, also setting the maximum protection of
the region to new_prot if set_max is non-zero. The et parameter should be 0,
unless a PROT_READ | PROT_WRITE mapping is being changed to extend the stack
limit, in which case it may be UVM_ET_STACK. This function returns a
standard errno.
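A small sketch of revoking write access from a range without changing its
maximum protection; the checkimmutable value passed here is an assumption
made purely for illustration:

	/* downgrade the range to read-only; et == 0 (not a stack extension),
	 * set_max == FALSE leaves the maximum protection unchanged,
	 * checkimmutable == TRUE is an illustrative choice */
	error = uvm_map_protect(map, start, end, PROT_READ, 0, FALSE, TRUE);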
The uvmspace_alloc() function allocates and returns a new address space
spanning the range from min to max, setting the pageability of the address
space to pageable. If remove_holes is non-zero, hardware ‘holes’ in the
virtual address space will be removed from the newly allocated address
space.
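For example, a pageable address space covering the usual user range could be
created as below; VM_MIN_ADDRESS and VM_MAXUSER_ADDRESS are assumed here as
the conventional bounds:

	struct vmspace *vm;

	/* pageable == TRUE, remove_holes == TRUE */
	vm = uvmspace_alloc(VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, TRUE, TRUE);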
The uvmspace_exec() function either reuses the address space of process p if
there are no other references to it, or creates a new one with
uvmspace_alloc(). The range of valid addresses in the address space is reset
to start through end.
The uvmspace_fork() function creates and returns a new address space based
upon the address space of process pr and is typically used when allocating
an address space for a child process.
The uvmspace_free() function lowers the reference count on the address space
vm, freeing the data structures if there are no other references.
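Taken together, a fork-like path might copy the parent's address space and
drop that reference again on an error path; this is only a sketch, and
setup_child() is a hypothetical follow-up step:

	struct vmspace *vm;
	int error;

	vm = uvmspace_fork(pr);		/* copy of the parent's address space */

	error = setup_child(vm);	/* hypothetical follow-up step */
	if (error) {
		uvmspace_free(vm);	/* drop the reference taken above */
		return (error);
	}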
The uvm_uarea_alloc() function allocates a thread's ‘uarea’, the memory
where its kernel stack and PCB are stored. The uvm_uarea_free() function
frees the uarea for thread p, which must no longer be running.
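A sketch of the allocate/free pairing; the failure check and the surrounding
thread setup are assumptions for illustration:

	vaddr_t uaddr;

	uaddr = uvm_uarea_alloc();	/* kernel stack + PCB for a new thread */
	if (uaddr == 0)			/* assumed failure convention */
		return (ENOMEM);

	/* ... later, once thread p is no longer running ... */
	uvm_uarea_free(p);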