Like most modern kernels, Linux employs a technique known as virtual memory management. The aim of this technique is to make efficient use of both the CPU and RAM (physical memory) by exploiting a property that is typical of most programs: locality of reference.
Source of all the things below: The Linux Programming Interface.
Most programs demonstrate two kinds of locality:
- Spatial locality is the tendency of a program to reference memory addresses that are near those that were recently accessed (because of sequential processing of instructions and, sometimes, sequential processing of data structures).
- Temporal locality is the tendency of a program to access the same memory addresses in the near future that it accessed in the recent past (because of loops).
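As a rough illustration (my own example, not from the book), the following C fragment exhibits both kinds of locality: the row-by-row traversal touches adjacent addresses (spatial locality), while the accumulator and loop counters are reused on every iteration (temporal locality).

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int grid[ROWS][COLS];

int main(void)
{
    long sum = 0;

    /* Spatial locality: the inner loop walks each row sequentially,
       so consecutive accesses touch adjacent addresses (and thus the
       same or neighbouring pages and cache lines). */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += grid[i][j];

    /* Temporal locality: 'sum' and the loop counters are accessed
       again and again within a short span of time. */
    printf("sum = %ld\n", sum);
    return 0;
}
```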

The upshot of locality of reference is that it is possible to execute a program while maintaining only part of its address space in RAM.
A virtual memory scheme splits the memory used by each program into small, fixed-size units called pages. Correspondingly, RAM is divided into a series of page frames of the same size. At any one time, only some of the pages of a program need to be resident in physical memory page frames; these pages form the so-called resident set. Copies of the unused pages of a program are maintained in the swap area—a reserved area of disk space used to supplement the computer’s RAM—and loaded into physical memory only as required. When a process references a page that is not currently resident in physical memory, a page fault occurs, at which point the kernel suspends execution of the process while the page is loaded from disk into memory.
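As a Linux-specific sketch (my own, not from TLPI), mincore(2) lets a process ask the kernel which pages of a mapping are currently resident, which makes the lazy loading described above directly observable. The mapping size and the pages touched below are arbitrary choices for illustration.

```c
#define _DEFAULT_SOURCE            /* for mincore() and MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    size_t npages = 8;
    size_t len = npages * page_size;

    /* Anonymous private mapping: pages are allocated lazily, so none
       of them is resident until it is first touched. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    p[0] = 1;                       /* touch page 0: minor page fault */
    p[3 * page_size] = 1;           /* touch page 3 */

    unsigned char vec[npages];      /* one status byte per page */
    if (mincore(p, len, vec) == -1) { perror("mincore"); exit(EXIT_FAILURE); }

    for (size_t i = 0; i < npages; i++)
        printf("page %zu: %s\n", i, (vec[i] & 1) ? "resident" : "not resident");

    munmap(p, len);
    return 0;
}
```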

In order to support this organization, the kernel maintains a page table for each process. The page table describes the location of each page in the process’s virtual address space (the set of all virtual memory pages available to the process). Each entry in the page table either indicates the location of a virtual page in RAM or indicates that it currently resides on disk.
Not all address ranges in the process’s virtual address space require page-table entries. Typically, large ranges of the potential virtual address space are unused, so that it isn’t necessary to maintain corresponding page-table entries. If a process tries to access an address for which there is no corresponding page-table entry, it receives a SIGSEGV signal.
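To make this concrete, here is a minimal sketch (my own, not from the book) that installs a SIGSEGV handler with SA_SIGINFO and then dereferences an address that almost certainly has no page-table entry; si_addr reports the faulting address. The handler calls printf() purely for illustration, which is not async-signal-safe.

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void segv_handler(int sig, siginfo_t *si, void *ucontext)
{
    (void) sig; (void) ucontext;
    /* Not async-signal-safe; kept short for illustration only. */
    printf("caught SIGSEGV: faulting address = %p\n", si->si_addr);
    _exit(EXIT_FAILURE);
}

int main(void)
{
    struct sigaction sa;
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = segv_handler;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGSEGV, &sa, NULL) == -1) { perror("sigaction"); exit(EXIT_FAILURE); }

    /* Access an address for which this process has no page-table
       entry, triggering a SIGSEGV. */
    volatile char *bad = (volatile char *) (uintptr_t) 0xdeadbeefULL;
    char c = *bad;
    (void) c;

    return 0;            /* not reached */
}
```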
A process’s range of valid virtual addresses can change over its lifetime, as the kernel allocates and deallocates pages (and page-table entries) for the process. This can happen in the following circumstances:
- as the stack grows downward beyond limits previously reached;
- when memory is allocated or deallocated on the heap, by raising the program break using brk(), sbrk(), or the malloc family of functions (Chapter 7);
- when System V shared memory regions are attached using shmat() and detached using shmdt() (Chapter 48); and
- when memory mappings are created using mmap() and unmapped using munmap() (Chapter 49); a short sketch of this case follows the list.
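A minimal sketch of the last case (my own example): mmap() adds a new range of valid addresses (and, lazily, page-table entries) to the process, and munmap() removes it again, after which any access to that range would raise SIGSEGV.

```c
#define _DEFAULT_SOURCE            /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);

    /* mmap() extends the set of valid virtual addresses by one page. */
    char *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    strcpy(p, "hello from a freshly mapped page");
    printf("%s (at %p)\n", p, (void *) p);

    /* munmap() removes the range again; touching 'p' after this
       point would have no page-table entry and raise SIGSEGV. */
    if (munmap(p, page_size) == -1) { perror("munmap"); exit(EXIT_FAILURE); }

    return 0;
}
```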
Virtual memory management separates the virtual address space of a process from the physical address space of RAM. This provides many advantages:
- Processes are isolated from one another and from the kernel, so that one process can’t read or modify the memory of another process or the kernel. This is accomplished by having the page-table entries for each process point to distinct sets of physical pages in RAM (or in the swap area).
- Where appropriate, two or more processes can share memory. The kernel makes this possible by having page-table entries in different processes refer to the same pages of RAM. Memory sharing occurs in two common circumstances:
  - Multiple processes executing the same program can share a single (read-only) copy of the program code. This type of sharing is performed implicitly when multiple programs execute the same program file (or load the same shared library).
  - Processes can use the shmget() and mmap() system calls to explicitly request sharing of memory regions with other processes. This is done for the purpose of interprocess communication; a sketch of explicit sharing via mmap() appears at the end of this section.
- The implementation of memory protection schemes is facilitated; that is, page-table entries can be marked to indicate that the contents of the corresponding page are readable, writable, executable, or some combination of these protections. Where multiple processes share pages of RAM, it is possible to specify that each process has different protections on the memory; for example, one process might have read-only access to a page, while another has read-write access (the sketch at the end of this section also illustrates this).
- Programmers, and tools such as the compiler and linker, don’t need to be concerned with the physical layout of the program in RAM.
- Because only a part of a program needs to reside in memory, the program loads and runs faster. Furthermore, the memory footprint (i.e., virtual size) of a process can exceed the capacity of RAM.
One final advantage of virtual memory management is that since each process uses less RAM, more processes can simultaneously be held in RAM. This typically leads to better CPU utilization, since it increases the likelihood that, at any moment in time, there is at least one process that the CPU can execute.
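The sketch below (my own, assuming a shared anonymous mapping inherited across fork()) ties the sharing and protection points together: after fork(), parent and child refer to the same page frames through their own page-table entries, and mprotect() in the parent changes only the parent’s protections on those pages.

```c
#define _DEFAULT_SOURCE            /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);

    /* A shared anonymous mapping: after fork(), parent and child have
       page-table entries that refer to the same page frames. */
    char *shared = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    switch (fork()) {
    case -1:
        perror("fork"); exit(EXIT_FAILURE);

    case 0:                                 /* child: writes to the shared page */
        strcpy(shared, "written by the child");
        _exit(EXIT_SUCCESS);

    default:                                /* parent */
        wait(NULL);                         /* wait for the child to finish */

        /* Protections are per process: making the mapping read-only
           here affects only the parent's page-table entries. */
        if (mprotect(shared, page_size, PROT_READ) == -1) {
            perror("mprotect"); exit(EXIT_FAILURE);
        }
        printf("parent sees: %s\n", shared);
        munmap(shared, page_size);
    }
    return 0;
}
```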