Keio University
2007 Spring Semester

System Software / Operating Systems

2007 Spring Semester, Tuesdays, 2nd period
Course code: 60730
Location: SFC
Format: Lecture
Instructor: Rodney Van Meter
E-mail: rdv@sfc.keio.ac.jp

Lecture 7, May 22: Virtual Memory and Page Replacement Algorithms

Outline

Simple Swapping

As we discussed last week, the original form of multiprogramming actually involved swapping complete processes out of memory to a special reserved area of disk (or drum), and back in again. This approach allowed each process to act as if it owned all of the memory in the system, without worrying about other processes. However, swapping a process out and back in is not fast! We want to be able to share the resources of the computer among multiple processes, allowing fast process context switches so that multiple programs can appear to be using the CPU and other resources at the same time.

Introduction to Virtual Memory

Finally, we come to virtual memory (仮想記憶). With virtual memory, each process has its own address space. This concept is a very important instance of naming. Virtual memory (VM) provides several important capabilities: it isolates processes from one another, it lets a process use more memory than is physically present, and it allows memory to be shared between processes in a controlled way.

In most modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.

Page Tables

Virtual memory is usually implemented by dividing memory up into pages, which in Unix systems are typically, but not necessarily, four kilobytes (4KB) each. The page table is the data structure that holds the mapping from virtual to physical addresses. The page frame is the actual physical storage in memory.

The simplest approach would be a large, flat page table with one entry per page. The entries are known as page table entries, or PTEs. However, this approach results in a page table that is too large to fit inside the MMU itself, meaning that it has to be in memory. In fact, for a 4GB address space, with 32-bit PTEs and 4KB pages, the page table alone is 4MB! That's big when you consider that there might be a hundred processes running on your system.
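To make the arithmetic concrete, here is a minimal sketch in C (not code from any real kernel; flat_page_table, pte_t, and translate are made-up names) of how a 32-bit virtual address splits into a page number and an offset, and how the flat table would be indexed:

#include <stdint.h>

#define PAGE_SHIFT  12                         /* 4KB pages: 2^12 bytes */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define NUM_PAGES   (1u << (32 - PAGE_SHIFT))  /* 2^20 pages in a 4GB space */

typedef uint32_t pte_t;                        /* one 32-bit page table entry */

/* 2^20 entries * 4 bytes each = 4MB, the figure quoted above. */
pte_t flat_page_table[NUM_PAGES];

/* Translate a virtual address, assuming the PTE simply holds the frame number. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */
    uint32_t frame  = flat_page_table[vpn];      /* physical frame number */
    return (frame << PAGE_SHIFT) | offset;
}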

The solution is multi-level page tables, in which only the parts of the table that are actually in use need to exist. As the size of the process grows, additional pages are allocated, and when they are allocated the matching part of the page table is filled in.

The translation from virtual to physical address must be fast. This fact argues for doing as much of the translation as possible in hardware, but the tradeoff is more complex hardware and more expensive process switches. Since it is not practical to put the entire page table in the MMU, the MMU includes a translation lookaside buffer (TLB): a small cache of recently used virtual-to-physical translations.
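Conceptually, the lookup order is: check the TLB first, and only on a miss walk the page table in memory. A rough sketch (purely illustrative; the tlb array, its layout, and tlb_lookup are invented names, not any real MMU's design):

#include <stdint.h>

#define TLB_ENTRIES 64

struct tlb_entry {
    uint32_t vpn;        /* virtual page number */
    uint32_t frame;      /* physical frame number */
    int      valid;
};

struct tlb_entry tlb[TLB_ENTRIES];

/* Return 1 on a TLB hit (translation found without touching memory),
 * 0 on a miss (the MMU must then walk the page table in memory). */
int tlb_lookup(uint32_t vpn, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *frame = tlb[i].frame;
            return 1;
        }
    }
    return 0;
}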

Linux Page Tables

PGD is the page global directory, PMD is the page middle directory, and PTE is the page table entry, of course.

(Images from O'Reilly's book on Linux device drivers, and from lvsp.org.)

We don't have time to go into the details right now, but you should be aware that implementing page tables for a 64-bit processor is a lot more complicated when performance is taken into consideration.

Linux uses a three-level (or, sometimes, four-level) page table system. Each level supports 512 entries: "With Andi's patch, the x86-64 architecture implements a 512-entry PML4 directory, 512-entry PGD, 512-entry PMD, and 512-entry PTE. After various deductions, that is sufficient to implement a 128TB address space, which should last for a little while," says Linux Weekly News.
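To see where those numbers come from: 512 entries per level means 9 bits of index per level, so four levels plus the 12-bit page offset cover 4 x 9 + 12 = 48 bits of virtual address, or 256TB in total, half of which (128TB) is left after the "various deductions." A small sketch of how a 48-bit x86-64 virtual address splits into those indices (the names and the example address are mine, not the kernel's):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12        /* 4KB pages */
#define LEVEL_BITS   9        /* 512 entries per level = 2^9 */
#define LEVEL_MASK   0x1ffULL

int main(void)
{
    uint64_t vaddr = 0x00007f1234567abcULL;   /* an arbitrary example address */

    /* 4 levels x 9 bits of index + 12 bits of offset = 48 bits of virtual address. */
    printf("offset %llx\n", (unsigned long long)(vaddr & ((1ULL << PAGE_SHIFT) - 1)));
    printf("pte    %llx\n", (unsigned long long)((vaddr >> PAGE_SHIFT) & LEVEL_MASK));
    printf("pmd    %llx\n", (unsigned long long)((vaddr >> (PAGE_SHIFT + LEVEL_BITS)) & LEVEL_MASK));
    printf("pgd    %llx\n", (unsigned long long)((vaddr >> (PAGE_SHIFT + 2*LEVEL_BITS)) & LEVEL_MASK));
    printf("pml4   %llx\n", (unsigned long long)((vaddr >> (PAGE_SHIFT + 3*LEVEL_BITS)) & LEVEL_MASK));
    return 0;
}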

#define IA64_MAX_PHYS_BITS      50      /* max. number of physical address bits (architected) */
...
/*
 * Definitions for fourth level:
 */
#define PTRS_PER_PTE    (__IA64_UL(1) << (PTRS_PER_PTD_SHIFT))

Hard and Soft Faults

When an application attempts to reference a memory address, and the address is not currently mapped in the process's address space, a page fault occurs. The fault traps into the kernel, which must decide what to do about it. If the process is not allowed to access the page, on a Unix machine a segmentation fault is signalled to the application. If the kernel finds that the page the application was attempting to access is already present somewhere in physical memory, it can simply add that page to the application's address space. We call this a soft fault (or minor fault). If the desired page must be retrieved from disk, it is known as a hard fault (or major fault).
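On a Unix/Linux system you can watch the two kinds of faults from user space: getrusage() reports separate counts of minor (soft) and major (hard) faults. A small sketch (the 64MB buffer size is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

/* Touch a large freshly-allocated buffer and see how many soft (minor)
 * faults it generates; getrusage() reports minor and major fault counts. */
int main(void)
{
    size_t len = 64 * 1024 * 1024;         /* 64MB */
    char *buf = malloc(len);
    if (buf == NULL)
        return 1;
    memset(buf, 0xaa, len);                /* each newly touched page faults once */

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("minor (soft) faults: %ld\n", ru.ru_minflt);
    printf("major (hard) faults: %ld\n", ru.ru_majflt);
    free(buf);
    return 0;
}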

Wired Memory

Wired memory (or pinned memory) is memory that the kernel keeps locked in physical memory; it cannot be paged out. Below is a good description of MacOS X's wired memory usage, from http://developer.apple.com/documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html.

Wired Memory

Wired memory (also called resident memory) stores kernel code and data structures that should never be paged out to disk. Applications, frameworks, and other user-level software cannot allocate wired memory. However, they can affect how much wired memory exists at any time. There is memory overhead associated with each kernel resource expended on behalf of a program.

Table 2 lists some of the wired-memory costs for user-generated entities.

Table 2  Wired memory generated by user-level software

Resource        Wired Memory Used by Kernel
Process         16 kilobytes
Thread          blocked in a continuation: 5 kilobytes; blocked: 21 kilobytes
Mach port       116 bytes
Mapping         32 bytes
Library         2 kilobytes plus 200 bytes for each task that uses it
Memory region   160 bytes

Note: These measurements will change with each new Mac OS X release. They are provided here to give you a rough estimate of the relative cost of system resource usage.

As you can see, each thread created, each subprocess forked, and each library linked contributes to the resident footprint of the system.

In addition to wired memory generated through user-level requests, the following kernel entities also use wired memory:

Wired data structures are also associated with the physical page and map tables used to store virtual-memory mapping information. Both of these entities scale with the amount of available physical memory. Consequently, when you add memory to a system the wired memory increases even if nothing else changes. When the computer is first booted into the Finder, with no other applications running, wired memory consumes approximately 14 megabytes of a 64 megabyte system and 17 megabytes of a 128 megabyte system.

Wired memory is not immediately released back to the free list when it becomes invalid. Instead it is “garbage collected” when the free-page count falls below the threshold that triggers page out events.

(end of text from Apple)
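On Unix/Linux systems, ordinary processes can also ask for some of their own pages to be pinned, subject to privilege and resource limits, using the POSIX mlock() call. A minimal sketch (the 1MB buffer is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Lock a buffer into physical memory so it cannot be paged out. */
int main(void)
{
    size_t len = 1024 * 1024;             /* 1MB */
    char *buf = malloc(len);
    if (buf == NULL)
        return 1;

    if (mlock(buf, len) != 0) {           /* may fail: RLIMIT_MEMLOCK, privilege */
        perror("mlock");
        return 1;
    }
    memset(buf, 0, len);                  /* these pages now stay resident */
    munlock(buf, len);
    free(buf);
    return 0;
}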

Memory Pressure and the Reference Trace

We will discuss the process of paging, where parts of memory are stored on disk when memory pressure is high. The memory pressure is the number of pages that processes and the kernel are currently trying to access, compared to the number of physical pages (or page frames) that are available in the system. When the pressure is low, everything fits comfortably in memory, and only things that have never been referenced have to be brought in from disk. When the pressure is high, not everything fits in memory, and we must swap out entire processes or page out portions of processes (in some systems, parts of the kernel may also be pageable).

The reference trace (or reference string) is the way we describe what memory has recently been used: it is the total history of the system's accesses to memory. The reference trace is an important theoretical concept, but we can't track it exactly in the real world, so various algorithms for keeping approximate track have been developed.

Page Replacement Algorithms

When the kernel decides to page something in, and memory is full, it must decide what to page out. We are looking for several features in a page replacement algorithm: a low page-fault rate (as close to the optimal algorithm as possible), low overhead on each memory reference, and modest hardware requirements.

The optimal algorithm (known as OPT or the clairvoyant algorithm) is known: throw out the page that will not be reused for the longest time in the future. Unfortunately, it's impossible to implement, since we don't know the exact future reference trace until we get there!

There are many algorithms for doing page replacement, some of which require extra hardware support. (Most take advantage of the referenced and modified bits in the PTE.) Here are a few: FIFO, NRU, LRU, clock (second chance), and working set.

FIFO is pretty obvious: first-in, first-out. It doesn't work terribly well. The others we'll look at one by one.
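As a toy illustration (not kernel code), here is FIFO simulated on a short reference string with three page frames; counting the faults by hand and comparing against what OPT would do on the same string is a useful exercise:

#include <stdio.h>

#define NFRAMES 3

/* Simulate FIFO page replacement on a small reference string. */
int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof(refs) / sizeof(refs[0]);
    int frames[NFRAMES] = {-1, -1, -1};
    int next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest resident page */
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    printf("FIFO faults: %d out of %d references\n", faults, nrefs);
    return 0;
}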

NRU

NRU, or Not Recently Used, uses the referenced and modified bits (Tanenbaum refers to these as the R and M bits; on x86, they are Accessed and Dirty) in the PTE to implement a very simple algorithm. The two bits divide the pages up into four classes. Any page from the lowest occupied class can be chosen to be replaced. An important feature of this algorithm is that it prefers to replace pages that have not been modified, which saves a disk I/O to write them out.

When a clock interrupt is received, all of the R and M bits in all of the page tables of all of the memory-resident processes are cleared. (Obviously, this is expensive, but there are a number of optimizations that can be done.) The R bit is then set whenever a page is accessed, and the M bit is set whenever a page is modified. The MMU may do this automatically in hardware, or it can be emulated by setting the protection bits in the PTE to trap, then letting the trap handler manipulate R and M appropriately and clear the protection bits.

Because NRU does not distinguish among the pages in one of the four classes, it often makes poor decisions about what pages to page out.
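A tiny sketch of the class computation (illustration only; a real kernel reads the R and M bits out of the PTE rather than taking them as arguments):

/* NRU sketch: class 0 = not referenced, not modified (best to evict);
 * class 3 = referenced and modified (worst: it must be written to disk first). */
int nru_class(int referenced, int modified)
{
    return (referenced << 1) | modified;   /* yields class 0, 1, 2, or 3 */
}

A victim is then chosen from the lowest-numbered class that contains any pages.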

LRU

LRU, or Least Recently Used, is a pretty good approximation to OPT, but far from perfect. The full implementation of LRU would require being able to exactly order the PTEs according to recency of access. A linked list could be used, or a counter stored in the PTE itself. In either case, every memory access requires updating an in-memory data structure, which is too expensive.

According to one source, Linux, FreeBSD and Solaris may all use a very heavily-modified form of LRU. (I suspect this information is out of date, but have not had time to dig through the Linux kernel yet.)

Clock (Second Chance)

With the clock algorithm, all of the page frames are ordered in a ring. The order never has to change, and few changes are made to the in-memory structures, so its execution performance is good. The algorithm uses a clock hand that points to a position in the ring. When a page fault occurs, the memory manager checks the page currently pointed to. If the R bit is zero, that page is replaced. If R is one, then R is set to zero, and the clock hand is advanced until a page with R = 0 is found. This algorithm is also called second chance.
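A sketch of the clock hand in C (illustrative only; the frames array and its layout are made up):

#define NFRAMES 1024

struct frame {
    int page;            /* which page occupies this frame */
    int referenced;      /* the R bit, set whenever the page is accessed */
};

struct frame frames[NFRAMES];
static int hand = 0;     /* the clock hand: a position in the ring of frames */

/* Advance the hand until a frame with R == 0 is found; that frame is the
 * victim.  Frames passed over get their R bit cleared (their second chance). */
int clock_select_victim(void)
{
    for (;;) {
        if (frames[hand].referenced == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = 0;       /* give it a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}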

I believe early versions of BSD used clock; I'm not sure if they still do.

Working Set

Early on in the history of virtual memory, researchers recognized that not all pages are accessed uniformly. Every process has some pages that it accesses frequently, and some that are accessed only occasionally. The set of pages that a process is currently accessing is known as its working set. Some VM systems attempt to track this set, and page it in or out as a unit. Wikipedia says that VMS uses a form of FIFO, but my recollection is that it actually uses a form of working set.

Global v. Local (per-process) Replacement

So far, we have described paging in terms of a single process, or a single reference string. However, in a multiprogrammed machine, several processes are active at effectively the same time. How do you allocate memory among the various processes? Is all of memory treated as one global pool, or are there per-process limits?

Most systems include at least a simple way to set a maximum upper bound on the size of virtual memory and on the size of the resident memory set. On a Unix or Linux system, the shell usually provides a built-in command called ulimit, which will tell you what those limits are. The corresponding system calls are getrlimit and setrlimit. VMS has many parameters that control the behavior of the VM system.
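A quick sketch of the programmatic equivalent of ulimit, reading the current soft limits with getrlimit():

#include <stdio.h>
#include <sys/resource.h>

/* Print the current soft limits on virtual address space and resident set size. */
int main(void)
{
    struct rlimit as, rss;
    getrlimit(RLIMIT_AS, &as);     /* total virtual address space */
    getrlimit(RLIMIT_RSS, &rss);   /* resident set size (not enforced on modern Linux) */
    printf("RLIMIT_AS  soft: %llu\n", (unsigned long long)as.rlim_cur);
    printf("RLIMIT_RSS soft: %llu\n", (unsigned long long)rss.rlim_cur);
    return 0;
}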

Note that while the other algorithms conceivably work well when treating all processes as a single pool, working set does not.

Impact of Streaming I/O

Streaming I/O data (video, audio, etc.) tends to be used only once. However, the VM system does not necessarily know this. If the VM system behaves normally, the streaming pages always look recently referenced, so other, possibly more useful, pages will be paged out in preference to them.
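One mitigation is for the application to tell the kernel about its access pattern. On Unix/Linux systems this can be done with posix_fadvise() for file I/O (or madvise() for mapped memory); a minimal sketch, assuming fd is a file being read once, sequentially:

#include <fcntl.h>

/* Hint that fd will be read sequentially; call once before streaming begins. */
void advise_sequential(int fd, off_t length)
{
    posix_fadvise(fd, 0, length, POSIX_FADV_SEQUENTIAL);
}

/* After a chunk has been processed, tell the kernel its cached pages can be
 * dropped rather than displacing other, more useful pages. */
void advise_done_with(int fd, off_t offset, off_t length)
{
    posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);
}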

Page and Swap Files and Partitions

Historically, VM systems often used a dedicated area of the disk, known as the swap partition, to hold pages of a process that have been paged out. There were two good reasons for this: writing to a raw partition avoids file system overhead, so paging I/O was faster, and the space was reserved in advance, so it was guaranteed to be available when needed.

Now it is generally accepted that file system performance is acceptable, and that being able to dynamically (or, at least, without repartitioning the disk drive) allocate space for swapping is important.

In modern systems, multiple page files are usually supported, and can often be added dynamically. See the system call swapon() on Unix/Linux systems.
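For reference, on Linux the call looks roughly like this (a sketch only: the swap file path is just an example, the file must already have been prepared with mkswap, and the call requires root privilege):

#include <stdio.h>
#include <sys/swap.h>

/* Add a prepared swap file to the system's paging space. */
int main(void)
{
    if (swapon("/var/swapfile", 0) != 0) {   /* example path, not a real recommendation */
        perror("swapon");
        return 1;
    }
    return 0;
}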

Paging Daemons

Often, the responsibility for managing pages to be swapped in and out, especially the I/O necessary, is delegated to a special process known as the page daemon.

Paging in Other Contexts

The exact same set of algorithms and techniques can be used inside e.g. a relational database to decide which objects to keep in memory, and which to flush. The memory cache in a CPU uses exactly the same set of techniques, except that it must all be implemented in hardware. The same aging and garbage collection techniques apply to any finite cache, including a DNS name translation cache.

Homework

This week's homework:

  1. Go back and rerun your memory copy experiments for sizes up to 100MB or so, and produce a graph with error bars and a linear fit. What is your Y intercept (the fixed, overhead cost) and your slope (the per-unit cost)? Tell me why you believe the linear fit does or does not represent the actual cost of the operation.
  2. Now run up to sizes much larger than your physical memory. What happens? Graph the output. (Note: this may take a long time to run!)

Readings for Next Week and Followup for This Week

Additional Information