When using physical addresses directly, there is no virtual-to-physical translation overhead. Assume it takes 100 nanoseconds to make a memory reference. If we used physical addresses directly, then every memory reference would take 100 nanoseconds.
Without a TLB, each reference costs one memory access for the page table entry plus one for the data itself:
100 + 100 = 200 ns
With a 75% TLB hit rate (a hit costs 100 ns, a miss 200 ns):
0.75*100 + 0.25*200 = 75 + 50 = 125 ns
With a 99.5% TLB hit rate:
0.995*100 + 0.005*200 = 99.5 + 1 = 100.5 ns
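These effective access times are easy to verify with a short Python sketch (the names and structure here are our own, not part of the problem):

    # Effective access time with a TLB: a hit costs one memory reference,
    # a miss costs two (one for the page table entry, one for the data).
    MEM_NS = 100  # cost of a single memory reference, per the problem

    def effective_access_ns(hit_rate):
        return hit_rate * MEM_NS + (1 - hit_rate) * (2 * MEM_NS)

    for rate in (0.0, 0.75, 0.995):
        print(f"hit rate {rate:.1%}: {effective_access_ns(rate):.1f} ns")
    # hit rate 0.0%: 200.0 ns, 75.0%: 125.0 ns, 99.5%: 100.5 ns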
The offset occupies 10 bits, so the virtual page number is the remaining 32 - 10 = 22 bits of the 32-bit virtual address.
  0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
& 0xFFFFFC00 = 1111 1111 1111 1111 1111 1100 0000 0000   (first 22 bits are the page number)
  ----------
  0x0000FC00 = 0000 0000 0000 0000 1111 1100 0000 0000

0xFC00 is the page address. To get the page number, we shift the offset off of the page address:

  0x0000FC00 = 0000 0000 0000 0000 1111 1100 0000 0000
  >> 10 bits:
  0x0000003F = 0000 0000 0000 0000 0000 0000 0011 1111

0x3F is the virtual page number. (We'll accept 0xFC00 here, but generally speaking it is important to distinguish between the virtual page number and the virtual address of the page.)
  0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
& 0x000003FF = 0000 0000 0000 0000 0000 0011 1111 1111   (last 10 bits are the offset)
  ----------
  0x000003FF = 0000 0000 0000 0000 0000 0011 1111 1111

0x3FF is the offset.
The offset is 10 bits. To convert a page number to a page address, we shift the page number left by the width of the offset (10 bits):

  0x4 << 10 = 0100 << 10 = 0001 0000 0000 0000 = 0x1000
The physical page for 0xFFFF is physical page 0x4.
The physical address of physical page 0x4 is 0x1000.
The offset of virtual address 0xFFFF is 0x3FF.
So the physical address of virtual address 0xFFFF is (0x1000 | 0x3FF) = 0x13FF.
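The whole translation can be checked with a few shifts and masks. A minimal Python sketch; the page_table dict below is just a stand-in for whatever structure holds the mapping given in the problem:

    OFFSET_BITS = 10
    OFFSET_MASK = (1 << OFFSET_BITS) - 1      # 0x3FF

    vaddr = 0xFFFF
    vpn = vaddr >> OFFSET_BITS                # 0x3F, the virtual page number
    offset = vaddr & OFFSET_MASK              # 0x3FF

    page_table = {0x3F: 0x4}                  # VPN 0x3F -> physical page 0x4
    ppn = page_table[vpn]

    paddr = (ppn << OFFSET_BITS) | offset     # OR in the offset, not AND
    print(hex(vpn), hex(offset), hex(paddr))  # 0x3f 0x3ff 0x13ff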
The virtual address space is 2^44 bytes in size. Thus, there are 2^44 / 2^16 = 2^28 pages.
Each page table page can hold 2^16/4, or 2^14, page table entries. Thus, for each page table level, we will use 14 bits of the virtual address to index into that page table page. We will use 16 bits to address a specific byte within the data page. Conveniently, this all adds up to 44 bits, so the breakdown is: 14 bits for the top-level page table, 14 bits for the next-level page table, and 16 bits for the offset.
4 GB translates to 2^32 / 2^16 = 2^16 pages of memory. So, we will need 2^16 page table entries, which fit into 2^16 / 2^14 = 4 page table pages. We also need one additional page table page for the top level of the page table tree. Thus, we need 2^16 + 5 page frames to store all data and page table pages.
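These counts are easy to sanity-check in code. A small sketch under the stated assumptions (2^16-byte pages, 4-byte page table entries):

    PAGE_SIZE = 2**16
    PTE_SIZE = 4
    PTES_PER_PAGE = PAGE_SIZE // PTE_SIZE           # 2^14 entries per page table page

    data_pages = (4 * 2**30) // PAGE_SIZE           # 4 GB of data = 2^16 pages
    second_level = -(-data_pages // PTES_PER_PAGE)  # ceiling division: 4 pages
    top_level = 1                                   # one root page table page

    print(data_pages + second_level + top_level)    # 65541, i.e., 2^16 + 5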
Page faults incur an additional cost of 10 ms for every 20,000,000 instructions, which is an average additional cost of 0.5 ns per instruction (10 ms / 20,000,000 = 0.5 ns). So, if instructions take 2 nanoseconds, the average instruction time is 2.5 ns; if instructions take 1 nanosecond, the average instruction time is 1.5 ns.
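The same amortization in code, as a trivial sketch:

    FAULT_COST_NS = 10_000_000                     # 10 ms per fault, in nanoseconds
    INSTRS_PER_FAULT = 20_000_000
    overhead = FAULT_COST_NS / INSTRS_PER_FAULT    # 0.5 ns per instruction

    for base_ns in (2, 1):
        print(base_ns + overhead)                  # 2.5 and 1.5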
(Notation: a leading "+" marks a page fault; "<- n" is the page brought in, "-> n" the page evicted, and "(n)" a hit on page n. States list pages from next-to-be-evicted to most recently loaded or used.)

[FIFO]
+ 4    : <- 4
+ 42   : <- 2
+ 427  : <- 7
  427  : (2)
+ 4275 : <- 5
+ 2753 : <- 3, -> 4
  2753 : (3)
  2753 : (2)
  2753 : (3)
+ 7531 : <- 1, -> 2
+ 5312 : <- 2, -> 7
+ 3126 : <- 6, -> 5
= 8 page faults with pages 3126 in memory.

[LRU]
+ 4    : <- 4
+ 42   : <- 2
+ 427  : <- 7
  472  : (2 moved to end on access)
+ 4725 : <- 5
+ 7253 : <- 3, -> 4
  7253 : (3)
  7532 : (2 moved to end on access)
  7523 : (3 moved to end on access)
+ 5231 : <- 1, -> 7
  5312 : (2 moved to end on access)
+ 3126 : <- 6, -> 5
= 7 page faults with pages 3126 in memory.
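Both traces can be reproduced with a short simulator. A Python sketch, assuming four frames and the reference string 4, 2, 7, 2, 5, 3, 3, 2, 3, 1, 2, 6 recovered from the trace above:

    from collections import OrderedDict

    def simulate(refs, frames, policy):
        # Pages live in an OrderedDict with the next victim at the front.
        # For FIFO the order is arrival order; for LRU a hit moves the
        # page to the back, so the front is the least recently used page.
        mem = OrderedDict()
        faults = 0
        for page in refs:
            if page in mem:
                if policy == "lru":
                    mem.move_to_end(page)   # refresh recency on a hit
                continue
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)     # evict oldest / least recently used
            mem[page] = None
        return faults, list(mem)

    refs = [4, 2, 7, 2, 5, 3, 3, 2, 3, 1, 2, 6]
    for policy in ("fifo", "lru"):
        print(policy, simulate(refs, 4, policy))
    # fifo (8, [3, 1, 2, 6])
    # lru (7, [3, 1, 2, 6])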
What would the working set algorithm try to accomplish, and why? (Hint: These two cases represent extremes that could lead to problematic behavior.)
The working set algorithm will try to keep each process's working set in memory, i.e., the pages that the process needs in order to keep its page fault rate at a reasonable level.
(More explanation, not needed in an answer)
Keeping every process in memory does mean that there will always be a process to context switch to on a page fault. However, if the system is faulting heavily, the OS will spend much of its time servicing page faults (thrashing), leaving little CPU time for the applications themselves.
Allocating large amounts of memory to a small number of processes will reduce their page fault rates considerably, but it is also likely that those few processes will be given more physical memory than they need. We could instead use that physical memory for other processes, increasing the number of active processes in the system and reducing average execution time.
CPU utilization: 20%
Paging disk: 97.7% busy (demand on the device, not storage used)
Other I/O devices: 5%
For each of the following, say whether it will (or is likely to) improve CPU utilization. Briefly explain your answers.
Not likely to help, since the CPU is not the bottleneck.
Will not help. We are told the storage capacity of the paging disk is not an issue; using additional capacity (paging more data out to disk) is likely to hurt performance, not help it.
Will probably hurt. Memory in the system is overcommitted, so adding more programs will probably make the situation worse. (If, however, the additional jobs are CPU-intensive and require little memory, utilization may increase, since they can use the CPU cycles previously spent waiting on paging.)
Opposite of the case above—likely to help utilization by freeing up memory.
Most likely to improve utilization, by reducing the need to page data to disk.
May help marginally, by speeding reads and writes to the paging disk, though since disks are still slow, unlikely to help a great deal.
Could help, by bringing pages into memory shortly before they are needed and reducing the need to wait on the paging disk. However, it could also hurt if the pages brought in are not actually needed, thus increasing paging activity and memory pressure.
No large impact, though may hurt by increasing internal fragmentation and wasting memory (which is already in short supply).