CSE 120: Homework #3 Solutions

Spring 2023

  1. Nachos VM Worksheet

  2. When using physical addresses directly, there is no virtual to physical translation overhead. Assume it takes 100 nanoseconds to make a memory reference. If we used physical addresses directly, then all memory references will take 100 nanoseconds each.

    1. If we use virtual addresses with page tables to do the translation, then without a TLB we must first access the page table to get the appropriate page table entry (PTE) for translating an address, do the translation, and then make a memory reference. Assume it takes 100 nanoseconds to access the page table and do the translation. In this scheme, what is the effective memory reference time (time to access the page table + time to make the memory reference)?

      100 + 100 = 200 ns

    2. If we use a TLB, PTEs will be cached so that translation can happen as part of referencing memory. But, TLBs are very limited in size and cannot hold all PTEs, so not all memory references will hit in the TLB. Assume translation using the TLB adds no extra time and the TLB hit rate is 75%. What is the effective average memory reference time with this TLB?

      0.75*100 + 0.25*200 = 75 + 50 = 125 ns

    3. If we use a TLB that has a 99.5% hit rate, what is the effective average memory reference time now? (This hit rate is close to what TLBs typically achieve in practice.)

      0.995*100 + 0.005*200 = 99.5 + 1 = 100.5 ns
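
      As a quick sanity check on these three answers, here is a minimal C sketch that
      computes the effective access time for an arbitrary TLB hit rate, assuming the
      100 ns memory reference and 100 ns page-table access from the problem (the helper
      name eat_ns is just for illustration):

      #include <stdio.h>

      /* Effective access time with a TLB: a hit costs one memory reference;
       * a miss pays for a page-table access before the memory reference. */
      static double eat_ns(double hit_rate, double mem_ns, double table_ns)
      {
          return hit_rate * mem_ns + (1.0 - hit_rate) * (table_ns + mem_ns);
      }

      int main(void)
      {
          printf("no TLB: %.1f ns\n", eat_ns(0.0,   100.0, 100.0)); /* 200.0 */
          printf("75%%:    %.1f ns\n", eat_ns(0.75,  100.0, 100.0)); /* 125.0 */
          printf("99.5%%:  %.1f ns\n", eat_ns(0.995, 100.0, 100.0)); /* 100.5 */
          return 0;
      }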

  3. Consider a 32-bit system with 1K pages and simple single-level paging.
    1. With 1K pages, the offset is 10 bits. How many bits are in the virtual page number (VPN)?

      32 - 10 = 22 bits

    2. For a virtual address of 0xFFFF, what is the virtual page number?

      
      0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
      &
      0xFFFFFC00 = 1111 1111 1111 1111 1111 1100 0000 0000 (first 22 bits are page number)
      =
      0x0000FC00 = 0000 0000 0000 0000 1111 1100 0000 0000 = 0xFC00
      
      0xFC00 is the page address.  To get the page number, shift the offset bits off the page address:
      
      0x0000FC00 = 0000 0000 0000 0000 1111 1100 0000 0000
      shift right >> 10 bits:
      0x0000003F = 0000 0000 0000 0000 0000 0000 0011 1111
      
      0x3F is the virtual page number.
      
    3. For a virtual address of 0xFFFF, what is the value of the offset?

      
      0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
      &
      0x000003FF = 0000 0000 0000 0000 0000 0011 1111 1111 (last 10 bits are the offset)
      =
      0x000003FF = 0000 0000 0000 0000 0000 0011 1111 1111 = 0x3FF
      
      0x3FF is the offset.
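
      The same mask-and-shift steps, written as a small C sketch; it assumes the 10-bit
      offset that comes with 1K pages, as in the problem:

      #include <stdio.h>

      #define OFFSET_BITS 10                        /* 1K pages => 10-bit offset */
      #define OFFSET_MASK ((1u << OFFSET_BITS) - 1) /* 0x3FF */

      int main(void)
      {
          unsigned vaddr  = 0xFFFF;
          unsigned vpn    = vaddr >> OFFSET_BITS;    /* 0x3F  */
          unsigned offset = vaddr &  OFFSET_MASK;    /* 0x3FF */
          printf("VPN = 0x%X, offset = 0x%X\n", vpn, offset);
          return 0;
      }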
      
    4. What is the physical address of the base of physical page number 0x4?

      The offset is 10 bits.  To convert a page number to a page address,
      we shift the page number left by the offset (10 bits):
      
        0x4 << 10 bits
      = 0100 << 10 bits
      = 0001 0000 0000 0000
      = 0x1000
      
    5. If the virtual page for 0xFFFF is mapped to physical page number 0x4, what is the physical address corresponding to the virtual address 0xFFFF?

      The physical page for 0xFFFF is physical page 0x4.
      The physical address of physical page 0x4 is 0x1000.
      The offset of virtual address 0xFFFF is 0x3FF.
      So the physical address of virtual address 0xFFFF is (0x1000 | 0x3FF) = 0x13FF.
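
      Putting the pieces together, a minimal sketch of the full translation; the single
      hard-coded mapping from VPN 0x3F to PPN 0x4 stands in for a real page table lookup
      and is taken from this problem:

      #include <stdio.h>

      #define OFFSET_BITS 10
      #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

      int main(void)
      {
          unsigned vaddr  = 0xFFFF;
          unsigned vpn    = vaddr >> OFFSET_BITS;    /* 0x3F  */
          unsigned offset = vaddr &  OFFSET_MASK;    /* 0x3FF */

          /* Page table lookup, reduced to the one mapping given in the problem. */
          unsigned ppn = (vpn == 0x3F) ? 0x4 : 0;

          unsigned paddr = (ppn << OFFSET_BITS) | offset;   /* 0x1000 | 0x3FF */
          printf("paddr = 0x%X\n", paddr);                  /* 0x13FF */
          return 0;
      }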

  4. [Crowley] Suppose we have a computer system with a 44-bit virtual address, page size of 64K, and 4 bytes per page table entry.
    1. How many pages are in the virtual address space? (Express using exponentiation.)

      The virtual address space is 2^44 bytes in size. Thus, there are 2^44 / 2^16 = 2^28 pages.

    2. Every process has its own page table, and every page table has a root page and many secondary pages. Suppose we arrange for each kind of page table page (root and secondary) to have the size of a single page frame. How will the bits of the address be divided up among the offset, index into a secondary page, and index into the root page?

      Each page table page can hold 2^16 / 4, or 2^14, page table entries. Thus, for each page table level, we will use 14 bits of the virtual address to index into that page table page. We will use 16 bits to address a specific byte within the data page. Conveniently, this all adds up to 44 bits, so the breakdown is: 14 bits for the top-level page table, 14 bits for the next page table, and 16 bits for the offset.

    3. Suppose we have a 4 GB program such that the entire program and all necessary page tables (using two-level pages from above) are in memory. (Note: It will be a lot of memory.) How much memory, in page frames, is used by the program, including its page tables?

      4 GB translates to 2^32 / 2^16 = 2^16 pages of memory. So, we will need 2^16 page table entries, which fit into 2^16 / 2^14 = 4 secondary page table pages. We also need to add in one additional page for the top level of the page table tree. Thus, we need 2^16 + 5 page frames to store all data and page table pages.
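
      A short C sketch that reproduces these counts from the problem's parameters
      (44-bit virtual addresses, 64K pages, 4-byte PTEs, and the 4 GB program from
      part 3):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          const int      va_bits   = 44;
          const uint64_t page_size = 64 * 1024;   /* 64K */
          const uint64_t pte_size  = 4;

          uint64_t pages       = (1ULL << va_bits) / page_size;   /* 2^28 */
          uint64_t ptes_per_pt = page_size / pte_size;             /* 2^14 */

          /* Part 3: a 4 GB program with everything resident. */
          uint64_t prog_pages   = (4ULL << 30) / page_size;        /* 2^16 */
          uint64_t second_level = prog_pages / ptes_per_pt;        /* 4    */
          uint64_t total_frames = prog_pages + second_level + 1;   /* +1 root */

          printf("pages in address space   = %llu (2^28)\n", (unsigned long long)pages);
          printf("PTEs per page table page = %llu (2^14)\n", (unsigned long long)ptes_per_pt);
          printf("frames for 4 GB program  = %llu (2^16 + 5)\n", (unsigned long long)total_frames);
          return 0;
      }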

  5. [Crowley] Suppose we have an average of one page fault every 20,000,000 instructions, a normal instruction takes 2 nanoseconds, and a page fault causes the instruction to take an additional 10 milliseconds. What is the average instruction time, taking page faults into account? Redo the calculation assuming that a normal instruction takes 1 nanosecond instead of 2 nanoseconds.

    Page faults incur an additional cost of 10 ms for every 20,000,000 instructions, which is an average additional cost of .5 ns per instruction. So, if instructions take 2 nanoseconds, the average instruction time is 2.5 ns, and if instructions take 1 nanosecond, the average instruction time is 1.5 ns.
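
    The same averaging as a short C sketch, using the fault rate and 10 ms fault cost
    from the problem (the helper name avg_instr_ns is just for illustration):

    #include <stdio.h>

    /* Average instruction time when one in every `per_fault` instructions
     * pays an extra `fault_ns` of page-fault handling. */
    static double avg_instr_ns(double base_ns, double per_fault, double fault_ns)
    {
        return base_ns + fault_ns / per_fault;
    }

    int main(void)
    {
        double fault_ns = 10e6;                                  /* 10 ms */
        printf("%.1f ns\n", avg_instr_ns(2.0, 20e6, fault_ns));  /* 2.5 */
        printf("%.1f ns\n", avg_instr_ns(1.0, 20e6, fault_ns));  /* 1.5 */
        return 0;
    }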

  6. [Tanenbaum] If FIFO page replacement is used with four page frames and eight pages (numbered 0–7), how many page faults will occur with the reference pattern 427253323126 if the four frames are initially empty? Which pages are in memory at the end of the references? Repeat this problem for LRU.

    [FIFO]  ('+' marks a fault, '<- p' the page loaded, '-> p' the page evicted; lines without '+' are hits on the page in parentheses)
    
    + 4 : <- 4
    + 42 : <- 2
    + 427 : <- 7
      427 : (2)
    + 4275 : <- 5
    + 2753 : <- 3, -> 4
      2753 : (3)
      2753 : (2)
      2753 : (3)
    + 7531 : <- 1, -> 2
    + 5312 : <- 2, -> 7
    + 3126 : <- 6, -> 5
    
    = 8 page faults, with pages 3, 1, 2, 6 in memory.
    
    [LRU]
    
    + 4 : <- 4
    + 42 : <- 2
    + 427 : <- 7
      472 : (2 moved to end on access)
    + 4725 : <- 5
    + 7253 : <- 3, -> 4
      7253 : (3)
      7532 : (2 moved to end on access)
      7523 : (3 moved to end on access)
    + 5231 : <- 1, -> 7
      5312 : (2 moved to end on access)
    + 3126 : <- 6, -> 5
    
    = 7 page faults, with pages 3, 1, 2, 6 in memory.
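
    Both traces can be checked with a compact C sketch. FIFO evicts the oldest resident
    page; LRU additionally moves a page to the most-recently-used end on every hit, so
    the frame array is kept ordered from oldest/least-recent to newest/most-recent (the
    helper name simulate is just for illustration):

    #include <stdio.h>
    #include <string.h>

    #define FRAMES 4

    /* Replay the reference string; if `lru` is set, a hit moves the page to
     * the most-recently-used end, otherwise arrival order is kept (FIFO). */
    static int simulate(const int *refs, int n, int lru)
    {
        int frames[FRAMES], used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int page = refs[i], pos = -1;

            for (int j = 0; j < used; j++)
                if (frames[j] == page) { pos = j; break; }

            if (pos < 0) {                       /* miss: page fault */
                faults++;
                if (used == FRAMES) {            /* evict frames[0] (oldest / LRU) */
                    memmove(frames, frames + 1, (FRAMES - 1) * sizeof frames[0]);
                    used--;
                }
                frames[used++] = page;           /* newest / most recently used */
            } else if (lru) {                    /* hit: refresh recency for LRU */
                memmove(frames + pos, frames + pos + 1,
                        (used - 1 - pos) * sizeof frames[0]);
                frames[used - 1] = page;
            }
        }

        printf("%s: %d faults, in memory:", lru ? "LRU " : "FIFO", faults);
        for (int j = 0; j < used; j++)
            printf(" %d", frames[j]);
        printf("\n");
        return faults;
    }

    int main(void)
    {
        const int refs[] = { 4, 2, 7, 2, 5, 3, 3, 2, 3, 1, 2, 6 };
        simulate(refs, 12, 0);   /* FIFO: 8 faults, pages 3 1 2 6 */
        simulate(refs, 12, 1);   /* LRU : 7 faults, pages 3 1 2 6 */
        return 0;
    }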
    	    

  7. [Silberschatz] Consider a demand-paging system with the following time-measured utilizations:

        CPU utilization: 20%
        Paging disk: 97.7% (demand, not storage)
        Other I/O devices: 5%

    For each of the following, say whether it will (or is likely to) improve CPU utilization. Briefly explain your answers.

       a. Install a faster CPU

    No. The CPU is already underutilized, and a faster CPU would be even more underutilized.

       b. Install a bigger paging disk

    No. There is no indication that the disk has insufficient storage space.

       c. Increase the degree of multiprogramming

    No. Running more programs at once will likely increase contention for memory and make the paging problem worse.

       d. Decrease the degree of multiprogramming

    Yes. This would decrease contention for memory, reduce page faults, and allow programs to spend less time waiting for swapping to complete and more time utilizing the CPU.

       e. Install more main memory

    Yes. This would also decrease contention for memory.

       f. Install a faster hard disk, or multiple controllers with multiple hard disks

    Yes. This would allow pages to be swapped in and out more quickly, causing programs to spend less time waiting for swapping to complete.

       g. Add prepaging to the page-fetch algorithms

    No. This would not decrease the amount of paging required and might increase it if the prefetching algorithm is inaccurate.

       h. Increase the page size

    Unlikely. This would typically increase the amount of memory swapped in and out.

    For (g), the page-fetch algorithm is the page replacement algorithm. Prepaging is an optimization that tries to anticipate near-future page accesses: pages that are predicted to be accessed soon are loaded from disk before they are actually referenced (effectively prefetching them).