4 thoughts on “Optimal Oracle Configuration for Efficient Table Scanning (Part Two)”

  1. James,

    very interesting – and I do think that the hand-written pictures make everything stick much better in my mind … I wonder why.

    I have a naive question that I have always been curious about.

    Even in the good old days of 9i, do you happen to know whether there was any good reason that prevented Oracle, after noticing a couple of cached blocks inside a big DB_FILE_MULTIBLOCK_READ_COUNT-sized (say, 128-block) intended multiblock read, from issuing the 128-block read anyway and then simply discarding the couple of blocks already cached?

    Thanks in advance 🙂

    1. Hi Alberto,

      Thanks for the feedback, I’m glad that you enjoyed my experiment in illustration approach!
      Regarding the handling of partially cached blocks, I suspect that this did not happen because of the layered architecture of the Oracle kernel, combined with legacy code. It was probably a complex job to undo much of the logic in the ‘read through cache’ codepath to support this, rather than to create a new (and simple) codepath that just reads the blocks into the PGA. But it’s just a guess 🙂
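
      The trade-off Alberto raises can be sketched in a few lines. This is purely illustrative Python, not Oracle kernel code: the block numbers, the cached set, and both function names are hypothetical, and the sketch only compares how many I/O calls each strategy would issue.

      ```python
      def split_around_cached(start, count, cached):
          """Split the intended multiblock read [start, start+count) into
          contiguous sub-reads that skip blocks already in the cache
          (the behaviour discussed above)."""
          runs, run_start = [], None
          for b in range(start, start + count):
              if b in cached:
                  if run_start is not None:
                      runs.append((run_start, b - run_start))
                      run_start = None
              elif run_start is None:
                  run_start = b
          if run_start is not None:
              runs.append((run_start, start + count - run_start))
          return runs

      def read_and_discard(start, count, cached):
          """Alberto's alternative: issue one big read anyway, then keep
          only the blocks that were not already cached -- one I/O call."""
          wanted = [b for b in range(start, start + count) if b not in cached]
          return [(start, count)], wanted

      # Two cached blocks in the middle of a 128-block window:
      # splitting produces two separate reads, read-and-discard just one.
      reads = split_around_cached(0, 128, {60, 61})
      print(len(reads))  # → 2
      io_calls, _ = read_and_discard(0, 128, {60, 61})
      print(len(io_calls))  # → 1
      ```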

