Monday, May 21, 2018

The Data Block, the center of the cloud


From my point of view, the data block is really the magic of Oracle.  Without this basic part of the system, all the super cool stuff within Oracle just wouldn't be possible, or would be much more complex.  An example of this is row locking.

The locking model in Oracle is, at its core, pretty simple.  You lock rows while you are changing them, whether that change is an insert, update, or delete.  And when you lock a row, that is exactly what you lock: just that row.  If you lock every row in the table except one, you have a lock on every row except one.  Oracle never escalates a lock; there is no "block lock" in Oracle.
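To make that concrete, here's a minimal sketch of row-level locking in action, using a made-up EMP-style table (the names and values are just for illustration):

-- Session 1: lock one row by changing it (no commit yet)
UPDATE emp SET sal = sal + 100 WHERE empno = 7369;

-- Session 2: a different row, quite possibly in the same block, is not blocked at all
UPDATE emp SET sal = sal + 100 WHERE empno = 7499;

-- Session 2: but the row session 1 changed is protected by that one row lock
-- (this statement waits until session 1 commits or rolls back)
UPDATE emp SET sal = sal + 100 WHERE empno = 7369;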

There is also block pinning, which is a latch-like hold on the block while you're in it.  A pin is held for the very brief moment of time you're in the block doing something (even a select).  It protects the block while changes happen to it, and it is released as soon as you're done in the block.  Any rows locked while you were there stay locked after the pin is released.  This pinning has to happen even during a select, because even a select might do some maintenance on the block (cleaning out old transaction entries, for example).
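If you're curious, the session statistics give a rough sense of this pin activity; the counter names below should exist on recent versions, but check v$statname on yours:

-- How often this session found the buffer it needed already pinned vs. not
SELECT n.name, s.value
FROM   v$statname n
       JOIN v$mystat s ON s.statistic# = n.statistic#
WHERE  n.name IN ('buffer is pinned count', 'buffer is not pinned count');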

So how does Oracle do this?  The locking is built into the block and the rows.  In the header of the block are ITL (Interested Transaction List) entries.  When we lock one or more rows in the block, we take out an ITL entry.  Within each row is a lock byte; it's the second byte of each row.  This lock byte points to the ITL entry that holds the lock on that row, or that held it at some point.  It may point to an ITL entry for a completed (committed) transaction.

The ITL entry in turn points to the rollback (undo) segment header that holds the transaction information.  In this way each row can be locked by a different transaction, within the same block and across many blocks, with no additional overhead.  This diagram gives the basic overall view of what happens when data is locked.
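While a transaction is open, you can see that linkage from the SQL side too: v$transaction shows the undo segment number, slot, and sequence that make up the transaction ID an ITL entry points at.  A rough sketch (run it in a session that has an uncommitted change, as a user with access to the v$ views):

-- The undo segment information for this session's open transaction
SELECT t.xidusn  AS undo_segment#,
       t.xidslot AS slot,
       t.xidsqn  AS sequence#,
       t.ubafil  AS undo_file,
       t.ubablk  AS undo_block,
       t.used_ublk
FROM   v$transaction t
       JOIN v$session s ON s.taddr = t.addr
WHERE  s.sid = SYS_CONTEXT('USERENV', 'SID');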


Of course there are limits.  A block can hold at most 255 ITL entries, though it's highly unlikely you will ever get there.  Each one is about 23 bytes and takes up space in the block header.  For a table there is 1 ITL entry in a block when it's first created, and that can grow up to 255.  There are two parameters you can set, INITRANS and MAXTRANS: INITRANS is how many entries to allocate in each block when it's first added to the table, and MAXTRANS is the maximum it will grow to.  Since version 10, MAXTRANS is ignored; even if you set it to, say, 100, it will be set to the maximum of 255 anyway.
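Setting it is straightforward; here's a quick sketch with a throwaway table (ITL_DEMO is just a name for illustration):

-- Pre-allocate 10 ITL entries in every block of this table
CREATE TABLE itl_demo
( id  NUMBER,
  val VARCHAR2(30)
) INITRANS 10;

-- Confirm the settings; MAX_TRANS will show 255 on current versions regardless
SELECT table_name, ini_trans, max_trans
FROM   user_tables
WHERE  table_name = 'ITL_DEMO';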


Since you can only have so many ITL entries in a block, it's possible that someone will want to lock rows in a particular block when there aren't enough ITL entries and there isn't enough free space to allocate a new one.  When this happens you'll see the wait event "enq: TX - allocate ITL entry".  If this is happening a lot for a given table, consider recreating the table with a higher INITRANS.  Altering the table to a higher INITRANS only affects new blocks added to the table; you need to recreate the table with the change to have it applied to all of its blocks.
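To find out which segments are actually hitting this, v$segment_statistics keeps an "ITL waits" counter, and an ALTER TABLE ... MOVE is one way to rebuild the table so every block picks up the new setting (the table and index names below are placeholders):

-- Segments that have waited on ITL entries since instance startup
SELECT owner, object_name, object_type, value AS itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER  BY value DESC;

-- Rebuild the table so all blocks, not just new ones, get the higher INITRANS
ALTER TABLE itl_demo MOVE INITRANS 40;

-- The MOVE leaves indexes unusable, so rebuild them afterwards
ALTER INDEX itl_demo_pk REBUILD;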


There is no easy way to see how many ITL entries there are in a block.  You have to dump the block and look at the block header information to see the ITL entries.  They aren't hard to spot; they look something like this:
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0001.01f.00007821  0x01400236.1fdc.10  ----    3  fsc 0x0000.00000000
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

The block dump command looks like this (this example just dumps block 12 of datafile 6):
ALTER SYSTEM DUMP DATAFILE 6 BLOCK MIN 12 BLOCK MAX 12;
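The dump output lands in your session's trace file, not on screen.  If you want the block that holds a particular row, dbms_rowid can tell you what to plug into the command (a sketch against the hypothetical ITL_DEMO table from earlier):

-- Find the file and block for one row, then use those numbers in the DUMP command
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   itl_demo
WHERE  id = 1;
-- Note: this is the relative file number; if it differs from the absolute one
-- on your database, use dbms_rowid.rowid_to_absolute_fno instead.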

Doing this across a whole table to get an idea of how many ITL entries there are is rather tedious, and in the end likely more work than it's worth.  A better approach is to get an idea of how many independent transactions hit a hot table at once, then set INITRANS to that plus some wiggle room.  So if you know that about 30 transactions are likely going on in the table at once, a setting of 35-40 is likely good.
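One rough way to get that number is to sample, at a busy time, how many distinct transactions currently hold locks on the table (the owner and table name below are placeholders):

-- Count distinct transactions holding locks on one table right now
SELECT COUNT(DISTINCT lo.xidusn || '.' || lo.xidslot || '.' || lo.xidsqn) AS active_tx
FROM   v$locked_object lo
       JOIN dba_objects o ON o.object_id = lo.object_id
WHERE  o.owner = 'SCOTT'
AND    o.object_name = 'ITL_DEMO';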

Just keep in mind that each entry is permanently there and never goes away, even when dynamically allocated.  It's around 23 bytes of space, and that is space not available in the block for data.  This means raising INITRANS to a higher value can make the table bigger; at 40 entries, for example, that's over 900 bytes per block, roughly 11% of a default 8K block.  This space trade-off could well be worth it if this wait event is a serious performance problem.

To wrap up, this is the magic of Oracle.  This basic block and row structure was put in place way back around version 6, and it really hasn't changed much since then.  I tip my hat to the folks who, all those years ago, made all this possible.  In this post I focused on the row locking inside the block, which is a huge thing but not the only thing that makes Oracle's block structure so powerful.  In the future I plan to have posts on other parts.