Thursday, July 13, 2017

Going to 12.2 in the cloud or not



We all have to upgrade to the latest and greatest version of Oracle at some point, and that really is a good thing, as the database does get better with each release.  But every time we do, there are little (or big) gotchas along the way.

In 12.2 I’ve found two things.  First, on Windows (yes, I do install Oracle on my Windows laptop), I’d recommend NOT using the anonymous user as the owner of the software.  This is what Oracle recommends, but in my experience it generates a heap of errors in the alert.log and made my laptop unbootable.  Yeah, that was fun.

The errors looked like this, over and over again.  They were associated with the monitoring background processes (MMON and MMNL in particular).  The database did work; log switches worked fine, as did DML activity:

ORA-27037: unable to obtain file status,
OSD-04011: GetFileInformationByHandle() failure, unable to obtain file info

To solve this I reinstalled 12.1 and then upgraded to 12.2, using the oracle user that owned the 12.1 software to install the 12.2 software.  The upgrade took quite a bit longer than just installing 12.2 alone, but I ended up with a database that doesn’t generate a ton of errors, and I can reboot my laptop too!!  Win!

But wait there’s more! 

Another issue I’ve found is that with a fresh install of 12.2 I can’t get STATISTICS_LEVEL to work properly.  With a new 12.2 install, once I set STATISTICS_LEVEL to ALL it seems to stay that way, even when it “looks” like it’s been changed back.  I see this on a Linux cloud instance where 12.2 was a fresh install (which doesn’t have the errors I saw in my Windows install).  The same thing happened on my Windows laptop when I did the fresh install, but it doesn’t happen after the upgrade.  Very odd, to be sure.

As a quick example:

SQL>
SQL>select sql_trace, count(*) from v$session group by sql_trace;

SQL_TRAC        COUNT(*)
-------- ---------------
DISABLED              42
SQL>
SQL>select count(*) from DBA_ENABLED_TRACES;

       COUNT(*)
---------------
              0
SQL>
SQL>show parameter statistics_level

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
statistics_level                     string      TYPICAL
SQL>
SQL>select * from dual;

D
-
X
SQL>select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual

Plan hash value: 272002086

------------------------------------------------------------------------------------
| Id | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time    | Buffers |
------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |      1 |        |      1 | 00:00:00.01 |       3 |
|  1 | TABLE ACCESS FULL | DUAL |      1 |      1 |      1 | 00:00:00.01 |       3 |
------------------------------------------------------------------------------------

The statistics/tracing level sure appears to be TYPICAL and no trace appears to be turned on, yet the plan still shows all the stats you’d see if it were set to ALL.

What you should get for this plan is something like the following; note the warning at the bottom:

SQL> select * from dual;

D
-
X

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual

Plan hash value: 272002086

------------------------------------------
| Id | Operation         | Name | E-Rows |
------------------------------------------
|  0 | SELECT STATEMENT  |      |        |
|  1 | TABLE ACCESS FULL | DUAL |      1 |
------------------------------------------

Note
-----
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
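
If you don’t want to touch STATISTICS_LEVEL at all, the hint route mentioned in that warning works per statement.  A quick sketch (run in the same session; your SQL_ID and stats will of course differ):

SQL> select /*+ gather_plan_statistics */ * from dual;
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

With the hint in place, the Starts, A-Rows, A-Time and Buffers columns should show up in the plan output without changing the parameter at all.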

I have created an SR with Oracle Support on both these issues and they have their top people working on them. Top people. 

If anyone else can confirm or deny my findings I’d sure like to hear from you. Have a great day!


Wednesday, June 14, 2017

Using Trace Analyzer in 12.2 in the Cloud or not

NOTE: After some more testing, this may be the end of the line for Trace Analyzer.  The "SQL Genealogy" section in particular appears to have statements repeated many times.  This tool hasn't been updated in a long time; I'm in contact with the folks who own the code, so we'll see whether it gets updated or not.


It’s an old tool but still a useful one.  If you used Oracle’s Trace Analyzer tool to analyze 10046 trace files in the past, you’ll need to adjust it a bit to get it to work in 12.2.  You can download it from My Oracle Support, Doc ID 224270.1.

First off, the trace locations are a bit off; I actually first noticed this in 11.  Simply point one of the directories used by the Trace Analyzer tool at the database’s current trace directory.

You need a SYSDBA connection to do this first part.
To get your current trace location, use this query:
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
Then do the following:
CREATE OR REPLACE DIRECTORY TRCA$INPUT1 AS '<your trace directory>';
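
To double-check that the directory now points at the right place, DBA_DIRECTORIES shows the current mapping:

SELECT DIRECTORY_PATH FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'TRCA$INPUT1';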

Now connect as the Trace Analyzer user; this should be TRCANLZR, with whatever password you set while installing Trace Analyzer.  Then make these changes:

Set the length of the PARTITION_NAME column to 60 in these tables:
TRCA$_SEGMENTS
TRCA$_EXTENTS
TRCA$_TOOL_WAIT_SEGMENT  
TRCA$_GROUP_WAIT_SEGMENT
TRCA$_GROUP_EXEC_WAIT_SEGMENT
TRCA$_HOT_BLOCK_SEGMENT

Set the length of the SUBOBJECT_NAME column to 60 in these tables:
TRCA$_OBJECTS        
TRCA$_OBJECTS$


Example:
ALTER TABLE TRCA$_SEGMENTS MODIFY PARTITION_NAME VARCHAR2(60);
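
If you’d rather do them all in one pass, the full set of changes would look like this (a sketch built from the lists above; double-check the table names against your install, since they can vary between Trace Analyzer versions):

ALTER TABLE TRCA$_SEGMENTS MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_EXTENTS MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_TOOL_WAIT_SEGMENT MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_GROUP_WAIT_SEGMENT MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_GROUP_EXEC_WAIT_SEGMENT MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_HOT_BLOCK_SEGMENT MODIFY PARTITION_NAME VARCHAR2(60);
ALTER TABLE TRCA$_OBJECTS MODIFY SUBOBJECT_NAME VARCHAR2(60);
ALTER TABLE TRCA$_OBJECTS$ MODIFY SUBOBJECT_NAME VARCHAR2(60);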
Now you should be all set to use it!
 

Friday, May 26, 2017

Ready for some automatic big table caching?




A new feature in 12.1.0.2 is automatic big table caching.  This is pretty cool.  The key parameter for it is DB_BIG_TABLE_CACHE_PERCENT_TARGET, which more or less sets aside part of the buffer cache to be used for full table scans.  It’s not like the KEEP or RECYCLE pools, which have objects dedicated to them.  And it’s not really setting up a set of buffers that can’t be used for anything else.  The idea is that up to this percentage of the buffer cache could be used for big table full table scans.

Now a little history on full table scans in Oracle.  Prior to 11, full table scans (of any table) always went through the buffer cache.  However, if the table was of significant size, Oracle wouldn’t even try to pull the entire table into the cache during the scan; it would, in effect, cycle through a set of buffers for the whole table.  In 11, Oracle started to move this work out of the SGA (System Global Area) and into the PGA (Program Global Area).  The basic idea was that these blocks were highly unlikely to be shared, so why even try; let’s just do that work in an area private to the user.  The same thing basically happens: the table is cycled through a set of buffers, just in PGA space.
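
If you’re curious which way your scans are going, the system-wide wait events give a rough picture: ‘db file scattered read’ is the buffered multiblock read, and ‘direct path read’ is the PGA-side read.  A quick sanity-check query:

SELECT EVENT, TOTAL_WAITS
FROM V$SYSTEM_EVENT
WHERE EVENT IN ('db file scattered read', 'direct path read');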

But what about tables that are full scanned often?  Even if the entire table can’t be cached, caching a good portion of it would likely be faster.  So, in a sorta-kinda way, Oracle is going back to the old way of doing full scans, but with a twist.  The twist is deciding which tables to keep in the cache.  A new characteristic, temperature, is now tracked for tables in the buffer cache.

Temperature is tracked at the object level, not the block level.  Oracle uses the good old LRU (Least Recently Used) algorithm at the block level to decide when to age out blocks.  But with automatic big table caching things change: now, the more full scans happen on a table, the hotter the table becomes and the more likely it is to stay in the cache.

So here’s a simple test to show it in action.  Two scripts (ahwm.sql and bigtab_cnt.sql) run queries on two different tables, both big enough to count as big tables for my instance.  I also have a script (hbtcache.sql) to show some stats; the code for this script is at the bottom of this post.

SQL>
SQL> alter system set db_big_table_cache_percent_target=40;

System altered.

SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            0                0            1000

********************************************************
**** Object level automatic big table caching stats ****

no rows selected

SQL>
SQL>
SQL> set termout off
SQL> @ahwm
SQL> set termout on
SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            1           43,787            1000

********************************************************
**** Object level automatic big table caching stats ****

  DATAOBJ# OBJECT_NAME                    POLICY      TEMPERATURE SIZE_IN_BLKS CACHED_IN_MEM
---------- ------------------------------ ---------- ------------ ------------ -------------
     73914 AHWM                           MEM_ONLY          1,000       43,787        43,787

SQL> set termout off
SQL> @bigtab_cnt
SQL> set termout on
SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            2           86,416            1000

********************************************************
**** Object level automatic big table caching stats ****

  DATAOBJ# OBJECT_NAME                    POLICY      TEMPERATURE SIZE_IN_BLKS CACHED_IN_MEM
---------- ------------------------------ ---------- ------------ ------------ -------------
     73914 AHWM                           MEM_ONLY          1,000       43,787        43,787
     73488 BIG_TAB                        MEM_PART          1,000       43,161        42,629

SQL>

What we see is that at 40% of the cache, Oracle was able to put the entire AHWM table in the cache but couldn’t fit all of BIG_TAB.  It did put in as much of BIG_TAB as it could.  So later scans on AHWM will be entirely from the buffer cache (no physical reads), but scans on BIG_TAB will still incur some physical reads.

A couple of things: it’s not really clear to me at this point how the temperature goes up.  Clearly the more times you access a table the hotter it gets, but what 1,000 means is not clear, and I suppose it’s not critical for us to know exactly how Oracle measures this.


Also, MIN_CACHED_TEMP doesn’t appear to be modifiable.  I suspect we will be able to modify it at some point.


Over all this looks like a cool thing, or is it a hot thing?  Hummm…


Code for hbtcache.sql  

rem hbtcache.sql
rem MAY2017 RVD
rem shows stats on automatic big table caching
rem
column memory_buf_alloc format 999,999,999
column object_name      format a30 wrap
column temperature      format 999,999,999
column size_in_blks     format 999,999,999
column cached_in_mem    format 999,999,999
set lines 200
prompt ********************************************************
prompt ****** Over all automatic big table caching stats ******
select bt_cache_target, object_count, memory_buf_alloc, min_cached_temp
from v$bt_scan_cache
/

prompt ********************************************************
prompt **** Object level automatic big table caching stats ****
select dataobj#, object_name, policy, temperature, size_in_blks, cached_in_mem
from v$bt_scan_obj_temps btsot, dba_objects dbaobj
where btsot.dataobj#=dbaobj.object_id(+)
order by temperature
/