Way back in 2018 (seems like a lifetime ago, doesn't it?) I did a post about retrieving one row from a table; you can see it HERE. The other day I was looking at my blog, saw this post, and wondered: what would this test be like on Exadata? And thanks to Oracle having the super cool Always Free option in the cloud, I can easily test it.
One thing I found out quickly is that in 21c the statistic numbers have changed. (This may have changed in 18; I'm not sure.) The statistic number for "consistent gets" used to be 139 when I did this in my 12c and earlier databases. It's now 209. There are a bunch of new statistics in the consistent gets group, and I believe the renumbering was done to keep them all together: they now occupy numbers 209 to 222, where they used to be 139 to 146. Several of the new ones are PMEM statistics for persistent memory. I've not dived into those just yet to see all the details, but the names give a decent clue about what they track.
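By the way, you don't have to hard-code the statistic number at all; you can look it up by name in v$statname, and a quick query like this shows the current number on whatever version you're running:

-- find the current statistic number for consistent gets
select statistic#, name
  from v$statname
 where name = 'consistent gets';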
Back to my test. In the original post I showed that to get a single row from a table, it's best to use an index. A unique index is even better, and the best is a unique covering index.
But is that still true in Exadata?
Yes, it is.
Here is the run output. As I did in the original post, I ran this several times and the results are consistent. The test code, which is at the bottom of this post, creates a small table and then selects the 10 rows one at a time 100,000 times. I time it in both elapsed time and CPU time. Also, I capture how many consistent gets (LIOs) were done.
......................................................
selecting all 10 rows without an index 100000 times
......................................................
In hundredths of a second
**** TIME - 4419
**** CPU - 4214
**** LIO - 7001404
......................................................
selecting all 10 rows with an index 100000 times
......................................................
In hundredths of a second
**** TIME - 968
**** CPU - 932
**** LIO - 2000003
......................................................
selecting all 10 rows with a unique index 100000 times
......................................................
In hundredths of a second
**** TIME - 855
**** CPU - 831
**** LIO - 2000004
......................................................
selecting all 10 rows with a unique covering index 100000 times
......................................................
In hundredths of a second
**** TIME - 824
**** CPU - 816
**** LIO - 1000002
The output didn't surprise me; I figured it would be the same on Exadata as it was on a non-Exadata configuration, and it sure is good to have it proven out in a test.
The key takeaway is that if all you want from a table is one row, the fastest way is to use a unique index. If available, a covering index (one that has all the columns you want) is even better, in an Exadata configuration or non-Exadata. The LIO numbers show why: the indexed runs cost about 2 consistent gets per fetch (one on the index, one on the table block), while the covering index costs just 1, because the table block never has to be visited.
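If you want to see the access path for yourself, a quick plan check like this (a sketch; exact plan output varies by version) should show an INDEX UNIQUE SCAN with no table access once the covering index is in place:

-- verify the access path for the single-row select
explain plan for
select data_col from iamsmall where id_col = 1;

select * from table(dbms_xplan.display);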
What this doesn't mean is that an index is a good thing in Exadata when you are joining a significant number of rows from this table to another on the primary key. In that case a Smart Scan during a full table scan is very likely to be better. I talk about this in a post HERE.
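If you're curious whether a smart scan actually kicked in for your session, a check along these lines does the job; I'm using two of the cell offload statistics here as an example, but browse v$statname for the full list on your system:

-- session-level smart scan activity (Exadata only)
select n.name, s.value
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name in ('cell physical IO bytes eligible for predicate offload',
                  'cell physical IO interconnect bytes returned by smart scan');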
Here is the code I ran to do the above test.
rem testing to get one row out of a table.
rem SEP2018 RVD
rem JUL2021 RVD updated for 21c
rem
set echo on feedback on serveroutput on
drop table iamsmall purge;
create table iamsmall (id_col number, data_col varchar2(20));
begin
  for x in 1..10 loop
    insert into iamsmall values (x, dbms_random.string('A',20));
  end loop;
end;
/
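rem note: on a real test I might gather stats here so the optimizer has
rem fresh numbers; something like:
rem exec dbms_stats.gather_table_stats(user, 'IAMSMALL')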
declare
  x1           varchar2(20);
  l_start_time pls_integer;
  l_start_cpu  pls_integer;
  l_start_cr   pls_integer := 0;
  l_end_cr     pls_integer := 0;
begin
  --the consistent gets statistic is #139 in 12c and earlier, #209 in 21c
  --select value into l_start_cr from v$mystat where STATISTIC# = 139;
  select value into l_start_cr from v$mystat where STATISTIC# = 209;
  l_start_time := DBMS_UTILITY.GET_TIME;
  l_start_cpu  := DBMS_UTILITY.GET_CPU_TIME;
  for ii in 1 .. 100000 loop
    for i in 1 .. 10 loop
      select data_col into x1 from iamsmall where id_col = i;
    end loop;
  end loop;
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('selecting all 10 rows without an index 100000 times');
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('In hundredths of a second');
  DBMS_OUTPUT.put_line ('**** TIME - '||to_char(DBMS_UTILITY.GET_TIME - l_start_time));
  DBMS_OUTPUT.put_line ('**** CPU - '||to_char(DBMS_UTILITY.GET_CPU_TIME - l_start_cpu));
  select value into l_end_cr from v$mystat where STATISTIC# = 209;
  DBMS_OUTPUT.put_line ('**** LIO - '||to_char(l_end_cr - l_start_cr));
end;
/
rem create a nonunique index on id_col
create index ias_id on iamsmall(id_col);
declare
  x1           varchar2(20);
  l_start_time pls_integer;
  l_start_cpu  pls_integer;
  l_start_cr   pls_integer := 0;
  l_end_cr     pls_integer := 0;
begin
  select value into l_start_cr from v$mystat where STATISTIC# = 209;
  l_start_time := DBMS_UTILITY.GET_TIME;
  l_start_cpu  := DBMS_UTILITY.GET_CPU_TIME;
  for ii in 1 .. 100000 loop
    for i in 1 .. 10 loop
      select data_col into x1 from iamsmall where id_col = i;
    end loop;
  end loop;
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('selecting all 10 rows with an index 100000 times');
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('In hundredths of a second');
  DBMS_OUTPUT.put_line ('**** TIME - '||to_char(DBMS_UTILITY.GET_TIME - l_start_time));
  DBMS_OUTPUT.put_line ('**** CPU - '||to_char(DBMS_UTILITY.GET_CPU_TIME - l_start_cpu));
  select value into l_end_cr from v$mystat where STATISTIC# = 209;
  DBMS_OUTPUT.put_line ('**** LIO - '||to_char(l_end_cr - l_start_cr));
end;
/
rem create a unique index on id_col
drop index ias_id;
create unique index ias_id on iamsmall(id_col);
declare
  x1           varchar2(20);
  l_start_time pls_integer;
  l_start_cpu  pls_integer;
  l_start_cr   pls_integer := 0;
  l_end_cr     pls_integer := 0;
begin
  select value into l_start_cr from v$mystat where STATISTIC# = 209;
  l_start_time := DBMS_UTILITY.GET_TIME;
  l_start_cpu  := DBMS_UTILITY.GET_CPU_TIME;
  for ii in 1 .. 100000 loop
    for i in 1 .. 10 loop
      select data_col into x1 from iamsmall where id_col = i;
    end loop;
  end loop;
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('selecting all 10 rows with a unique index 100000 times');
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('In hundredths of a second');
  DBMS_OUTPUT.put_line ('**** TIME - '||to_char(DBMS_UTILITY.GET_TIME - l_start_time));
  DBMS_OUTPUT.put_line ('**** CPU - '||to_char(DBMS_UTILITY.GET_CPU_TIME - l_start_cpu));
  select value into l_end_cr from v$mystat where STATISTIC# = 209;
  DBMS_OUTPUT.put_line ('**** LIO - '||to_char(l_end_cr - l_start_cr));
end;
/
rem create a unique covering index on id_col,data_col
drop index ias_id;
create unique index ias_id on iamsmall(id_col, data_col);
declare
  x1           varchar2(20);
  l_start_time pls_integer;
  l_start_cpu  pls_integer;
  l_start_cr   pls_integer := 0;
  l_end_cr     pls_integer := 0;
begin
  select value into l_start_cr from v$mystat where STATISTIC# = 209;
  l_start_time := DBMS_UTILITY.GET_TIME;
  l_start_cpu  := DBMS_UTILITY.GET_CPU_TIME;
  for ii in 1 .. 100000 loop
    for i in 1 .. 10 loop
      select data_col into x1 from iamsmall where id_col = i;
    end loop;
  end loop;
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('selecting all 10 rows with a unique covering index 100000 times');
  DBMS_OUTPUT.put_line ('......................................................');
  DBMS_OUTPUT.put_line ('In hundredths of a second');
  DBMS_OUTPUT.put_line ('**** TIME - '||to_char(DBMS_UTILITY.GET_TIME - l_start_time));
  DBMS_OUTPUT.put_line ('**** CPU - '||to_char(DBMS_UTILITY.GET_CPU_TIME - l_start_cpu));
  select value into l_end_cr from v$mystat where STATISTIC# = 209;
  DBMS_OUTPUT.put_line ('**** LIO - '||to_char(l_end_cr - l_start_cr));
end;
/
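One tweak worth making if you run this across versions: join v$mystat to v$statname by name instead of hard-coding the statistic number. The lookup in each block would become something like this (same l_start_cr variable as above):

-- version-independent consistent gets lookup
select s.value
  into l_start_cr
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name = 'consistent gets';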