For the most part, I don't think anyone would get too excited about dealing with a SQL statement that runs in 0.3 seconds. Most of us worry about the statements that run for hours, and we tend to focus on them, which is not necessarily a bad thing. But how many statements on your system fit into the category I'm about to describe?
This statement runs in 0.336 seconds per execution and does 1,627 logical IOs (LIOs). It's a pretty simple three-way table join, and all the predicates are simple equality conditions ANDed together. The kicker is that this statement was called 5,935 times in a 40-minute window, for a total of 9,658,869 LIOs and 275,527 physical IOs (PIOs), and it never returned a single row. Over this 40-minute trace, this statement consumed 69% of the total response time and nearly all of the PIOs, via db file sequential read events. All this to get back nothing.
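To make the shape concrete (the actual tables and columns aren't shown here, so every name below is invented), the statement looked something like this:

    SELECT o.order_id, l.line_amount, p.product_name
      FROM orders o, order_lines l, products p
     WHERE o.order_id    = l.order_id       -- join predicate
       AND l.product_id  = p.product_id     -- join predicate
       AND o.status      = :status          -- simple equality filter
       AND o.customer_id = :customer_id;    -- simple equality filter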
Maybe the statement can be optimized to do less work, but the real question is: why run it at all? Is there a way to avoid executing this statement entirely?
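One avenue, and this is speculation on my part rather than something tested against this application: if the underlying data changes infrequently, the server result cache can answer repeat executions of the identical statement, even one returning zero rows, without redoing the join:

    -- Assumes Oracle 11g or later. The hint caches the result set
    -- (even an empty one) server-side; repeat executions are answered
    -- from memory until DML changes a dependent table.
    SELECT /*+ RESULT_CACHE */ o.order_id, l.line_amount
      FROM orders o, order_lines l
     WHERE o.order_id = l.order_id
       AND o.status   = :status;

But the better fix is still upstream: if the application can know the answer will be empty (a status flag, a cheaper existence check, or caching the negative result on the client), it can skip the call entirely.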
It's easy to see why no one would have looked at this SQL in a conventional tuning engagement. It's fast, and the LIOs per execution aren't necessarily bad either, given that the tables involved are rather large. Looked at on a per-execution basis, there isn't much reason to get excited. But within the context of the running application, it's easy to see that this statement is doing a lot of work for nothing.
Hey Ric - I experienced the exact same thing a week back. I was called in for a health check: one site had SQL statements that ran pretty quickly and wanted to know if there was room for improvement. Why? Well, at times they were experiencing inconsistent response times for a particular form (ranging from 3s to 15s, sometimes more). I pulled a few AWR reports (30-minute intervals) and noticed two SQL statements in particular:
1. 8,000 executions, 2,500 gets per execution, 1 row returned
2. 8,700 executions, 12,000 gets per execution, 1 row returned
These two queries looked to be doing a whole lot of work just to get one row. The queries involved partitioned tables and local partitioned indexes.
For the first query I created a new global concatenated index (the old one being a single-column local partitioned index).
Logical reads (gets) per execution dropped from 2,500 to 4. I'm still working on the second one, but I'm hoping this will bring some stability to the environment. Why? Well, they seem to be I/O bound: db file sequential read is the dominant wait event, at over 50% of total call time, with average wait times ranging from 16ms to 25ms.
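For what it's worth, the change was along these lines (names invented; the real schema doesn't matter for the point):

    -- Before: a single-column LOCAL index. The query didn't supply the
    -- partition key, so every probe had to visit the index in each
    -- partition, then filter table rows on the remaining predicates.
    -- After: one global concatenated B-tree covering both predicates,
    -- so a lookup is blevel plus a leaf block plus a table block, which
    -- lines up with the drop from 2,500 gets to 4.
    CREATE INDEX txn_cust_status_gix
        ON transactions (customer_id, status_code);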
Needless to say, these two queries turned out to be the ones issued behind the form with the inconsistent response times. In this particular case the queries have to be run. I'm just wondering if you have any thoughts on the correlation I'm making between this and the inconsistent response time of the form. A system that is I/O bound, with service times (svc_t) over 15ms, is bound to produce inconsistent response times, no?
-Naqi
On a side note, the first query:
1. 8,000 executions, 2,500 gets per execution
means 8,000 × 2,500 = 20,000,000 buffer gets, which at a 4 KB block size is roughly 76 GB of blocks pulled through the cache.
With a 24 GB cache and a whole load of other SQL running, that's a big waste of cache, isn't it?
-Naqi
Hey Naqi!
These folks also have I/O issues, and with this high a number of LIOs there is certainly a possibility that it's related to those I/O issues; each LIO could be a PIO, after all. So in both our cases there is a significant waste of resources, memory in the buffer cache being one of them. They are also wasting a lot of CPU and, of course, I/O bandwidth. In my case, I'm hoping we can find a way to just plain not run the statement at all.