03-13-2004 09:40 AM
The database serves 5 Win2k application servers. We use dynamic connection pooling on the app servers, so connections are maintained at a preset high-water mark. We noticed unusually high system CPU utilization (~80% system time) whenever connections need to be created. I am not seeing any swapping.
As a test, I ran a script from sqlplus that just exits, so all we measure is the time it takes to set up the connection, fork a dedicated server process, and exit.
I loop through 20 of these.
It takes ~1-2 seconds to set up each connection, and system CPU goes to ~80%.
If I run the same script against the same version of Oracle on a Win2k box (with similar CPU speed and less RAM), it is at least 4 times as fast.
I looked at an Oracle connection trace and found most of the time is spent forking the new dedicated server process. I know Windows just fires up another thread, so it saves the overhead of creating a whole new process, but ~1 second to create a session sounds absurd to me.
Any ideas or suggestions?
03-13-2004 10:37 AM
Yeah, upgrade from v1.6 (11.22) to v2 (11.23). That will solve it.
Besides, as far as I know v1.6 was never intended for production, was it?
I observed this a while ago and analyzed it with Caliper. Here is the average profile for an Oracle slave activation under 11.22:
USER portion of profile: 8 hits = 0.08 seconds (KTC time: 0.052 seconds)

 %hits  cum%  hits  secs  routine            module
  25%    25%    2   0.02  UT_memcpy          /usr/lib/hpux64/dld.so
  12%    38%    1   0.01  __milli_memcpy     /apps/oracle92/bin/oracle
  12%    50%    1   0.01  kkshchv            /apps/oracle92/bin/oracle
  12%    62%    1   0.01  ttci2u             /apps/oracle92/bin/oracle
  12%    75%    1   0.01  LE_sym_name        /usr/lib/hpux64/dld.so
  12%    88%    1   0.01  LE_get_opd_entry   /usr/lib/hpux64/dld.so

KERNEL portion of profile: 123 hits = 1.23 seconds (KTC time: 1.209 seconds)
As soon as we upgraded to 11.23, our application (on a 4-processor IPF box) went from 25% system time, blocking an entire CPU, to well below 5%.