09-10-2013 11:03 AM
Current memory is 64 GB, which I think is the maximum (HP-UX 11.31). Memory utilization reserves 50% for Oracle, which is usual, and swap usage is at 45% of memory, as shown below. We have /u01 for the Oracle binaries and /u02 for backups.
node1 $swapinfo -tm
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        8192       0    8192    0%       0       -    1  /dev/vg00/lvol2
dev       12288       0   12288    0%       0       -    1  /dev/vg00/swap2
reserve       -    5004   -5004
memory    62305   27839   34466   45%
total     82785   32843   49942   40%       -       0    -
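As a quick cross-check of the figures above (not part of the original post), the AVAIL value on the total line is the sum of the two device swap areas plus the memory (pseudo-swap) line:

```shell
# 8192 (lvol2) + 12288 (swap2) + 62305 (memory pseudo-swap)
awk 'BEGIN { print 8192 + 12288 + 62305 }'   # prints 82785
```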
1- Is the current swap utilization usual?
2- When /u02 gets full, memory usage rises from 50% to 80%. What is the reason?
3- I compress dump files (100 GB) using gzip, and this causes memory to reach 99%. Is this usual?
I'm trying to understand what is happening here and to follow best practice!
Thank you.
09-10-2013 12:26 PM - edited 09-10-2013 12:30 PM
>> Current memory is 64 GB which I think is the maximum (HP-UX 11.31)
There is no practical limit for memory except what your box will hold. Some models will accept 250 GB of RAM. You'll run out of money before HP-UX runs out of memory addressing space.
>> 1- Is the current swap utilization usual?
The best utilization of swap is always 0%. When the system starts using swap, it will run very slowly; swap space is a last resort when you run out of memory. The exception is that a small amount of swap space may be used by memory-mapped files -- this is normal and expected.
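For reference, the PCT USED column in swapinfo is just USED/AVAIL; the 45% on the memory line can be recomputed from the sample output above (a portable sketch, not an HP-UX-specific tool):

```shell
# Recompute percent used from the AVAIL ($2) and USED ($3) columns
line="memory    62305   27839   34466   45%"
echo "$line" | awk '{ printf "%.0f%%\n", $3 * 100 / $2 }'   # prints 45%
```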
>> 2- When /u02 gets full, memory usage rises from 50% to 80%. What is the reason?
I assume you mean that the filesystem /u02 is full. Memory usage goes up because of application or database programs. Whether that has anything to do with the full filesystem cannot be determined without an understanding of what /u02 holds and the behavior of the programs that use it. A full filesystem is usually a bad sign and should be addressed. Memory usage is of no concern here.
If you want to see what is using memory, use this command to sort the processes by local memory usage:
UNIX95=1 ps -e -o vsz,pid,args | sort -rn | head -20
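The pipeline works because vsz is the first output column, so `sort -rn` orders the processes by virtual size, largest first. A minimal sketch of that pattern with canned sample data (the process names and sizes are made up):

```shell
# Simulated "vsz pid args" output; sort -rn puts the biggest vsz on top
printf '%s\n' \
    "1024 101 oracle" \
    "8192 202 gzip"   \
    "2048 303 sshd" | sort -rn | head -2
# prints:
# 8192 202 gzip
# 2048 303 sshd
```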
Then there is shared memory, which several programs may be using. Look at that with the standard System V IPC status command:
ipcs -ma
And the filecache usage is seen with:
kcusage -m -t filecache_max
>> 3- I compress dump files (100 GB) using gzip, and this causes memory to reach 99%. Is this usual?
Memory usage is driven by processes. gzip itself will use some local memory, but not gigabytes. I would guess that the RAM is being used by the file cache, and that this system-wide RAM is occupied by the data that gzip is reading and writing. Again, quite normal.
Is there a reason to be concerned about RAM usage? 100% RAM is a good thing since memory is expensive.
09-10-2013 01:37 PM
Look at the SD2 specs (8 TB in this box):
HP-UX will support up to 32 sockets, 256 cores, and 512 threads, with up to 4 TB of memory per Operating Environment image.
Hope this helps!
09-11-2013 02:46 AM - edited 09-11-2013 03:49 AM
The output from kcusage -m -t filecache_max
node1 $kcusage -m -t filecache_max
Time Usage %
Mon 08/12/13 27849900032 85.3
Tue 08/13/13 28109275136 86.1
Wed 08/14/13 28158443520 86.2
Thu 08/15/13 19043127296 58.3
Fri 08/16/13 28129730560 86.1
Sat 08/17/13 27950575616 85.6
Sun 08/18/13 28118106112 86.1
Mon 08/19/13 28106334208 86.0
Tue 08/20/13 28033605632 85.8
Wed 08/21/13 28040364032 85.8
Thu 08/22/13 19679076352 60.2
Fri 08/23/13 28036403200 85.8
Sat 08/24/13 27866234880 85.3
Sun 08/25/13 27928764416 85.5
Mon 08/26/13 27965992960 85.6
Tue 08/27/13 32634228736 99.9
Wed 08/28/13 32634187776 99.9
Thu 08/29/13 20590817280 63.0
Fri 08/30/13 31892828160 97.6
Sat 08/31/13 32041259008 98.1
Sun 09/01/13 32634458112 99.9
Mon 09/02/13 32634642432 99.9
Tue 09/03/13 32634134528 99.9
Wed 09/04/13 32634454016 99.9
Thu 09/05/13 10683973632 32.7
Fri 09/06/13 32634449920 99.9
Sat 09/07/13 32635158528 99.9
Sun 09/08/13 22374158336 68.5
Mon 09/09/13 22473695232 68.8
Tue 09/10/13 32634454016 99.9
Wed 09/11/13 21045362688 64.4
Is that bad or good? Sorry, but I just remember some people talking about filecache_min & filecache_max.
By the way, we reserve only 26 GB for Oracle. I can see from glance that the filecache is >20 GB.
The other nodes in the cluster are 10 GB, 7 GB, and 3 GB, with the same values.
filecache_min = 3 GB
filecache_max = 30 GB
09-18-2013 08:38 AM
When you compress the 100 GB files, it creates new files and the filecache gets utilized up to its configured maximum.
This is perfectly normal. You have configured filecache_max at its default value -- which is 50%, or ~30 GB -- so it gets fully utilized during the compression activity. Looking at the kcusage usage pattern, it looks like your average consumption is lower, perhaps closer to 15-20 GB, and it rises during this compression activity.
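As a sanity check on the kcusage numbers (the byte value is taken from the table above), the 99.9% peaks correspond to roughly 30 GiB of file cache:

```shell
# Convert one of the peak kcusage byte counts to GiB
usage_bytes=32634454016
awk -v b="$usage_bytes" 'BEGIN { printf "%.1f GiB\n", b / (1024 * 1024 * 1024) }'
# prints 30.4 GiB
```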
You can possibly try playing around with the fcache_seqlimit_file and fcache_seqlimit_system tunables to prevent the compressed files from consuming all of the filecache.
For instance, setting fcache_seqlimit_system=75 and fcache_seqlimit_file=20 will prevent the compressed files from consuming more than 6 GB (20% of filecache_max) of the filecache.
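The 6 GB figure follows directly: fcache_seqlimit_file is a percentage of filecache_max, so 20% of a ~30 GB filecache_max is 6 GB. The arithmetic as a sketch (the variable names are illustrative, not kernel tunables):

```shell
# 20% of a 30 GB filecache_max
filecache_max_gb=30
seqlimit_file_pct=20
echo $(( filecache_max_gb * seqlimit_file_pct / 100 ))   # prints 6
```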
You might also want to consider whether you really need filecache_max at 50%. Most Oracle customers tune it to much lower values.