Let’s examine the t2.medium in more detail to understand this:

  • Baseline CPU of 2 cores @ 20% each (40% of a single core, in aggregate)
  • Credits are earned at a rate of 24 per hour
  • Maximum credit balance of 576

Imagine you start the instance on Friday. Over the weekend it quickly builds up the full 576 credits. Then on Monday morning, user activity on your app picks up and CPU usage climbs above 20% on each of the 2 cores.

You could run at a sustained 50% CPU on 2 cores and use 1 credit per minute, or 60 per hour. However, you are still earning credits at 24 per hour, so your net loss is only 36 credits per hour. You could run like that for 10 hours, say from 8am to 6pm, and still have 216 credits in your balance. You would then have 14 off-hours to build credits back up, getting almost all the way back to 576 (the maximum).

That is a very significant CPU load, all business day long, totally within the bounds of the t2.medium. This is not what people usually think of when they hear “burstable CPU”.
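The credit arithmetic above can be sanity-checked with a few lines of shell; the rates (24 credits earned per hour, 60 spent per hour at 50% on 2 cores) come straight from the t2.medium numbers above.

```shell
# CPU-credit balance for the Monday scenario above (t2.medium)
earn_rate=24      # credits earned per hour
spend_rate=60     # credits spent per hour at 50% on 2 cores (1 credit/min)
balance=576       # full balance at 8am Monday

# 10 business hours (8am-6pm) of sustained load: net -36 credits/hour
balance=$(( balance - (spend_rate - earn_rate) * 10 ))
echo "balance at 6pm: $balance"    # 576 - 360 = 216

# 14 off-hours earning at 24/hour, capped at the 576 maximum
balance=$(( balance + earn_rate * 14 ))
[ "$balance" -gt 576 ] && balance=576
echo "balance at 8am: $balance"    # 216 + 336 = 552, just short of the cap
```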

Reference: http://aix4admins.blogspot.com/2011/09/vmm-concepts-virtual-memory-segments.html


lru_file_repage = 0
maxperm = 90%
maxclient = 90%
minperm = 3%
strict_maxclient = 1 (default)
strict_maxperm = 0 (default)

# vmo -p -o lru_file_repage=0 -o maxclient%=90 -o maxperm%=90 -o minperm%=3
# vmo -p -o strict_maxclient=1 -o strict_maxperm=0

The tunable parameter settings above are the defaults for AIX Version 6.1.


minfree: the minimum acceptable number of real-memory page frames on the free list. When the free list falls below this number, the VMM begins stealing pages, and it continues until the free list grows back to maxfree.
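The behavior just described can be sketched as a toy loop; minfree=960 and maxfree=1088 are commonly seen AIX per-memory-pool defaults, and free_list=900 is an invented starting point.

```shell
# Toy sketch of the lrud free-list logic: once the free list drops
# below minfree, pages are stolen until it reaches maxfree.
minfree=960       # commonly seen AIX default (per memory pool)
maxfree=1088      # commonly seen AIX default (per memory pool)
free_list=900     # hypothetical current free-frame count, below minfree

stolen=0
if [ "$free_list" -lt "$minfree" ]; then
    while [ "$free_list" -lt "$maxfree" ]; do
        free_list=$(( free_list + 1 ))   # lrud frees one page frame
        stolen=$(( stolen + 1 ))
    done
fi
echo "stole $stolen page frames; free list is now $free_list"
```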

An example:


Real,MB   26623
% Comp     57          <- used by processes (OS + applications); in nmon, Process + System added up to the same for me (46+11)
% Noncomp  22          <- file-system cache
% Client   22          <- file-system cache (for JFS2)


(numperm) 22.5%        <- file-system cache
Process   46.0%        <- application processes
System    11.3%        <- the OS
Free      20.2%        <- free
Total    100.0%


Excerpts from a tuning document:

Set vmo:lru_file_repage=0; default=1  # Mandatory critical change
This change directs lrud to steal only JFS/JFS2 file-buffer pages unless/until numperm/numclient falls to vmo:minperm% or below, at which point lrud begins stealing both JFS/JFS2 file-buffer pages and computational memory pages.
Essentially, stealing computational memory invokes paging-space page-outs.
I have found this change already made by most AIX 5.3 customers.

Set vmo:page_steal_method=1; default=0  # helpful, not critical
This change switches the lrud page-stealing algorithm from a physical memory address page-scanning method (=0) to a List-based page-scanning method (=1).

Set ioo:sync_release_ilock=1; default=0  # helpful, not critical
The default value =0 means the i-node lock is held while all dirty pages of a file are flushed; thus, I/O to the file is blocked while the syncd daemon is running. Setting =1 causes sync() to flush all I/O to a file without holding the i-node lock, and then take the i-node lock only to do the commit.

Execute vmstat -v and compare the following values/settings:
minperm      should be 10, 5 or 3; default=20
maxperm      should be 80 or higher; default=80 or 90
maxclient    should be 80 or higher; default=80 or 90
numperm      real-time percentage of non-computational memory (includes client, below)
numclient    real-time percentage of JFS2/NFS/vxfs filesystem buffer-cache
Paging-space page-outs are triggered when numperm or numclient is less than or equal to minperm. Typically numperm and numclient are greater than minperm, and as such no paging-space page-outs are triggered.

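The numperm/minperm comparison above can be scripted; the `vmstat -v` lines below are a hypothetical excerpt, with numbers matching the example earlier in this post.

```shell
# Hypothetical excerpt of `vmstat -v` output (numbers for illustration only)
sample='                 3.0 minperm percentage
                90.0 maxperm percentage
                22.5 numperm percentage
                90.0 maxclient percentage
                22.5 numclient percentage'

numperm=$(printf '%s\n' "$sample" | awk '/numperm percentage/ {print $1}')
minperm=$(printf '%s\n' "$sample" | awk '/minperm percentage/ {print $1}')

# With lru_file_repage=0, computational pages are only at risk
# once numperm falls to minperm or below.
awk -v n="$numperm" -v m="$minperm" 'BEGIN {
    if (n <= m) print "WARNING: lrud may steal computational pages"
    else        print "OK: only file-cache pages will be stolen"
}'
```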


Just a quick note:


15 Aug 2016

To list open Oracle database accounts:

select USERNAME from dba_users where account_status='OPEN';

To check kernel memory usage with the Solaris modular debugger:
# echo ::memstat | mdb -k

ZFS tuning parameters; my default recommendations (set in /etc/system on Solaris):

set noexec_user_stack = 1
set noexec_user_stack_log = 1
set zfs:zfs_arc_max = 2147483648
set c2audit:audit_load = 1
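For reference, zfs_arc_max is specified in bytes; a one-liner confirms the recommended value caps the ZFS ARC at 2 GiB.

```shell
# zfs_arc_max is in bytes: 2147483648 = 2 * 1024^3 = 2 GiB
arc_max=2147483648
echo "ARC cap: $(( arc_max / 1024 / 1024 / 1024 )) GiB"
```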

A comparison of clustered/shared file systems:

File system      Vendor        Supported platforms                                      Architecture
Lustre           Oracle        Linux                                                    Metadata Server
StorNext         Quantum       AIX, IRIX, Linux, Mac OS X, Solaris, UNIX, Windows       Metadata Server
VxFS             Veritas       AIX, Linux, Solaris, UNIX                                Metadata Server
PVFS             Open-Source   Linux                                                    Distributed Metadata Server
IBRIX FusionFS   HP            Linux, Windows                                           Distributed Metadata Server
GPFS             IBM           Linux, Windows                                           Distributed Lock Server
GFS              Red Hat       Linux                                                    Distributed Lock Server
Matrix Server    HP            Linux, Windows                                           Distributed Lock Server
VMFS             VMware        Linux                                                    Distributed Lock Server
Melio FS         Sanbolic      Windows                                                  Single Lock Symmetrical

About this blog

@Pakpoom Wetwittayakhlang
AIX Administrator

