This post is the DISM follow-up to Oracle ISM and DISM: more than a no paging scheme (1/2), which covered ISM only.
DISM (Dynamic Intimate Shared Memory) is the pageable variant of ISM, first made available in Solaris 8. A DISM segment is attached to a process through the shmat system call, passing the new SHM_DYNAMIC flag instead of the SHM_SHARE_MMU flag used for ISM.
DISM is like ISM except that its memory is not automatically locked. The application, not the kernel, does the locking, using mlock. Kernel virtual-to-physical memory address translation structures are shared among the processes that attach to the DISM segment; this is one of the DISM benefits, saving kernel memory and CPU time. As with ISM, shmget creates the segment, and the size passed to shmget is the maximum size of the segment, which can be larger than physical memory. Enough disk swap should therefore be made available to cover the maximum possible DISM size.
Per the Oracle 10gR2 installation guide on Solaris platforms:
Oracle Database automatically selects ISM or DISM based on the following criteria:
- Oracle Database uses DISM if it is available on the system, and if the value of the SGA_MAX_SIZE initialization parameter is larger than the size required for all SGA components combined. This enables Oracle Database to lock only the amount of physical memory that is used.
- Oracle Database uses ISM if the entire shared memory segment is in use at startup or if the value of the SGA_MAX_SIZE parameter is equal to or smaller than the size required for all SGA components combined.
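The selection rule above can be illustrated with an init.ora sketch (the parameter values are illustrative, not recommendations):

```
# DISM is selected: SGA_MAX_SIZE exceeds the combined SGA components,
# so Oracle locks only the memory in use and can grow the SGA later.
sga_max_size = 2G
sga_target   = 1G    # components sized below the maximum

# ISM would be selected instead if the maximum equalled the in-use size:
# sga_max_size = 1G
# sga_target   = 1G
```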
I ran a few logical-I/O-intensive tests aimed at highlighting a possible performance loss when moving from ISM to DISM (since pages are not permanently locked in memory, swap management has to be invoked), but I couldn't find any meaningful difference. Most of the benefits I described in the Oracle ISM and DISM: more than a no paging scheme (1/2) post still apply, except for the lack of large-page support in Solaris 8 (see below).
Since DISM requires the application to lock memory, and since memory locking can only be carried out by processes with superuser privileges, the $ORACLE_HOME/bin/oradism daemon runs as root using setuid (early 9i releases used a different mechanism, based on RBAC rather than setuid).
Solaris 8 problems:
Dynamic Intimate Shared Memory (DISM) was introduced in the 1/01 release of Solaris 8 (Update 3). DISM was supported by Oracle9i for SGA resizing.
On a 10gR2 database running on Solaris 10, pmap shows that large pages are used by DISM:
pmap -sx 19609| more
19609: oracleSID11 (LOCAL=NO)
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
0000000380000000 16384 16384 - - 4M rwxs- [ dism shmid=0x70000071 ]
Per the following SunSolve note: http://sunsolve.sun.com/search/document.do?assetkey=1-9-72952-1&searchclause=dism%2420large%2420page
“In this first release, large MMU pages were not supported. For Solaris 8 systems with 8GB of memory or less, it is reasonable to expect a performance degradation of up to 10% compared to ISM, due to the lack of large page support in DISM […] Sun recommends avoiding DISM on Solaris 8 either where SGAs are greater than 8 Gbytes in size, or on systems with a typical CPU utilization of 70% or more. In general, where performance is critical, DISM should be avoided on Solaris 8. As we will see, Solaris 9 Update 2 (the 12/02 release) is the appropriate choice for using DISM with systems of this type.”
The Sun blueprint http://www.sun.com/blueprints/0104/817-5209.pdf advocates using DISM on Solaris 8 primarily for machine maintenance, such as removing a memory board, but it fails to mention that large MMU pages are not supported.