Using Very Large Memory (VLM)

Posted by ascultradio on September 3, 2009


General

This chapter does not apply to 64-bit systems.

With hugemem kernels on 32-bit systems the SGA size can be increased, but not significantly, as shown in Oracle 10g SGA Sizes in RHEL 3 and RHEL 4 (note that the hugemem kernel is always recommended on systems with large amounts of RAM, see 32-bit Architecture and the hugemem Kernel). This chapter shows how the SGA can be increased significantly on 32-bit systems using VLM.

Starting with Oracle9i Release 2, the SGA can theoretically be increased to about 62 GB (depending on block size) on a 32-bit system with 64 GB RAM. A processor feature called Physical Address Extension (PAE) provides the capability of physically addressing 64 GB of RAM. However, it does not enable a process or program to address more than 4 GB directly or to have a virtual address space larger than 4 GB. Hence, a process cannot attach directly to a shared memory segment that is 4 GB or larger. To address this issue, a shared memory filesystem (memory-based filesystem) can be created which can be as large as the maximum allowable virtual memory supported by the kernel. With a shared memory filesystem, processes can dynamically attach to regions of the filesystem, allowing applications like Oracle to have a virtually much larger shared memory on 32-bit systems. This is not an issue on 64-bit systems.

For Oracle to use a shared memory filesystem, a feature called Very Large Memory (VLM) must be enabled. VLM moves the database buffer cache part of the SGA from System V shared memory to the shared memory filesystem. It is still considered one large SGA, but it now consists of two different OS shared memory entities. It is worth noting that VLM uses 512 MB of the non-buffer-cache SGA to manage VLM. This memory area is needed for mapping the indirect data buffers (shared memory filesystem buffers) into the process address space, since a process cannot attach to more than 4 GB directly on a 32-bit system. For example, if the non-buffer-cache SGA is 2.5 GB, then you will only have 2 GB of non-buffer-cache SGA left for the shared pool, large pool, and redo log buffer, since 512 MB is used for managing VLM. If the buffer cache is less than 512 MB, then the init.ora parameter VLM_WINDOW_SIZE must be changed to reflect the size of the database buffer cache. However, it is not recommended to use VLM if db_block_buffers is not greater than 512 MB.
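To make the window arithmetic explicit (this simply restates the example above, it is not an additional requirement):

  non-buffer-cache SGA (System V shared memory)         2.5 GB
  - VLM window (default VLM_WINDOW_SIZE)                 512 MB
  = left for shared pool, large pool, redo log buffer    2.0 GB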

In RHEL 3 and RHEL 4 there are two different memory filesystems that can be used for VLM:
– shmfs/tmpfs:  This memory filesystem is pageable/swappable. To my knowledge it cannot be backed by Huge Pages because Huge Pages are not swappable.
– ramfs:  This memory filesystem is not pageable/swappable and is not backed by Huge Pages, see also Huge Pages and Shared Memory Filesystem in RHEL 3/4.

Note that the shmfs filesystem is available in RHEL 3 but not in RHEL 4:

$ cat /etc/redhat-release
Red Hat Enterprise Linux AS release 3 (Taroon Update 6)
$ egrep "shm|tmpfs|ramfs" /proc/filesystems
nodev   tmpfs
nodev   shm
nodev   ramfs
$

$ cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
$ egrep "shm|tmpfs|ramfs" /proc/filesystems
nodev   tmpfs
nodev   ramfs
$

This means that if you try to mount a shmfs filesystem in RHEL 4, you will get the following error message:

mount: fs type shm not supported by kernel

The difference between shmfs and tmpfs is that you do not need to specify the size of the filesystem when you mount a tmpfs filesystem.
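For example, the two mount commands would look roughly like this (the 8g size is only an illustrative value, and the shm filesystem type only exists in RHEL 3):

# mount -t shm shmfs -o size=8g /dev/shm      (shmfs: size must be specified)
# mount -t tmpfs tmpfs /dev/shm               (tmpfs: size is optional, defaults to half of RAM)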

Configuring Very Large Memory (VLM)

The following example shows how to use the RAM disk ramfs to allocate 8 GB of shared memory for the 10g database buffer cache on a 32-bit RHEL 3/4 system (hugemem kernel). If this setup is performed on a server that does not have enough RAM, then Linux will appear to hang and the kernel will automatically start killing processes due to memory shortage (ramfs is not swappable). Furthermore, ramfs is not backed by Huge Pages and therefore the Huge Pages pool should not be increased for database buffers, see Huge Pages and Shared Memory Filesystem in RHEL 3/4. In fact, if there are too many Huge Pages allocated, then there may not be enough memory left for ramfs.

Since ramfs is not swappable, it is by default only usable by root. If you put too much on a ramfs filesystem, you can easily hang the system. To mount the ramfs filesystem and to make it usable for the Oracle account, execute:

# umount /dev/shm
# mount -t ramfs ramfs /dev/shm
# chown oracle:dba /dev/shm

When Oracle starts, it will create a file in the /dev/shm directory that corresponds to the extended buffer cache. Make sure to add the above lines to /etc/rc.local so they are executed at every boot. If oinstall is the primary group of the Oracle account, use chown oracle:oinstall /dev/shm instead. For security reasons you do not want to give anyone else write access to the shared memory filesystem. Having write access to the ramfs filesystem allows you to allocate and pin a large chunk of memory in RAM. In fact, you can kill a machine by allocating too much memory in the ramfs filesystem.
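For reference, the /etc/rc.local additions are simply the same commands as above (use oracle:oinstall instead if oinstall is the primary group):

# VLM shared memory filesystem for the Oracle buffer cache
umount /dev/shm
mount -t ramfs ramfs /dev/shm
chown oracle:dba /dev/shm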

To enable VLM, set the Oracle parameter use_indirect_data_buffers to true:

use_indirect_data_buffers=true

For 10g R1 and R2 databases it is important to convert the DB_CACHE_SIZE and DB_xK_CACHE_SIZE parameters to DB_BLOCK_BUFFERS, and to remove SGA_TARGET if it is set. Otherwise you will get an error like this:

ORA-00385: cannot enable Very Large Memory with new buffer cache parameters

Here is an example of configuring an 8 GB buffer cache for a 10g R2 database with the RHEL 3/4 hugemem kernel:

use_indirect_data_buffers=true
db_block_size=8192
db_block_buffers=1048576
shared_pool_size=2831155200
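As a quick sanity check (my own arithmetic, not part of the original example), these parameters work out as follows:

db_block_buffers x db_block_size = 1048576 x 8192 bytes = 8 GB    (indirect buffer cache, placed in ramfs)
shared_pool_size = 2831155200 bytes = 2700 MB, about 2.6 GB       (stays in System V shared memory)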

Note that shmmax needs to be increased for shared_pool_size to fit into System V shared memory. In fact, it should be slightly larger than the SGA portion that stays in System V shared memory. Since shared_pool_size is less than 3 GB in this example, shmmax does not need to be larger than 3 GB. The 8 GB indirect buffer cache will be in the RAM disk and hence does not have to be accounted for in shmmax. On a 32-bit system the shmmax kernel parameter cannot be larger than 4 GB, see also Setting SHMMAX Parameter.
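As an illustration, shmmax could be set to 3 GB (3221225472 bytes) in /etc/sysctl.conf and activated with sysctl -p; the exact value is a local choice, as long as the System V portion of the SGA fits:

kernel.shmmax=3221225472

# sysctl -p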

In order to allow Oracle processes to lock more memory into their address space for the VLM window size, the ulimit parameter memlock must be changed for the oracle user.
Set memlock in /etc/security/limits.conf to 3145728 (KB, i.e. 3 GB):

oracle           soft    memlock         3145728
oracle           hard    memlock         3145728

Log in as oracle again and check the max locked memory limit:

$ ulimit -l
3145728

If it is not working after an ssh login, then you may have to set the SSH parameter UsePrivilegeSeparation, see Setting Shell Limits for the Oracle User.

If memlock is not set or too small, you will get error messages similar to this one:

ORA-27103: internal error
Linux Error: 11: Resource temporarily unavailable

Now try to start the database. Note that the database startup can take a while. Also, the sqlplus banner or "show sga" may not accurately reflect the actual SGA size in older Oracle versions.
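A typical startup sequence, assuming the parameters above are in the instance's init.ora or spfile:

$ sqlplus / as sysdba
SQL> startup
SQL> show sga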

The 8 GB file for the database buffer cache can be seen in the ramfs shared memory filesystem:

$ ls -al /dev/shm
total 120
drwxr-xr-x    1 oracle   dba             0 Nov 20 16:29 .
drwxr-xr-x   22 root     root       118784 Nov 20 16:25 ..
-rw-r-----    1 oracle   dba      8589934592 Nov 20 16:30 ora_orcl_458754
$

If the shared pool size is configured too large, you will get error messages similar to this one:

ORA-27103: internal error
Linux Error: 12: Cannot allocate memory
