Sunday, February 13, 2011

Cache Fusion - Concept
Cache Fusion is a technology that uses a high-speed inter-process communication (IPC) interconnect to provide cache-to-cache transfers of data blocks between instances in a cluster. This eliminates disk I/O (which is inherently slow, since it is a mechanical process) and optimizes read/write concurrency. Block reads take advantage of the speed of IPC and the interconnecting network. Cache Fusion also relaxes the requirements of data partitioning.
Cache Fusion addresses the following types of concurrency between instances:

1. Concurrent Reads on Multiple Nodes
2. Concurrent Reads and Writes on Different Nodes
3. Concurrent Writes on Different Nodes

1. Concurrent Reads on Multiple Nodes
Concurrent reads on multiple nodes occur when two instances need to read the
same data block. Real Application Clusters easily resolves this situation because
multiple instances can share the same blocks for read access without cache
coherency conflicts.
2. Concurrent Reads and Writes on Different Nodes
Concurrent reads and writes on different nodes are the dominant form of
concurrency in Online Transaction Processing (OLTP) and hybrid applications. A
read of a data block that was recently modified can be either for the current version
of the block or for a read-consistent previous version. In both cases, the block will
be transferred from one cache to the other.
3. Concurrent Writes on Different Nodes
Concurrent writes on different nodes occur when the same data block is modified
frequently by processes on different instances.
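The idea of shipping the current block image directly between instance caches can be sketched with a small toy model. This is an illustrative sketch only, not Oracle's actual implementation; the class and function names are invented for the example.

```python
# Toy model of a Cache Fusion-style block transfer for concurrent writes.
# Each "instance" has its own buffer cache (a dict); a dirty block is
# shipped cache-to-cache over the "interconnect" instead of going to disk.

class Instance:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # block_id -> block contents

    def modify(self, block_id, value):
        self.cache[block_id] = value  # block becomes "dirty" in this cache

def ship_block(holder, requester, block_id):
    """Transfer the current block image cache-to-cache, bypassing disk."""
    requester.cache[block_id] = holder.cache[block_id]

a = Instance("A")
b = Instance("B")
a.modify(42, "row v1")   # block 42 is dirty in A's cache
ship_block(a, b, 42)     # B receives the current image over the interconnect
b.modify(42, "row v2")   # B modifies it in turn; no disk I/O in between
print(b.cache[42])       # -> row v2
```

The point of the sketch is that instance B never reads the block from disk: it receives the current version directly from A's cache.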
The main features of the cache coherency model used in Cache Fusion are: 

a) The cache-to-cache data transfer is done through the high-speed IPC
interconnect. This virtually eliminates the disk I/O needed to achieve cache
coherency.

b) The Global Cache Service (GCS) tracks one or more past images (PIs) for a block
in addition to the traditional GCS resource roles and modes. (The GCS tracks
blocks that were shipped to other instances by retaining block copies in
memory. Each such copy is called a past image (PI). In the event of a failure,
Oracle can reconstruct the current version of a block by using a PI.)

c) The work required for recovery after node failures is proportional to the number
of failed nodes. Oracle must perform a log merge only in the event of a
multiple-node failure.

d) The number of context switches is reduced because fewer sequences of
round-trip messages are needed.

In addition, the database writer (DBWR) is not involved in Cache Fusion block transfers. Reducing the number of context switches makes the use of the cache coherency protocol more efficient.
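Feature (b) above, the retention of past images, can also be sketched as a toy: when an instance ships a dirty block, it keeps a PI copy, and recovery can rebuild the current version from the newest PI plus redo. Again, the names and structures here are invented for illustration and are not Oracle's actual code.

```python
# Illustrative sketch of past images (PIs): a holder that ships a dirty
# block retains a copy, and block recovery replays redo against that PI.

class Cache:
    def __init__(self):
        self.current = {}      # block_id -> current block image
        self.past_images = {}  # block_id -> list of retained PIs

def ship_dirty_block(holder, requester, block_id):
    image = holder.current.pop(block_id)
    holder.past_images.setdefault(block_id, []).append(image)  # retain a PI
    requester.current[block_id] = image

def recover_block(survivor, block_id, redo):
    """Rebuild the current version from the newest PI plus redo changes."""
    image = survivor.past_images[block_id][-1]
    for change in redo:        # replay each recorded change in order
        image = change(image)
    return image

a, b = Cache(), Cache()
a.current[7] = "v1"
ship_dirty_block(a, b, 7)          # A keeps a PI of block 7
b.current[7] = "v2"                # B modifies the block, then "fails"
redo = [lambda img: "v2"]          # redo recorded for B's change
print(recover_block(a, 7, redo))   # -> v2, rebuilt without B's cache
```

The surviving instance's PI plus the redo stream is enough to reconstruct the current block, which is why recovery work scales with the number of failed nodes rather than requiring a disk round trip for every shipped block.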

The GCS tracks the location and status (mode and role) of data blocks, as well as the
access privileges of various instances. Oracle uses the GCS for cache coherency
when the current version of a data block is in one instance's buffer cache and
another instance requests that block for modification. It is also used for reading.
Following the initial acquisition of exclusive resources, in subsequent transactions
multiple transactions running on a single Real Application Clusters instance can
share access to a set of data blocks without involvement of the GCS, as long as the
block is not transferred out of the local cache. If the block has to be transferred out
of the local cache, then the Global Resource Directory is updated by the GCS.
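This "local reuse is free" behavior can be sketched with a toy message counter: after an instance acquires a block, its own repeated accesses need no GCS messaging, and only a cross-instance transfer touches the (here greatly simplified) Global Resource Directory. The counter and directory below are invented for the example.

```python
# Sketch: repeated local access after initial acquisition involves no GCS
# messaging; only shipping the block to another instance updates the
# (toy) Global Resource Directory.

gcs_messages = 0
directory = {}  # block_id -> name of instance currently holding the block

def acquire(instance, block_id):
    global gcs_messages
    if directory.get(block_id) == instance:
        return             # local hit: no GCS involvement at all
    gcs_messages += 1      # transfer: GCS updates the resource directory
    directory[block_id] = instance

acquire("node1", 99)   # initial acquisition: 1 GCS message
acquire("node1", 99)   # local reuse, no message
acquire("node1", 99)   # local reuse, no message
acquire("node2", 99)   # cross-instance transfer: 1 more message
print(gcs_messages)    # -> 2
```

Three accesses from node1 and one from node2 generate only two GCS updates, mirroring how the real GCS stays out of the path of purely local transactions.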
