Platinum Cache for Windows

In a typical computer system the processor is fast enough for its tasks, but storage is so slow by comparison that it cannot deliver the requested data quickly, which reduces overall system performance. Although higher-performance storage devices are becoming common in the market, the performance mismatch between the processor and storage devices still exists.

Platinum Cache is a caching system developed by DTS Inc. Japan for Microsoft Windows® based systems to minimize the storage bottleneck problems associated with today’s computer systems.

DTS Inc. has devised this caching system for Windows-based machines to increase system throughput. It supports two caching policies, write-through and write-back, and uses a RAMDISK (a disk created from your system RAM) as the cache, which offers a far higher I/O rate than traditional storage.

[Figure: DTS Platinum Cache in Action]

Currently, Platinum Cache is available for the following platforms:
* Microsoft Windows 2000 Advanced Server SP4
* Microsoft Windows XP Professional Edition SP2
* Microsoft Windows Server 2003 Enterprise Edition SP1
* Microsoft Windows Server 2008
* Microsoft Windows 7 (32-bit/64-bit)

Caching – A Technique for Improving System Performance:

A ‘cache’ is a small but fast storage area used to hold the most frequently used data. Whenever a request arrives, it is checked against the cache to determine whether it can be fulfilled from there. If it can, the request is served from the fast storage and the storage bottleneck is reduced. Since a cache has far less space than the slow storage it fronts, only the most frequently used data should be placed in it. Different policies are used to achieve this, and one of them is the Least Recently Used (LRU) policy.

In our system we use a RAMDISK as the cache. A RAMDISK is a disk made out of your system memory (RAM). It behaves like any other hard disk partition: it can be formatted with a file system, and it appears in My Computer via a symbolic link. Note that a RAMDISK is volatile media; once the system is rebooted or crashes, all data on it is lost, so its primary purpose is to hold temporary files. In terms of speed, a RAMDISK gives unmatched performance compared with a hard disk.

In our caching system the RAMDISK is sized according to user requirements; depending on the situation, it can vary from 100 MB to 12 GB, although in general a RAMDISK can also be created outside this range.

Platinum Cache Policies:

Platinum Cache currently supports two caching policies namely:

* Write Through Policy
* Write Back Policy

Please note that a caching system may implement other policies as well, but the two above are the main policies of any caching system. The primary difference between them lies in how write requests are handled. Read requests are handled the same way under both policies: when a read request is received, the cache is searched first, and if the requested data is found there, the request is fulfilled from the cache. This completely or partially eliminates accesses to the target storage (the slow device) for read requests. Since slow-device accesses are minimized and fast-device (cache disk) accesses are maximized, the result is an overall system performance boost.
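The contrast between the two policies can be sketched as follows. This is a minimal illustration, not DTS's actual implementation: plain dictionaries stand in for the RAMDISK and the slow disk, and the class and method names are hypothetical.

```python
class Cache:
    """Toy cache contrasting write-through and write-back policies."""

    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}      # fast storage (stands in for the RAMDISK)
        self.storage = {}    # slow storage (stands in for the hard disk)
        self.dirty = set()   # blocks not yet flushed (write-back only)

    def read(self, block):
        # The read path is identical under both policies: try the cache first.
        if block in self.cache:
            return self.cache[block]      # cache hit: no slow-device access
        data = self.storage.get(block)    # cache miss: go to the slow device
        self.cache[block] = data          # keep it cached for future reads
        return data

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            self.dirty.add(block)         # write-back: defer the slow write
        else:
            self.storage[block] = data    # write-through: update both at once

    def flush(self):
        # A write-back cache must eventually push dirty blocks to storage.
        for block in self.dirty:
            self.storage[block] = self.cache[block]
        self.dirty.clear()
```

Write-through keeps the slow device always up to date at the cost of a slow access per write; write-back completes writes at cache speed but must flush dirty blocks later.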

LRU Cache Policy:

Least Recently Used (LRU) is a policy used to manage the cache. Since the cache is small compared with the slow storage, only the most frequently used data should be placed in it, and LRU helps implement this behavior.

In our caching system we manage the flush list so that the entries at the front are the least recently used and the entries at the end are the most recently used. Whenever a new block enters the flush list, it is inserted at the end, since at that moment it is the most recently used block. Whenever a block is looked up, it is removed from its current position in the flush list and placed at the end, since it has just been used. Under this scheme, once the system has run for some time, the block at the front of the list is the least recently used and the block at the end is the most recently used, so we can easily flush the least recently used block first. Flushing a block means taking it out of the index list and putting it on the free list so that it can be reused for future block allocations.
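The flush-list bookkeeping described above can be sketched with an ordered dictionary, whose insertion order models the list order (front = least recently used, end = most recently used). The names here are illustrative, not taken from the product.

```python
from collections import OrderedDict

class FlushList:
    """Sketch of an LRU-ordered flush list."""

    def __init__(self):
        self.blocks = OrderedDict()   # insertion order models list order

    def insert(self, block):
        self.blocks[block] = True     # new block goes to the end (MRU position)

    def touch(self, block):
        # On a lookup, move the block to the end: it is now most recently used.
        self.blocks.move_to_end(block)

    def flush_one(self):
        # Evict from the front: the least recently used block goes first.
        block, _ = self.blocks.popitem(last=False)
        return block
```

For example, after inserting blocks a, b, c and then touching a, the front of the list is b, so b is flushed first while the recently used a survives.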

An alternative approach to managing the flush list is ‘First In, First Out’ (FIFO) instead of LRU: whenever a new block is allocated it is inserted at the end of the list, and flushing starts at the head of the list, but a block that is looked up is not moved to the end. In some cases this can result in flushing the most recently used block.
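The FIFO alternative can be sketched under the same assumptions; the only difference from the LRU flush list above is that a lookup does not reposition the block, which is exactly what exposes the drawback mentioned.

```python
from collections import deque

class FifoList:
    """Sketch of a FIFO-ordered flush list (hypothetical names)."""

    def __init__(self):
        self.blocks = deque()

    def insert(self, block):
        self.blocks.append(block)     # newest block at the tail

    def touch(self, block):
        pass                          # unlike LRU, a lookup changes nothing

    def flush_one(self):
        return self.blocks.popleft()  # oldest block evicted first
```

With the same sequence as before (insert a, b, c, then touch a), FIFO flushes a first even though it was just used, which is the weakness that motivates LRU.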

(C) 2013, DTS Inc. All Rights Reserved.