
NAME
Cache::FastMmap - Uses an mmap'ed file to act as a shared memory interprocess cache
SYNOPSIS
use Cache::FastMmap;
# Uses vaguely sane defaults
$Cache = Cache::FastMmap->new();
# $Value must be a reference...
$Cache->set($Key, $Value);
$Value = $Cache->get($Key);
$Cache = Cache::FastMmap->new(raw_values => 1);
# $Value can't be a reference...
$Cache->set($Key, $Value);
$Value = $Cache->get($Key);
ABSTRACT
A shared memory cache through an mmap'ed file. Its core is written in C for performance. It uses fcntl
locking to ensure multiple processes can safely access the cache at the same time. It uses a basic LRU
algorithm to keep the most used entries in the cache.
DESCRIPTION
In multi-process environments (eg mod_perl, forking daemons, etc), it's common to want to cache
information, but have that cache shared between processes. Many solutions already exist, and may suit
your situation better:
• MLDBM::Sync - acts as a database, data is not automatically expired, slow
• IPC::MM - hash implementation is broken, data is not automatically expired, slow
• Cache::FileCache - lots of features, slow
• Cache::SharedMemoryCache - lots of features, VERY slow. Uses IPC::ShareLite, which freezes/thaws ALL
data at each read/write
• DBI - use your favourite RDBMS. can perform well, need a DB server running. very global. socket
connection latency
• Cache::Mmap - similar to this module, in pure perl. slows down with larger pages
• BerkeleyDB - very fast (data ends up mostly in shared memory cache) but acts as a database overall,
so data is not automatically expired
In the case I was working on, I needed:
• Automatic expiry and space management
• Very fast access to lots of small items
• The ability to fetch/store many items in one go
Which is why I developed this module. It tries to be quite efficient through a number of means:
• Core code is written in C for performance
• It uses multiple pages within a file, and uses Fcntl to only lock a page at a time to reduce
contention when multiple processes access the cache.
• It uses a dual level hashing system (hash to find page, then hash within each page to find a slot) to
make most "get()" calls O(1) and fast
• On each "set()", if there are slots and page space available, only the slot has to be updated and the
data written at the end of the used data space. If either runs out, a re-organisation of the page is
performed to create new slots/space which is done in an efficient way
The class also supports read-through, and write-back or write-through callbacks to access the real data
if it's not in the cache, meaning that code like this:
my $Value = $Cache->get($Key);
if (!defined $Value) {
  $Value = $RealDataSource->get($Key);
  $Cache->set($Key, $Value);
}
Isn't required; instead, you specify in the constructor:
Cache::FastMmap->new(
  ...
  context => $RealDataSourceHandle,
  read_cb => sub { $_[0]->get($_[1]) },
  write_cb => sub { $_[0]->set($_[1], $_[2]) },
);
And then:
my $Value = $Cache->get($Key);
$Cache->set($Key, $NewValue);
Will just work, with values read from and written to the underlying data source automatically as needed.
PERFORMANCE
If you're storing relatively large and complex structures into the cache, then you're limited by the
speed of the Storable module. If you're storing simple structures, or raw data, then Cache::FastMmap has
noticeable performance improvements.
See <http://cpan.robm.fastmail.fm/cache_perf.html> for some comparisons to other modules.
COMPATIBILITY
Cache::FastMmap uses mmap to map a file as the shared cache space, and fcntl to do page locking. This
means it should work on most UNIX like operating systems.
Ash Berlin has written a Win32 layer using MapViewOfFile et al. to provide support for the Win32 platform.
MEMORY SIZE
Because Cache::FastMmap mmap's a shared file into your process's memory space, this can make each process
look quite large, even though it's just mmap'd memory that's shared between all processes that use the
cache, and may even be swapped out if the cache is seeing little use.
However, the OS will think your process is quite large, which might mean you hit BSD::Resource or
'ulimit' limits you set previously that seemed sane at the time but no longer are, so be aware.
CACHE FILES AND OS ISSUES
Because Cache::FastMmap uses an mmap'ed file, when you put values into the cache, you are actually
"dirtying" pages in memory that belong to the cache file. Your OS will want to write those dirty pages
back to the file on the actual physical disk, but the rate it does that at is very OS dependent.
In Linux, you have some control over how the OS writes those pages back using a number of parameters in
/proc/sys/vm
dirty_background_ratio
dirty_expire_centisecs
dirty_ratio
dirty_writeback_centisecs
How you tune these depends heavily on your setup.
As an interesting point, if you use a highmem Linux kernel, a change between 2.6.16 and 2.6.20 made the
kernel flush memory a LOT more. There are details in this kernel mailing list thread:
<http://www.uwsg.iu.edu/hypermail/linux/kernel/0711.3/0804.html>
In most cases, people are not actually concerned about the persistence of data in the cache, and so are
happy to disable writing of any cache data back to disk at all. Basically what they want is an
in-memory-only shared cache. The best way to do that is to use a "tmpfs" filesystem and put all cache
files on there.
For instance, all our machines have a /tmpfs mount point that we create in /etc/fstab as:
none /tmpfs tmpfs defaults,noatime,size=1000M 0 0
And we put all our cache files on there. The tmpfs filesystem is smart enough to only use memory as
required by files actually on the tmpfs, so making it 1G in size doesn't actually use 1G of memory, it
only uses as much as the cache files we put on it. In all cases, we ensure that we never run out of real
memory, so the cache files effectively act just as named access points to shared memory.
Some people have suggested using anonymous mmap'ed memory. Unfortunately we need a file descriptor to do
the fcntl locking on, so we'd have to create a separate file on a filesystem somewhere anyway. It seems
easier to just create an explicit "tmpfs" filesystem.
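For example, an entirely in-memory cache could be set up like this (a minimal sketch; the file name under
the /tmpfs mount and the cache size are illustrative only):
use Cache::FastMmap;

my $Cache = Cache::FastMmap->new(
  share_file => '/tmpfs/myapp-cache',  # hypothetical file on the tmpfs mount above
  init_file  => 1,                     # let the creating (parent) process clear it
  cache_size => '16m',
);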
PAGE SIZE AND KEY/VALUE LIMITS
To reduce lock contention, Cache::FastMmap breaks up the file into pages. When you get/set a value, it
hashes the key to get a page, then locks that page, and uses a hash table within the page to get/store
the actual key/value pair.
One consequence of this is that you cannot store values larger than a page in the cache at all.
Attempting to store values larger than a page size will fail (the set() function will return false).
Also keep in mind that each page has its own hash table, and that we store the key and value data of
each item. So if you are expecting to store large values and/or keys in the cache, you should use page
sizes that are definitely larger than your largest key + value size, plus a few kbytes for overhead.
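As a rough illustration (the sizes are hypothetical, not recommendations), a cache intended to hold values
of up to roughly 200k might use pages comfortably larger than that and check the return value of set():
use Cache::FastMmap;

my $Cache = Cache::FastMmap->new(
  page_size => '256k',  # comfortably above the largest expected key + value
  num_pages => 97,      # prime number of pages for better hashing
);

# set() returns false if the key/value pair can't fit in a single page
$Cache->set($Key, $LargeValue)
  or warn "value for '$Key' too large to cache";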
USAGE
Because the cache uses shared memory through an mmap'd file, you have to make sure each process connects
up to the file. There are two main ways to do this:
• Create the cache in the parent process, and then when it forks, each child will inherit the same file
descriptor, mmap'ed memory, etc and just work. This is the recommended way. (BEWARE: This only works
under UNIX as Win32 has no concept of forking)
• Explicitly connect up in each forked child to the share file. In this case, make sure the file
already exists and the children connect with init_file => 0 to avoid deleting the cache contents and
possible corruption from race conditions. Also be careful that multiple children may race to create the
file at the same time, each overwriting and corrupting content. Use a separate lock file if you must
to ensure only one child creates the file. (This is the only possible way under Win32)
The first way is usually the easiest. If you're using the cache in a Net::Server based module, you'll
want to open the cache in the "pre_loop_hook", because that's executed before the fork, but after the
process ownership has changed and any chroot has been done.
In mod_perl, just open the cache at the global level in the appropriate module, which is executed as the
server is starting and before it starts forking children, but you'll probably want to chmod or chown the
file to the permissions of the apache process.
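A minimal sketch of the first (create in the parent, inherit across the fork) approach on UNIX; the key
and value each child stores are purely illustrative:
use Cache::FastMmap;

# Parent creates and initialises the cache before forking
my $Cache = Cache::FastMmap->new(init_file => 1);

for (1 .. 4) {
  my $pid = fork();
  die "fork failed: $!" unless defined $pid;
  if ($pid == 0) {
    # Child: inherits the mmap'ed file and file descriptor, no reconnect needed
    $Cache->set("child-$$", { started => time() });
    exit 0;
  }
}
wait() for 1 .. 4;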
METHODS
new(%Opts)
Create a new Cache::FastMmap object.
Basic global parameters are:
• share_file
File to mmap for sharing of data. default on unix: /tmp/sharefile-$pid-$time-$random default on
windows: %TEMP%\sharefile-$pid-$time-$random
• init_file
Clear any existing values and re-initialise file. Useful to do in a parent that forks off
children to ensure that file is empty at the start (default: 0)
Note: This is quite important to do in the parent to ensure a consistent file structure. The
shared file is not perfectly transaction safe, and so if a child is killed at the wrong instant,
it might leave the cache file in an inconsistent state.
• raw_values
Store values as raw binary data rather than using Storable to freeze/thaw data structures (default:
0)
• compress
Compress the value (but not the key) before storing into the cache. If you set this to 1, the
module will attempt to require the Compress::Zlib module and then use the memGzip() function on
the value data before storing into the cache, and memGunzip() when retrieving data from the
cache. Some initial testing shows that the uncompressing tends to be very fast, though the
compressing can be quite slow, so it's probably best to use this option only if you know values
in the cache are long lived and have a high hit rate. (default: 0)
• enable_stats
Enable some basic statistics capturing. When enabled, every read to the cache is counted, and
every read to the cache that finds a value in the cache is also counted. You can then retrieve
these values via the get_statistics() call. This causes every read action to do a write on a
page, which can cause some more IO, so it's disabled by default. (default: 0)
• expire_time
Maximum time to hold values in the cache, in seconds. A value of 0 means no explicit expiry
time, and values are expired only based on LRU usage. Can be expressed as 1m, 1h, 1d for
minutes/hours/days respectively. (default: 0)
You may specify the cache size as:
• cache_size
Size of cache. Can be expressed as 1k, 1m for kilobytes or megabytes respectively. Automatically
guesses page size/page count values.
Or specify explicit page size/page count values. If none of these are specified, the values page_size
= 64k and num_pages = 89 are used.
• page_size
Size of each page. Must be a power of 2 between 4k and 1024k. If not, it is rounded to the
nearest such value.
• num_pages
Number of pages. Should be a prime number for best hashing.
The cache allows the use of callbacks for reading/writing data to an underlying data store.
• context
Opaque reference passed as the first parameter to any callback function if specified
• read_cb
Callback to read data from the underlying data store. Called as:
$read_cb->($context, $Key)
Should return the value to use. This value will be saved in the cache for future retrievals.
Return undef if there is no value for the given key
• write_cb
Callback to write data to the underlying data store. Called as:
$write_cb->($context, $Key, $Value, $ExpiryTime)
In 'write_through' mode, it's always called as soon as a set(...) is called on the
Cache::FastMmap class. In 'write_back' mode, it's called when a value is expunged from the cache
if it's been changed by a set(...) rather than read from the underlying store with the read_cb
above.
Note: Expired items do result in the write_cb being called if 'write_back' caching is enabled and
the item has been changed. You can check the $ExpiryTime against "time()" if you only want to
write back values which aren't expired.
Also remember that write_cb may be called in a different process to the one that placed the data
in the cache in the first place
• delete_cb
Callback to delete data from the underlying data store. Called as:
$delete_cb->($context, $Key)
Called as soon as remove(...) is called on the Cache::FastMmap class
• cache_not_found
If set to true, then if the read_cb is called and it returns undef to say nothing was found, then
that information is stored in the cache, so that next time a get(...) is called on that key,
undef is returned immediately rather than again calling the read_cb
• write_action
Either 'write_back' or 'write_through'. (default: write_through)
• allow_recursive
If you're using a callback function, then normally the cache is not re-entrant, and attempting
to call a get/set on the cache will cause an error. By setting this to 1, the cache will unlock
any pages before calling the callback. During the unlock time, other processes may change data in
the current cache page, causing possible unexpected effects. You shouldn't set this unless you know
you want to be able to call back into the cache from within a callback. (default: 0)
• empty_on_exit
When you have 'write_back' mode enabled, then you really want to make sure all values from the
cache are expunged when your program exits so any changes are written back.
The trick is that we only want to do this in the parent process; we don't want any child
processes to empty the cache when they exit. So if you set this, it takes the PID via $$, and
only calls empty in the DESTROY method if $$ matches the pid we captured at the start. (default:
0)
• unlink_on_exit
Unlink the share file when the cache is destroyed.
As with empty_on_exit, this will only unlink the file if the DESTROY occurs in the same PID that
the cache was created in so that any forked children don't unlink the file.
This value defaults to 1 if the share_file specified does not already exist. If the share_file
specified does already exist, it defaults to 0.
• catch_deadlocks
Sets an alarm(10) before each page is locked via fcntl(F_SETLKW) to catch any deadlock. This used
to be the default behaviour, but it's not really needed in the default case and could clobber
sub-second Time::HiRes alarms set up by other code. Defaults to 0.
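A sketch pulling several of the options above together; lookup_value(), store_value() and delete_value()
are hypothetical helpers for whatever underlying store $RealDataSourceHandle refers to, and the file name
and sizes are only examples:
use Cache::FastMmap;

my $Cache = Cache::FastMmap->new(
  share_file      => '/tmp/myapp-cache',   # hypothetical explicit file name
  init_file       => 1,
  cache_size      => '8m',
  expire_time     => '1h',
  context         => $RealDataSourceHandle,
  read_cb         => sub { lookup_value($_[0], $_[1]) },
  write_cb        => sub { store_value($_[0], $_[1], $_[2]) },
  delete_cb       => sub { delete_value($_[0], $_[1]) },
  cache_not_found => 1,
  write_action    => 'write_through',
);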
get($Key, [ \%Options ])
Search cache for given Key. Returns undef if not found. If read_cb specified and not found, calls the
callback to try and find the value for the key, and if found (or 'cache_not_found' is set), stores it
into the cache and returns the found value.
%Options is optional, and is used by get_and_set() to control the locking behaviour. For now, you
should probably ignore it unless you read the code to understand how it works
set($Key, $Value, [ \%Options ])
Store specified key/value pair into cache
%Options is optional, and is used by get_and_set() to control the locking behaviour. For now, you
should probably ignore it unless you read the code to understand how it works
This method returns true if the value was stored in the cache, false otherwise. See the PAGE SIZE AND
KEY/VALUE LIMITS section for more details.
get_and_set($Key, $Sub)
Atomically retrieve and set the value of a Key.
The page is locked while retrieving the $Key and is unlocked only after the value is set, thus
guaranteeing the value does not change between the get and set operations.
$Sub is a reference to a subroutine that is called to calculate the new value to store. $Sub gets
$Key and the current value as parameters, and should return the new value to set in the cache for the
given $Key.
If the subroutine returns an empty list, no value is stored back in the cache. This avoids updating
the expiry time on an entry if you want to do a "get if in cache, store if not present" type
callback.
For example, to atomically increment a value in the cache, you can just use:
$Cache->get_and_set($Key, sub { return ++$_[1]; });
In scalar context, the return value from this function is the *new* value stored back into the cache.
In list context, a two item array is returned; the new value stored back into the cache and a boolean
that's true if the value was stored in the cache, false otherwise. See the PAGE SIZE AND KEY/VALUE
LIMITS section for more details.
Notes:
• Do not perform any get/set operations from the callback sub, as these operations lock the page
and you may end up with a deadlock!
• If your sub dies or throws an exception, the page will still be correctly unlocked (1.15 onwards)
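For instance, a counter that starts from zero when the key is absent might look like this (a small
sketch; the key name is illustrative):
# The current cached value (undef if absent) is the second parameter
my $NewCount = $Cache->get_and_set('hit-counter', sub {
  my ($Key, $Current) = @_;
  return ($Current || 0) + 1;
});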
remove($Key, [ \%Options ])
Delete the given key from the cache
%Options is optional, and is used by get_and_remove() to control the locking behaviour. For now, you
should probably ignore it unless you read the code to understand how it works
get_and_remove($Key)
Atomically retrieve the value of a Key while removing it from the cache.
The page is locked while retrieving the $Key and is unlocked only after the value is removed, thus
guaranteeing that a value stored by someone else in the meantime isn't removed by us.
clear()
Clear all items from the cache
Note: If you're using callbacks, this has no effect on items in the underlying data store. No delete
callbacks are made
purge()
Clear all expired items from the cache
Note: If you're using callbacks, this has no effect on items in the underlying data store. No delete
callbacks are made, and no write callbacks are made for the expired data
empty($OnlyExpired)
Empty all items from the cache, or if $OnlyExpired is true, only expired items.
Note: If 'write_back' mode is enabled, any changed items are written back to the underlying store.
Expired items are written back to the underlying store as well.
get_keys($Mode)
Get a list of keys/values held in the cache. May immediately be out of date because of the shared
access nature of the cache
If $Mode == 0, an array of keys is returned
If $Mode == 1, then an array of hashrefs, with 'key', 'last_access', 'expire_time' and 'flags' keys
is returned
If $Mode == 2, then hashrefs also contain 'value' key
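For example, a minimal sketch using mode 1:
for my $Entry ($Cache->get_keys(1)) {
  printf "%s last_access=%d expire_time=%d flags=%d\n",
    @$Entry{qw(key last_access expire_time flags)};
}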
get_statistics($Clear)
Returns a two value list of (nreads, nreadhits). This only works if you passed enable_stats in the
constructor
nreads is the total number of read attempts done on the cache since it was created
nreadhits is the total number of read attempts done on the cache since it was created that found the
key/value in the cache
If $Clear is true, the values are reset immediately after they are retrieved
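A minimal sketch; this assumes enable_stats => 1 was passed to new():
my ($NReads, $NReadHits) = $Cache->get_statistics();
my $HitRate = $NReads ? $NReadHits / $NReads : 0;
printf "reads=%d hits=%d hit rate=%.1f%%\n", $NReads, $NReadHits, 100 * $HitRate;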
multi_get($PageKey, [ $Key1, $Key2, ... ])
The two multi_xxx routines act a bit differently from the other routines. With multi_get(), you pass
a separate PageKey value and then multiple keys. The PageKey value is hashed, and that page locked.
Then that page is searched for each key. It returns a hash ref of the Key => Value items found in that
page in the cache.
The main advantage of this is speed: it helps if you need to look up a lot of items on
each call.
For instance, say you have users and a bunch of pieces of separate information for each user. On a
particular run, you need to retrieve a sub-set of that information for a user. You could do lots of
get() calls, or you could use the 'username' as the page key, and just use one multi_get() and
multi_set() call instead.
A couple of things to note:
1. This makes multi_get()/multi_set() and get()/set() incompatible. Don't mix calls to the two,
because you won't find the data you're expecting
2. The writeback and callback modes of operation do not work with multi_get()/multi_set(). Don't
attempt to use them together.
multi_set($PageKey, { $Key1 => $Value1, $Key2 => $Value2, ... }, [ \%Options ])
Store the specified key/value pairs into the cache under the given $PageKey, as sketched below.
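A small sketch of the per-user example above; the username, field names and values are illustrative:
# All of one user's items live on the page chosen by the username
$Cache->multi_set('jbloggs', {
  email    => 'jbloggs@example.com',
  language => 'en',
  theme    => 'dark',
});

# Later, fetch just the fields needed on this request from that page
my $Fields = $Cache->multi_get('jbloggs', [ 'email', 'theme' ]);
print "theme is $Fields->{theme}\n";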
INTERNAL METHODS
_expunge_all($Mode, $WB)
Expunge all items from the cache
Expunged items (that have not expired) are written back to the underlying store if write_back is
enabled
_expunge_page($Mode, $WB, $Len)
Expunge items from the current page to make space for $Len bytes of key/value data
Expunged items (that have not expired) are written back to the underlying store if write_back is
enabled
_lock_page($Page)
Lock a given page in the cache, and return an object reference that when DESTROYed, unlocks the page
INCOMPATIBLE CHANGES
• From 1.15
• Default share_file name is no longer /tmp/sharefile, but /tmp/sharefile-$pid-$time. This ensures
that different runs/processes don't interfere with each other, but means you may not connect up
to the file you expect. You should be choosing an explicit name in most cases.
On Unix systems, you can set the environment variable TMPDIR to override the default
directory of /tmp
• The new option unlink_on_exit defaults to true if you pass a filename for the share_file which
doesn't already exist. This means if you have one process that creates the file, and another that
expects the file to be there, by default it won't be.
Otherwise the defaults seem sensible to cleanup unneeded share files rather than leaving them
around to accumulate.
• From 1.29
• Default share_file name is no longer /tmp/sharefile-$pid-$time but
/tmp/sharefile-$pid-$time-$random.
• From 1.31
• Before 1.31, if you were using raw_values => 0 mode, then the write_cb would be called with raw
frozen data, rather than the thawed object. From 1.31 onwards, it correctly calls write_cb with
the thawed object value (eg what was passed to the ->set() call in the first place)
• From 1.36
• Before 1.36, an alarm(10) would be set before each attempt to lock a page. The only purpose of
this was to detect deadlocks, which should only happen if the Cache::FastMmap code was buggy, or
a callback function in get_and_set() made another call into Cache::FastMmap.
However this added unnecessary extra system calls for every lookup, and for users using
Time::HiRes, it could clobber any existing alarms that had been set with sub-second resolution.
So this has now been made an optional feature via the catch_deadlocks option passed to new.
SEE ALSO
MLDBM::Sync, IPC::MM, Cache::FileCache, Cache::SharedMemoryCache, DBI, Cache::Mmap, BerkeleyDB
Latest news/details can also be found at:
<http://cpan.robm.fastmail.fm/cachefastmmap/>
Available on github at:
<https://github.com/robmueller/cache-fastmmap/>
AUTHOR
Rob Mueller <mailto:cpan@robm.fastmail.fm>
COPYRIGHT AND LICENSE
Copyright (C) 2003-2015 by FastMail Pty Ltd
This library is free software; you can redistribute it and/or modify it under the same terms as Perl
itself.