3. What is MemcacheD?
● Fast and multi-threaded
● Stores data in RAM only ( KEY | VALUE )
● Excellent read performance
● Great write performance
● APIs available for most languages
● Data is distributed across multiple servers
● Not a replacement for your DB
4. Features
● Least Recently Used (LRU) cache:
-- LRU items are evicted when necessary
● Very low CPU overhead
● Minimal impact from a node failure
● Multi-gets:
-- Parallel fetches of KEY|VALUE pairs from multiple servers in fewer operations than single gets
● Horizontally scalable:
-- More servers create more capacity
-- No single point of failure
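The multi-get idea can be sketched as grouping keys by their owning server, so each server receives one batched request instead of many single gets. This is a toy illustration: the server names and the md5-plus-modulo scheme are assumptions, not any particular client's API.

```python
import hashlib

# Hypothetical three-node pool; hostnames are placeholders.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key: str) -> str:
    """Pick the owning server by hashing the key and taking it modulo the pool size."""
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

def group_keys(keys):
    """Group keys by owning server, so each server gets one multi-get request."""
    batches = {}
    for key in keys:
        batches.setdefault(server_for(key), []).append(key)
    return batches

batches = group_keys(["user:1", "user:2", "user:3", "user:4"])
# Each server then receives a single 'get key1 key2 ...' line for its batch.
```

With N keys spread over M servers, this needs at most M round trips instead of N.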
5. Why & When to use MemcacheD?
● To reduce database server load by caching data
● The database is getting lots of 'SELECT' requests (requires extremely fast reads)
● To get maximum "scale out" from minimum hardware
● To store session data
● For dynamic data that changes infrequently
6. How MemcacheD works
● The server stores data in a hash table as (KEY | VALUE) pairs.
● The client calculates a hash of the key and runs a modulo to figure out which server owns it.
● Once the server is identified, the client sends its request.
● The server performs a hash-key lookup to find the actual data.
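The client-side steps above can be sketched in Python. This is a minimal illustration under assumptions: the server addresses are placeholders, and real clients often use consistent hashing rather than a plain modulo so that adding a node remaps fewer keys.

```python
import hashlib

# Hypothetical server pool; addresses are placeholders.
SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key: str) -> str:
    # 1. Client calculates a hash of the key.
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    # 2. Modulo over the server count selects the owning node.
    return SERVERS[digest % len(SERVERS)]

# The same key always maps to the same server, so every get/set
# for "session:abc123" goes straight to the right node.
print(pick_server("session:abc123"))
```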
7. Basic Layout
User request → MemCacheD:
-- First look up in memcache; if present, return it.
User request → Database:
-- Else query the database, store the result in memcache, and return it.
-- If the data changes, delete it from the cache.
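This cache-aside flow can be sketched with a plain dict standing in for memcached (hypothetical stand-in; with a real client such as pymemcache the dict operations would become client get/set/delete calls):

```python
cache = {}                                  # stand-in for memcached
database = {"user:1": {"name": "Alice"}}    # toy stand-in for the DB

def fetch(key):
    value = cache.get(key)       # 1. first look up in memcache
    if value is not None:
        return value             # 2. present -> return it
    value = database.get(key)    # 3. else query the database
    if value is not None:
        cache[key] = value       # 4. store it in memcache
    return value                 # 5. and return it

def update(key, value):
    database[key] = value
    cache.pop(key, None)         # data changed -> delete it from the cache

fetch("user:1")   # miss: reads the DB and fills the cache
fetch("user:1")   # hit: served from the cache
```

The delete-on-write step keeps the cache from serving stale data; the next read repopulates it.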
8. Memcached Limits
● A single key cannot be more than 250 bytes.
-- All characters allowed except whitespace or control chars
● A single value cannot contain more than 1 MB of data.
-- Arbitrary data
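These limits can be checked client-side before sending a request. The helper below is an assumption for illustration, not part of memcached itself; it mirrors the default limits stated above.

```python
MAX_KEY_BYTES = 250
MAX_VALUE_BYTES = 1024 * 1024  # 1 MB default value limit

def valid_key(key: str) -> bool:
    """True if the key fits the 250-byte limit and contains
    no whitespace or control characters."""
    data = key.encode("utf-8")
    if not data or len(data) > MAX_KEY_BYTES:
        return False
    # Reject space (32), ASCII control chars (< 32) and DEL (127).
    return all(b > 32 and b != 127 for b in data)

valid_key("user:42")   # True
valid_key("bad key")   # False: contains a space
valid_key("x" * 251)   # False: over 250 bytes
```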
11. Configuration Options
● Memory : Default is 64MB
● Simultaneous incoming connections : Default is 1024
● Port number : Default port is 11211
● Type of process - foreground or daemon
● Threads
● TCP / UDP
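These options map to startup flags; a hedged example invocation (flag meanings as documented by `memcached -h`, values here illustrative):

```shell
# -m  memory limit in MB (default 64)
# -c  max simultaneous connections (default 1024)
# -p  TCP port (default 11211)
# -d  run as a daemon (omit to stay in the foreground)
# -t  number of worker threads
# -U 0 disables the UDP listener
memcached -m 64 -c 1024 -p 11211 -d -t 4 -U 0
```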
12. Storage and Retrieval Commands
➢ 'get' - retrieves KEY|VALUE pairs
➢ 'set' - stores data, possibly overwriting existing data
➢ 'add' - stores data only if the key does NOT already exist
➢ 'replace' - stores data only if the key already exists
➢ 'append' - appends data after the last byte of an existing value
➢ 'prepend' - the inverse of 'append'
➢ 'cas' (check-and-set) - stores data only if it has NOT changed since we last read it
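On the wire, the text protocol frames each storage command as a header line (`set <key> <flags> <exptime> <bytes>`) followed by the data block. A minimal sketch of building these requests (illustrative framing only, not a full client):

```python
def set_command(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Build a memcached text-protocol 'set' request:
    header line, then the data block, each CRLF-terminated."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n"
    return header.encode("utf-8") + value + b"\r\n"

def get_command(key: str) -> bytes:
    """Build a memcached text-protocol 'get' request."""
    return f"get {key}\r\n".encode("utf-8")

set_command("greeting", b"hello")
# -> b'set greeting 0 0 5\r\nhello\r\n'
```

The server answers `STORED` on success; 'add', 'replace', 'append', and 'prepend' use the same framing with a different command word.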
13. Administering MemcacheD
● Connect with telnet
-- telnet localhost 11211
-- 'stats' - returns current statistics
-- We can run 'get KEY' | 'delete KEY'
● Use 'libmemcached-tools'
-- 'memcstat --servers localhost[,host2,host3]' : Reveals
stats of target server(s)
-- stats include : 'bytes', 'limit_maxbytes', 'curr_items', 'get_*',
etc.
14. Tips for optimization
● By default MemcacheD implements NO AUTH, so protecting it is important
● If MemcacheD is exposed to the internet, use SASL auth
● Use a non-standard port
● Run MemcacheD in a DMZ environment
● Run as a non-privileged user to minimize potential damage
● Pre-warm your cache using scripts
In-memory DB ( KEY | VALUE pairs ): all operations are conducted in memory, which makes it extremely fast.
L1 cache = 1 ns
L2 cache = 4.7 ns
RAM = 83 ns
Hard disk = 13.7 ms
RAM is roughly 165,000 times faster than disk.
Key: 250 bytes (max) - all characters except whitespace or control chars
Value: 1 MB (max) of arbitrary data
Caching is mostly used for data that is accessed repeatedly: instead of recalculating it or retrieving it from disk each time, which is slow, we can look it up directly in the cache, which is much faster.