X = throughput, compute power (e.g. for MapReduce), storage capacity, lower latency
Consistent hashing means:
1) a large, fixed-size key-space
2) no rehashing of keys: every key is always hashed the same way, regardless of cluster membership
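The two properties above can be sketched in a few lines of Python. This is a minimal illustration, not Riak's actual Erlang implementation; the partition count and helper names are made up for the example, though the 160-bit SHA-1 key-space matches Riak's.

```python
import hashlib

RING_SIZE = 2 ** 160  # a large, fixed-size key-space (160-bit, SHA-1); never resized

def hash_key(bucket, key):
    """Hash a bucket/key pair onto the ring. The hash function never changes,
    so a key always lands at the same point: no rehashing on membership change."""
    digest = hashlib.sha1(f"{bucket}/{key}".encode()).digest()
    return int.from_bytes(digest, "big")

def owner_partitions(bucket, key, num_partitions=64, n=3):
    """Return the N successor partitions clockwise from the key's position
    (the key's preference list). Partition count here is illustrative."""
    point = hash_key(bucket, key)
    partition_size = RING_SIZE // num_partitions
    first = point // partition_size
    return [(first + i) % num_partitions for i in range(n)]
```

Because only the ring's partition-to-node assignment changes when nodes join or leave, a key's position on the ring is permanent.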
1) Client requests a key
2) A get-handler FSM starts up to service the request
3) Hashes the key to its owner partitions (N=3)
4) Sends a “get” request to each of those partitions
5) Waits for R replies that concur (R=2)
6) Resolves the object, replies to the client
7) The third reply may come back at any time, but the FSM replies to the client as soon as the quorum is satisfied or violated
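The get FSM above can be sketched as a simple loop. This is a toy model, assuming partitions are plain dicts and that "concur" means equal values (real Riak compares vclocks); the early return models step 7, replying as soon as the quorum is decided.

```python
def quorum_get(partitions, preference_list, bucket, key, r=2):
    """Simulate the get FSM: ask all N owner partitions, reply as soon as
    R replies concur. Late replies past the quorum are simply ignored.
    `partitions` maps partition id -> {(bucket, key): value}; illustrative only."""
    replies = []
    for pid in preference_list:          # step 4: send "get" to each owner
        value = partitions[pid].get((bucket, key))
        replies.append(value)
        concurring = [v for v in replies if v == replies[0]]
        if len(concurring) >= r:         # step 5: R concurring replies
            return replies[0]            # step 6: resolve, reply to client
    return None                          # quorum violated: not enough agreement
```

Note the early return: once two of three partitions agree, the client gets its answer without waiting for the straggler.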
*** make sure to talk about LWW, and commit hooks -- tell them to ignore the vclock business ***
“Quorums”? When I say “quorums” I mean the constraints (or lack thereof) your application puts on request consistency.
Remember that requests contact all participant partitions/vnodes. No computer system is 100% reliable, so there will be times when increased latency or hardware failure makes a node unavailable. By unavailable, I mean requests time out, the network partitions, or there’s an actual physical outage.
FT = fault-tolerance, C = consistency
Strong consistency (as opposed to strict) means that the participants in each read or write quorum overlap. The typical example is N=3, R=2, W=2. In all successful read requests, at least one of the read partitions will be one that accepted the latest write.
However, writes are a little more complicated to track than reads.
When there’s a detectable node outage/partition, writes will be sent to fallbacks (hinted handoff), which means that Riak is HIGHLY write-available.
Also, there’s an implied R quorum because the internal Erlang client has to fetch the object to update it and the vclock.
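Hinted handoff can be sketched as a routing decision: each unreachable primary is replaced by the next available vnode on the ring, which stores the write along with a “hint” naming the intended owner so the data can be handed back later. This is a toy model with made-up names, assuming at least one fallback is reachable.

```python
def route_write(preference_list, ring, down):
    """Return (target, hint) pairs for a write. Healthy primaries get the
    write directly (hint=None); for each down primary, walk clockwise to
    the next vnode that is up, unused, and not itself a primary."""
    targets = []
    for primary in preference_list:
        if primary not in down:
            targets.append((primary, None))       # normal write to primary
        else:
            idx = (ring.index(primary) + 1) % len(ring)
            while (ring[idx] in down
                   or ring[idx] in preference_list
                   or any(t == ring[idx] for t, _ in targets)):
                idx = (idx + 1) % len(ring)
            targets.append((ring[idx], primary))  # fallback carries a hint
    return targets
```

Because a fallback can always accept the write on a primary's behalf, writes succeed even during partitions, which is exactly what makes the system highly write-available.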
Why don’t we outright reclaim the space? Ordering is hard to determine since deletes require no vclock. We prefer not to lose data when there is an issue of contention.
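The usual mechanism here is a tombstone: the delete writes a marker instead of reclaiming space, and a later sweep reaps the marker after a grace period. A minimal sketch, with invented names and a dict standing in for the backend:

```python
import time

def delete(store, key):
    """A delete writes a tombstone rather than reclaiming space immediately;
    with no vclock on the delete, its ordering against concurrent writes is
    ambiguous, so the tombstone keeps the door open for later resolution."""
    store[key] = {"tombstone": True, "ts": time.time()}

def reap(store, grace_seconds=3.0):
    """A later sweep removes tombstones once the grace period has elapsed."""
    now = time.time()
    expired = [k for k, v in store.items()
               if isinstance(v, dict) and v.get("tombstone")
               and now - v["ts"] >= grace_seconds]
    for key in expired:
        del store[key]
```

During the grace period, a concurrent write to the same key can still land and win, which is the data-loss scenario the tombstone guards against.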
This is probably one of the easiest MapReduce queries/jobs you can submit. It simply returns the value of every key in the bucket, along with its bucket/key/vclock and metadata.
Instead of specifying the function inline, you can also store it under a bucket/key, and have Riak retrieve and execute it automatically.
A query that makes use of the “arg” in the map phase, named functions, and a reduce phase.
Finally, here’s how you can submit any of these queries. Use @- to tell curl that the job body follows on stdin, terminated by Ctrl-D.
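For reference, here is the shape of such a job as you would build it before POSTing to Riak's HTTP /mapred endpoint. The bucket name, the map function's source, and the arg value are illustrative; the inline map source could equally be replaced by a bucket/key reference to a stored function, as mentioned above.

```python
import json

# A MapReduce job using "arg" in the map phase, a named built-in reduce
# function, and a whole bucket as input. All data values are examples.
job = {
    "inputs": "goog",                      # run over every key in this bucket
    "query": [
        {"map": {
            "language": "javascript",
            "source": "function(v, keyData, arg) {"
                      "  var d = JSON.parse(v.values[0].data);"
                      "  return d.High > arg ? [v.key] : []; }",
            "arg": 600.0                   # passed as the map function's third argument
        }},
        {"reduce": {
            "language": "javascript",
            "name": "Riak.reduceSort"      # a named (stored/built-in) function
        }}
    ]
}

payload = json.dumps(job)  # this JSON body is what follows curl's @- marker
```

Posting `payload` with Content-Type application/json to /mapred runs the job; swapping `"source"` for `"bucket"`/`"key"` fields is how you invoke a function stored in Riak itself.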