PRESENTED BY
What is RedisTimeSeries?
A time series is a series of data points indexed (or listed or graphed)
in time order. Most commonly, a time series is a sequence taken at
successive equally spaced points in time. Thus it is a sequence
of discrete-time data.
— Wikipedia
* Time series data is append-only
** A sample is a pair: <time, value>
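The definition above can be sketched as an append-only list of samples. `TimeSeries`, `add`, and `range` are illustrative names for this sketch only; RedisTimeSeries exposes the corresponding operations as the `TS.ADD` and `TS.RANGE` commands.

```python
from bisect import bisect_left, bisect_right

class TimeSeries:
    """Minimal sketch: an append-only sequence of (timestamp, value) samples."""

    def __init__(self):
        self.samples = []  # kept in time order, since writes are append-only

    def add(self, timestamp: int, value: float):
        """Append one sample; analogous to TS.ADD."""
        self.samples.append((timestamp, value))

    def range(self, start: int, end: int):
        """Return samples with start <= timestamp <= end; analogous to TS.RANGE."""
        ts = [t for t, _ in self.samples]
        return self.samples[bisect_left(ts, start):bisect_right(ts, end)]
```

For equally spaced data, successive timestamps differ by a constant interval — the property the compression scheme on the following slides exploits.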
RedisTimeSeries 1.2
Headlines
● Compression added
○ Reduces memory usage by up to 98%
○ Improves read performance
○ Based on the Gorilla paper by Facebook
● Stable ingestion time
○ Independent of the number of data points in a time series
● Reviewed API
○ Performance improvements
○ Removed ambiguity
● Extended client support
Compression
Timestamp - DoubleDelta - variable-length encoding
If ΔΔ is zero, store a single ‘0’ bit
Else if ΔΔ is in [-63, 64], store ‘10’ followed by the value (7 bits)
Else if ΔΔ is in [-512, 511], store ‘110’ followed by the value (10 bits)
Else if ΔΔ is in [-4096, 4095], store ‘1110’ followed by the value (13 bits)
Else if ΔΔ is in [-32768, 32767], store ‘11110’ followed by the value (16 bits)
Else store ‘11111’ followed by the value using 64 bits
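The rules above can be sketched in Python. How signed ΔΔ values are packed into the bit field (here, biased by the range's lower bound) is an assumption for illustration, not necessarily the exact RedisTimeSeries bit layout:

```python
def encode_delta_of_delta(dd: int) -> str:
    """Variable-length encode one delta-of-deltas as a bit string (sketch)."""
    if dd == 0:
        return "0"
    # (prefix, lower bound, upper bound, payload width) per the slide
    for prefix, lo, hi, bits in (("10", -63, 64, 7),
                                 ("110", -512, 511, 10),
                                 ("1110", -4096, 4095, 13),
                                 ("11110", -32768, 32767, 16)):
        if lo <= dd <= hi:
            # assumed packing: store dd biased by lo as an unsigned value
            return prefix + format(dd - lo, f"0{bits}b")
    # fallback: full 64-bit two's-complement value
    return "11111" + format(dd & (2**64 - 1), "064b")

def encode_timestamps(timestamps):
    """Encode all timestamps after the first (which is stored raw, not shown)."""
    bits = []
    prev_delta = 0
    for i in range(1, len(timestamps)):
        delta = timestamps[i] - timestamps[i - 1]
        bits.append(encode_delta_of_delta(delta - prev_delta))
        prev_delta = delta
    return "".join(bits)
```

With a fixed sampling interval, every ΔΔ after the first is zero, so each additional sample's timestamp costs a single bit — which is why equally spaced series compress so well.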
Compression
Value - XOR - variable-length encoding
If XOR is zero (same value), store a single ‘0’ bit
Else calculate the number of leading and trailing zeros in the XOR, store bit ‘1’, then:
If the block of meaningful bits falls within the previous block of meaningful bits,
store control bit ‘0’
Else store control bit ‘1’,
store the number of leading zeros in the next 5 bits,
and store the length of the meaningful XOR value in the next 6 bits.
Finally, store the meaningful bits of the XOR value.
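A sketch of the value path, assuming 64-bit IEEE-754 doubles. It follows the Gorilla scheme as described above; the helper names and exact bit packing are illustrative, not RedisTimeSeries internals:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a double as its 64-bit integer representation."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def encode_value(prev: float, curr: float, prev_leading: int, prev_trailing: int):
    """Encode one value against its predecessor.

    Returns (bit string, leading zeros, trailing zeros) so the caller can
    carry the meaningful-bit window forward to the next sample.
    """
    xor = float_to_bits(prev) ^ float_to_bits(curr)
    if xor == 0:
        return "0", prev_leading, prev_trailing          # same value: one bit
    leading = 64 - xor.bit_length()                      # leading zeros
    trailing = (xor & -xor).bit_length() - 1             # trailing zeros
    meaningful = 64 - leading - trailing
    if leading >= prev_leading and trailing >= prev_trailing:
        # meaningful bits fit inside the previous window: '1' + control bit '0'
        m = 64 - prev_leading - prev_trailing
        return "10" + format(xor >> prev_trailing, f"0{m}b"), prev_leading, prev_trailing
    # new window: '1' + control bit '1' + 5-bit leading count + 6-bit length
    bits = ("11" + format(leading, "05b") + format(meaningful, "06b")
            + format(xor >> trailing, f"0{meaningful}b"))
    return bits, leading, trailing
```

Unchanged values cost one bit, and slowly varying values share a meaningful-bit window, so typical sensor data needs far fewer than 64 bits per sample.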
Performance v1.0.3 vs v1.2
Datasets and ingestion overall throughput

Test Case                                                #Samples (Millions)   v1.0.3       v1.2         % diff
30 days interval for 100 devices x 10 metrics (card 1K)  259.20                354,812.17   363,562.25    2.47%
30 days interval for 1K devices x 10 metrics (card 10K)  2,592.00              349,522.72   361,519.57    3.43%
90 days interval for 100 devices x 10 metrics (card 1K)  777.60                352,025.35   343,665.92   -2.37%
% diff cardinality 1K vs 10K                                                   -1.49%       -0.56%

No degradation by compression; no degradation by cardinality.
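The % diff columns above can be reproduced from the throughput numbers; `pct_diff` is an illustrative helper, not part of the benchmark harness:

```python
def pct_diff(old: float, new: float) -> float:
    """Relative change in percent, rounded to two decimals as in the table."""
    return round((new - old) / old * 100, 2)

# Ingestion throughput per test case: (v1.0.3, v1.2)
rows = {
    "30d x 100 devices (1K)": (354_812.17, 363_562.25),
    "30d x 1K devices (10K)": (349_522.72, 361_519.57),
    "90d x 100 devices (1K)": (352_025.35, 343_665.92),
}
diffs = {name: pct_diff(old, new) for name, (old, new) in rows.items()}
```

The cardinality row compares the 1K and 10K test cases within each version, e.g. `pct_diff(354_812.17, 349_522.72)` for v1.0.3.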