x86-less ScyllaDB: Exploring an All-ARM Cluster

In this session, we explore an all-ARM ScyllaDB cluster with ARM-powered servers, storage, and networking. We describe the hardware setup and evaluate ScyllaDB performance to uncover what's possible in a completely x86-less cluster.


  1. x86-less ScyllaDB: Exploring an All-Arm Cluster
     Mike Bennett, Ampere | Keith McKay, ScaleFlux
  2. Mike Bennett
     ■ Solution Architect
     ■ 18 years of experience in IT, 9 in solution development
     ■ Enjoys servers with many cores
     ■ Reverse-migrated from Texas to California in 2022
  3. Keith McKay
     ■ Responsible for applications engineering at ScaleFlux
     ■ Loves non-volatile memory and storage
     ■ Born in Mountain View, CA (long before Google)
  4. Agenda
     ■ Cluster Configuration
     ■ Introduction to Ampere & ScaleFlux
     ■ Benchmark Setup & Results
     ■ Call to Action
  5. Cluster Configuration
  6. A ScyllaDB Cluster Without x86
     ■ 128 Arm cores per node: Ampere Altra Max @ 3.0GHz, Mt. Collins single socket
     ■ 256GiB DDR4-3200 DRAM
     ■ NICs: Mellanox ConnectX-6, 100Gb Ethernet
     ■ Storage: CSD-3000 Series NVMe (PCIe Gen4 x4), 8 embedded ARM cores per SSD
     ■ OS: SUSE Linux Enterprise Server 15 SP4 (kernel 5.14.21-150400.24.38-default)
     ■ ScyllaDB Enterprise 2022.1.3-0.20220922.539a55e35
     ■ ZERO data processed, moved, or stored using x86 instructions
  7. Why Is This Important?
     ■ Low Power
     ■ High CPU Density
     ■ High Performance
     Higher Efficiency → Lower TCO
  8. Introduction to Ampere Computing® & ScaleFlux®
  9. Ampere® Altra® is the World’s First Cloud-Native Processor
     ■ Ampere® Altra®: 7nm, 80 cores | Ampere® Altra® Max: 7nm, 128 cores
     ■ Predictable high performance, elastic and scalable, power efficient and sustainable
     ■ Ampere architecture: larger low-latency private caches, single-threaded cloud core, consistent operating frequency, maximum core counts, power- and area-efficient
     ■ Legacy architectures: smaller private caches, multi-threaded client core, inconsistent operating frequency, limited core counts, power- and area-inefficient
     ■ Arm Native, Cloud Native: video services, web services, data services, AI
  10. ScaleFlux: A Better SSD
     Datacenter-class NVMe SSD:
     ● U.2 and E1.S form factors
     ● 3.84TB to 16TB+ capacity
     ● Enterprise feature set (TCG Opal, SR-IOV, etc.)
     Compute capabilities:
     ● Transparent compression
     ● Data filtering
     ● Security acceleration
  11. Benchmark Results
  12. Test Scenarios
     Load Phase:
     cassandra-stress write no-warmup n=1342177280 cl=local_quorum -schema "replication(factor=3)" -mode native cql3 -pop seq=1..10214748364 -col size=gaussian(214..748,364.80) …
     ■ ~6x updates
     ■ ~1TB dataset
     ■ Shard-aware driver
     ■ Incremental Compaction (default for Enterprise)
     ■ Varied column data size
     Scenario 1: 100% Read with Gaussian Distribution
     Scenario 2: 75% Read / 25% Write with Gaussian Distribution
     Scenario 3: 50% Read / 50% Write, Dataset in Memory
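For readability, the load-phase invocation above can be laid out one option per line. The host list and thread count below are illustrative placeholders (the deck elides its remaining options with "…"); everything else is taken verbatim from the slide.

```shell
# Load phase, reflowed for readability. The -node host list and
# -rate thread count are illustrative assumptions, not from the deck.
cassandra-stress write no-warmup n=1342177280 cl=local_quorum \
  -schema "replication(factor=3)" \
  -mode native cql3 \
  -pop seq=1..10214748364 \
  -col size=gaussian(214..748,364.80) \
  -rate threads=256 \
  -node 10.0.0.1,10.0.0.2,10.0.0.3
```

This requires a reachable ScyllaDB (or Cassandra) cluster, so it is a command template rather than a standalone script.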
  13. Cluster Limits: 100% Read
     100% read over 10B records, Gaussian access pattern
     cassandra-stress read n=1000000000 cl=ONE -pop dist=GAUSSIAN(1..10214748364) -schema keyspace="keyspace1" -mode native cql3 …
     Result: sub-millisecond P99 @ 1.4 Mops/sec
  14. Cluster Limits: 75/25 Mixed R/W
     75/25 read-write over 10B records, Gaussian access pattern
     cassandra-stress mixed ratio(write=1,read=3) n=1000000000 cl=ONE -pop dist=GAUSSIAN(1..10214748364) -schema keyspace="keyspace1" -mode native cql3 …
     Result: 1.1 Mops/sec with ~100 running compactions
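The `ratio(write=1,read=3)` argument above means one write is issued for every three reads; as a quick arithmetic check, that works out to the 75/25 read/write split in the slide title:

```shell
# ratio(write=1,read=3) in cassandra-stress: 1 write per 3 reads.
# Convert the ratio to percentages of total operations.
awk 'BEGIN {
  w = 1; r = 3; total = w + r
  printf "writes: %d%%, reads: %d%%\n", 100 * w / total, 100 * r / total
}'
# prints "writes: 25%, reads: 75%"
```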
  15. Cluster Limits: 50/50 Read/Write
     50/50 read-write over 1M records (dataset in memory)
     cassandra-stress mixed ratio(write=1,read=1) n=1000000000 cl=ONE -pop dist=UNIFORM(1..1000000) -schema keyspace="keyspace1" -mode native cql3 …
     Result: 1.4 Mops/sec with a uniform distribution
  16. Call to Action
  17. Why Is This Important?
     ■ Low Power
        ■ Benchmarks used under 4W per CPU core (410-490W per server)
        ■ Rack math: figures include platform, memory, network IO, and storage IO power
     ■ High CPU Density
        ■ Fewer database nodes required, lower CapEx & OpEx
        ■ Ideally suited to ScyllaDB's shard-per-core architecture
     ■ High Performance
        ■ Better performance & economics compared to cloud deployments
        ■ Unmatched ops/sec and latency performance
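The "under 4W per CPU core" figure can be sanity-checked from the quoted per-server range, assuming the full 128 Altra Max cores per node (a back-of-envelope sketch; per the slide, whole-server power already includes platform, memory, and IO):

```shell
# Divide the quoted per-server power range (W) by 128 Altra Max cores
# to check the "under 4W per CPU core" claim.
awk 'BEGIN {
  low = 410; high = 490; cores = 128
  printf "%.2f to %.2f W per core\n", low / cores, high / cores
}'
# prints "3.20 to 3.83 W per core"
```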
  18. Ampere Developer Access Program
     ■ Get access to hardware
        ■ Remote access to bare metal servers
        ■ Trial systems shipped to you
        ■ Partner cloud programs
     ■ Solution architects available to help you get up and running!
     https://solutions.amperecomputing.com/where-to-try
  19. ScaleFlux PoC Program
     ■ Request sample units at info@scaleflux.com
        ■ Be sure to mention that you saw us at the ScyllaDB Summit!
     ■ Learn more about ScaleFlux and “A Better SSD” at https://www.scaleflux.com
     ■ Feel free to reach out to me directly using the contacts at the end of this presentation (fair warning: I love talking about storage!)
  20. Thank You | Stay in Touch
     Mike Bennett
        mbennett@amperecomputing.com
        https://github.com/mikebatwork
        https://www.linkedin.com/in/mbamike1/
     Keith McKay
        keith.mckay@scaleflux.com
        @keefmck
        https://github.com/kpmckay
        www.linkedin.com/in/kpmckay
