Gluster
Tutorial
Jeff Darcy, Red Hat
LISA 2016 (Boston)
Agenda
▸ Alternating info-dump and hands-on
▹ This is part of the info-dump ;)
▸ Gluster basics
▸ Initial setup
▸ Extra features
▸ Maintenance and trouble-shooting
Who Am I?
▸ One of three project-wide architects
▸ First Red Hat employee to be seriously
involved with Gluster (before
acquisition)
▸ Previously worked on NFS (v2..v4),
Lustre, PVFS2, others
▸ General distributed-storage blatherer
▹ http://pl.atyp.us / @Obdurodon
TEMPLATE CREDITS
Special thanks to all the people who made and released these
awesome resources for free:
▸ Presentation template by SlidesCarnival
▸ Photographs by Death to the Stock Photo (license)
Some Terminology
▸ A brick is simply a directory on a server
▸ We use translators to combine bricks
into more complex subvolumes
▹ For scale, replication, sharding, ...
▸ This forms a translator graph,
contained in a volfile
▸ Internal daemons (e.g. self heal) use the
same bricks arranged into slightly
different volfiles
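For a feel of what a volfile contains, here is a rough hand-written sketch of a replicated graph fragment (real volfiles are generated by GlusterD and contain many more translators and options):

volume fubar-client-0
    type protocol/client
    option remote-host serverA
    option remote-subvolume /brick1
end-volume

volume fubar-client-1
    type protocol/client
    option remote-host serverB
    option remote-subvolume /brick2
end-volume

volume fubar-replicate-0
    type cluster/replicate
    subvolumes fubar-client-0 fubar-client-1
end-volume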
Hands On: Getting Started
1. Use the RHGS test drive
▹ http://bit.ly/glustertestdrive
2. Start a Fedora/CentOS VM
▹ Use yum/dnf to install gluster
▹ base, libs, server, fuse, client-xlators, cli (see the install example after this list)
3. Docker Docker Docker
▹ https://github.com/gluster/gluster-containers
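For option 2, a typical install on Fedora/CentOS looks something like this; exact package names can vary slightly between releases:

dnf install glusterfs glusterfs-libs glusterfs-server \
    glusterfs-fuse glusterfs-client-xlators glusterfs-cli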
Brick / Translator Example
▸ Four bricks on four servers: Server A /brick1, Server B /brick2, Server C /brick3, Server D /brick4
Brick / Translator Example
▸ Server A /brick1 + Server B /brick2 → Replica Set 1 (a subvolume)
▸ Server C /brick3 + Server D /brick4 → Replica Set 2 (also a subvolume)
Brick / Translator Example
▸ Replica Set 1 and Replica Set 2 are combined into volume “fubar”
Translator Patterns
▸ Fan-out or “cluster” translators combine several subvolumes into one, e.g. AFR, EC, DHT
▹ example: Server A /brick1 + Server B /brick2 → Replica Set 1
▸ Pass-through translators sit on a single subvolume, e.g. performance translators
▹ example: md-cache stacked on top of AFR
Access Methods
▸ Clients: FUSE, Samba, Ganesha, TCMU, GFAPI
▸ Internal daemons: self heal, rebalance, quota, snapshot, bitrot
GlusterD
▸ Management daemon
▸ Maintains membership, detects server
failures
▸ Stages configuration changes
▸ Starts and monitors other daemons
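Before any of the gluster CLI commands below will work, glusterd must be running on every server; on systemd-based distros that is typically:

systemctl enable --now glusterd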
Simple Configuration Example
serverA# gluster peer probe serverB
serverA# gluster volume create fubar \
    replica 2 \
    serverA:/brick1 serverB:/brick2
serverA# gluster volume start fubar
clientX# mount -t glusterfs serverA:fubar \
    /mnt/gluster_fubar
Hands On: Connect Servers
[root@vagrant-testVM glusterfs]# gluster peer probe
192.168.121.66
peer probe: success.
[root@vagrant-testVM glusterfs]# gluster peer status
Number of Peers: 1
Hostname: 192.168.121.66
Uuid: 95aee0b5-c816-445b-8dbc-f88da7e95660
State: Accepted peer request (Connected)
Hands On: Server Volume Setup
[root@vagrant-testVM glusterfs]# gluster volume create fubar 
replica 2 testvm:/d/backends/fubar{0,1} force
volume create: fubar: success: please start the volume to
access data
[root@vagrant-testVM glusterfs]# gluster volume info fubar
... (see for yourself)
[root@vagrant-testVM glusterfs]# gluster volume status fubar
Volume fubar is not started
Hands On: Server Volume Setup
[root@vagrant-testVM glusterfs]# gluster volume start fubar
volume start: fubar: success
[root@vagrant-testVM glusterfs]# gluster volume status fubar
Status of volume: fubar
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick testvm:/d/backends/fubar0 49152 0 Y 13104
Brick testvm:/d/backends/fubar1 49153 0 Y 13133
Self-heal Daemon on localhost N/A N/A Y 13163
Task Status of Volume fubar
------------------------------------------------------------------------------
There are no active volume tasks
Hands On: Client Volume Setup
[root@vagrant-testVM glusterfs]# mount -t glusterfs testvm:fubar 
/mnt/glusterfs/0
[root@vagrant-testVM glusterfs]# df /mnt/glusterfs/0
Filesystem 1K-blocks Used Available Use% Mounted on
testvm:fubar 5232640 33280 5199360 1% /mnt/glusterfs/0
[root@vagrant-testVM glusterfs]# ls -a /mnt/glusterfs/0
. ..
[root@vagrant-testVM glusterfs]# ls -a /d/backends/fubar0
. .. .glusterfs
Hands On: It’s a Filesystem!
▸ Create some files
▸ Create directories, symlinks, ...
▸ Rename, delete, ...
▸ Test performance
▹ OK, not yet
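A few illustrative commands to try on the mount point (paths follow the earlier hands-on example):

cd /mnt/glusterfs/0
echo hello > file1
mkdir dir1
ln -s ../file1 dir1/link1
cat dir1/link1
mv file1 file2
rm -rf dir1 file2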
Distribution and Rebalancing
● Each brick “claims” a range of hash values, e.g. Server X claims roughly 0 through 0x7fffffff and Server Y claims 0x80000000 through 0xffffffff
○ The collection of claims is called a layout
● Files are hashed and placed on the brick claiming that range
● When bricks are added, claims are adjusted to minimize data motion
Distribution and Rebalancing
● Before adding a brick: Server X claims 0 to 0x80000000, Server Y claims 0x80000000 to 0xffffffff
● After adding Server Z: X claims 0 to 0x55555555, Z claims 0x55555555 to 0xaaaaaaaa, Y claims 0xaaaaaaaa to 0xffffffff
● Only files whose hashes fall in the reassigned ranges move (X→Z and Y→Z)
Sharding
▸ Divides files into chunks
▸ Each chunk is placed separately
according to hash
▸ High probability (not certainty) of
chunks being on different subvolumes
▸ Spreads capacity and I/O across
subvolumes
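Sharding is a per-volume option; assuming a release with the shard translator, enabling it and choosing a chunk size looks roughly like this (best done before data is written):

gluster volume set fubar features.shard on
gluster volume set fubar features.shard-block-size 64MB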
Hands On: Adding a Brick
[root@vagrant-testVM glusterfs]# gluster volume create xyzzy
testvm:/d/backends/xyzzy{0,1}
[root@vagrant-testVM glusterfs]# getfattr -d -e hex 
-m trusted.glusterfs.dht /d/backends/xyzzy{0,1}
# file: d/backends/xyzzy0
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
# file: d/backends/xyzzy1
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
Hands On: Adding a Brick
[root@vagrant-testVM glusterfs]# gluster volume add-brick xyzzy 
testvm:/d/backends/xyzzy2
volume add-brick: success
[root@vagrant-testVM glusterfs]# gluster volume rebalance xyzzy 
fix-layout start
volume rebalance: xyzzy: success: Rebalance on xyzzy has been started
successfully. Use rebalance status command to check status of the
rebalance process.
ID: 88782248-7c12-4ba8-97f6-f5ce6815963
Hands On: Adding a Brick
[root@vagrant-testVM glusterfs]# getfattr -d -e hex -m 
trusted.glusterfs.dht /d/backends/xyzzy{0,1,2}
# file: d/backends/xyzzy0
trusted.glusterfs.dht=0x00000001000000000000000055555554
# file: d/backends/xyzzy1
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff
# file: d/backends/xyzzy2
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9
Split Brain (problem definition)
▸ “Split brain” is when we don’t have
enough information to determine
correct recovery action
▸ Can be caused by node failure or
network partition
▸ Every distributed data store has to
prevent and/or deal with it
How Replication Works
▸ Client sends operation (e.g. write) to all
replicas directly
▸ Coordination: pre-op, post-op, locking
▹ enables recovery in case of failure
▸ Self-heal (repair) usually done by
internal daemon
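To see what the self-heal daemon still has to repair, or to kick off a heal by hand, use the heal CLI, e.g.:

gluster volume heal fubar info
gluster volume heal fubar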
Split Brain (how it happens)
▸ A network partition splits the replicas: Client X can reach only Server A, Client Y can reach only Server B
▸ Both sides keep accepting writes, so the copies on A and B diverge
Split Brain (what it looks like)
[root@vagrant-testVM glusterfs]# ls /mnt/glusterfs/0
ls: cannot access /mnt/glusterfs/0/best-sf: Input/output error
best-sf
[root@vagrant-testVM glusterfs]# cat /mnt/glusterfs/0/best-sf
cat: /mnt/glusterfs/0/best-sf: Input/output error
[root@vagrant-testVM glusterfs]# cat /d/backends/fubar0/best-sf
star trek
[root@vagrant-testVM glusterfs]# cat /d/backends/fubar1/best-sf
star wars
What the...?
Split Brain (dealing with it)
▸ Primary mechanism: quorum
▹ server side, client side, or both
▹ arbiters
▸ Secondary: rule-based resolution
▹ e.g. largest, latest timestamp
▹ Thanks, Facebook!
▸ Last choice: manual repair
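A hedged sketch of the second and third options, assuming a release that has these commands and options (names come from the AFR feature set, not from the slides):

# rule-based: automatically prefer the newest copy
gluster volume set fubar cluster.favorite-child-policy mtime
# manual, per file: pick the newest copy, or name a source brick
gluster volume heal fubar split-brain latest-mtime /best-sf
gluster volume heal fubar split-brain source-brick testvm:/d/backends/fubar1 /best-sf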
Server Side Quorum
▸ Volume with Bricks A, B, C; a partition isolates Brick C
▸ Majority side (A and B): bricks stay up, Client X’s writes succeed
▸ Minority side: Brick C is forced down, so Client Y has no servers
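Server-side quorum is enabled per volume, with a cluster-wide ratio; roughly:

gluster volume set fubar cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%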
Client Side Quorum
▸ Same partition, but all bricks stay up
▸ Client X, which can reach a quorum of bricks: writes succeed
▸ Client Y, which cannot: writes rejected locally (EROFS)
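Client-side quorum is also a volume option; with “auto”, a replica-2 volume only accepts writes while the first brick is among the reachable ones:

gluster volume set fubar cluster.quorum-type auto
# or require a fixed number of live bricks
gluster volume set fubar cluster.quorum-type fixed
gluster volume set fubar cluster.quorum-count 2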
Erasure Coding
▸ Encode N input blocks into N+K output
blocks, so that original can be recovered
from any N.
▸ RAID is erasure coding with K=1 (RAID 5)
or K=2 (RAID 6)
▸ Our implementation mostly has the
same flow as replication
Erasure Coding (diagram slides)
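A dispersed (erasure-coded) volume is created with the disperse/redundancy keywords; for example, 4 data + 2 redundancy across six bricks (hostnames here are placeholders):

gluster volume create ec0 disperse 6 redundancy 2 \
    server{1..6}:/bricks/ec0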
BREAK
Quota
▸ Gluster supports directory-level quota
▸ For nested directories, lowest applicable
limit applies
▸ Soft and hard limits
▹ Exceeding soft limit gets logged
▹ Exceeding hard limit gets EDQUOT
Quota
▸ Problem: global vs. local limits
▹ quota is global (per volume)
▹ files are pseudo-randomly distributed
across bricks
▸ How do we enforce this?
▸ Quota daemon exists to handle this
coordination
Hands On: Quota
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy enable
volume quota : success
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy soft-timeout 0
volume quota : success
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy hard-timeout 0
volume quota : success
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy 
limit-usage /john 100MB
volume quota : success
Hands On: Quota
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy list
Path Hard-limit Soft-limit
-----------------------------------------------------------------
/john 100.0MB 80%(80.0MB)
Used Available Soft-limit exceeded? Hard-limit exceeded?
--------------------------------------------------------------
0Bytes 100.0MB No No
Hands On: Quota
[root@vagrant-testVM glusterfs]# dd if=/dev/zero 
of=/mnt/glusterfs/0/john/bigfile bs=1048576 count=85 conv=sync
85+0 records in
85+0 records out
89128960 bytes (89 MB) copied, 1.83037 s, 48.7 MB/s
[root@vagrant-testVM glusterfs]# grep -i john /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/d-backends-xyzzy0.log:[2016-11-29 14:31:44.581934]
A [MSGID: 120004] [quota.c:4973:quota_log_usage] 0-xyzzy-quota: Usage
crossed soft limit: 80.0MB used by /john
Hands On: Quota
[root@vagrant-testVM glusterfs]# dd if=/dev/zero 
of=/mnt/glusterfs/0/john/bigfile2 bs=1048576 count=85 conv=sync
dd: error writing '/mnt/glusterfs/0/john/bigfile2': Disk quota exceeded
[root@vagrant-testVM glusterfs]# gluster volume quota xyzzy list | cut -c
66-
Used Available Soft-limit exceeded? Hard-limit exceeded?
--------------------------------------------------------------
101.9MB 0Bytes Yes Yes
Snapshots
▸ Gluster supports read-only snapshots
and writable clones of snapshots
▸ Also, snapshot restores
▸ Support is based on / tied to LVM thin
provisioning
▹ originally supposed to be more
platform-agnostic
▹ maybe some day it really will be
Hands On: Snapshots
[root@vagrant-testVM glusterfs]# fallocate -l $((100*1024*1024)) 
/tmp/snap-brick0
[root@vagrant-testVM glusterfs]# losetup --show -f /tmp/snap-brick0 
/dev/loop3
[root@vagrant-testVM glusterfs]# vgcreate snap-vg0 /dev/loop3
Volume group "snap-vg0" successfully created
Hands On: Snapshots
[root@vagrant-testVM glusterfs]# lvcreate -L 50MB -T /dev/snap-vg0/thinpool
Rounding up size to full physical extent 52.00 MiB
Logical volume "thinpool" created.
[root@vagrant-testVM glusterfs]# lvcreate -V 200MB -T /dev/snap-vg0/thinpool
-n snap-lv0
Logical volume "snap-lv0" created.
[root@vagrant-testVM glusterfs]# mkfs.xfs /dev/snap-vg0/snap-lv0
...
[root@vagrant-testVM glusterfs]# mount /dev/snap-vg0/snap-lv0
/d/backends/xyzzy0
...
Hands On: Snapshots
[root@vagrant-testVM glusterfs]# gluster volume create xyzzy 
testvm:/d/backends/xyzzy{0,1} force
[root@vagrant-testVM glusterfs]# echo hello > /mnt/glusterfs/0/file1
[root@vagrant-testVM glusterfs]# echo hello > /mnt/glusterfs/0/file2
[root@vagrant-testVM glusterfs]# gluster snapshot create snap1 xyzzy
snapshot create: success: Snap snap1_GMT-2016.11.29-14.57.11 created
successfully
[root@vagrant-testVM glusterfs]# echo hello > /mnt/glusterfs/0/file3
Hands On: Snapshots
[root@vagrant-testVM glusterfs]# gluster snapshot activate 
snap1_GMT-2016.11.29-14.57.11
Snapshot activate: snap1_GMT-2016.11.29-14.57.11: Snap activated
successfully
[root@vagrant-testVM glusterfs]# mount -t glusterfs 
testvm:/snaps/snap1_GMT-2016.11.29-14.57.11/xyzzy /mnt/glusterfs/1
[root@vagrant-testVM glusterfs]# ls /mnt/glusterfs/1
file1 file2
[root@vagrant-testVM glusterfs]# echo hello > /mnt/glusterfs/1/file3
-bash: /mnt/glusterfs/1/file3: Read-only file system
Hands On: Snapshots
[root@vagrant-testVM glusterfs]# gluster snapshot clone clone1 
snap1_GMT-2016.11.29-14.57.11
snapshot clone: success: Clone clone1 created successfully
[root@vagrant-testVM glusterfs]# gluster volume start clone1
volume start: clone1: success
[root@vagrant-testVM glusterfs]# mount -t glusterfs testvm:/clone1 
/mnt/glusterfs/2
[root@vagrant-testVM glusterfs]# echo goodbye > /mnt/glusterfs/2/file3
Hands On: Snapshots
# Unmount and stop clone.
# Stop original volume - but leave snapshot activated!
[root@vagrant-testVM glusterfs]# gluster snapshot restore snap1_GMT-2016.11.29-14.57.11
Restore operation will replace the original volume with the snapshotted volume. Do you still want to
continue? (y/n) y
Snapshot restore: snap1_GMT-2016.11.29-14.57.11: Snap restored successfully
[root@vagrant-testVM glusterfs]# gluster volume start xyzzy
volume start: xyzzy: success
[root@vagrant-testVM glusterfs]# ls /mnt/glusterfs/0
file1 file2
BREAK
Other Features
▸ Geo-replication
▸ Bitrot detection
▸ Transport security
▸ Encryption, compression/dedup etc. can
be done locally on bricks
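As one example, transport security is just a set of volume options (this assumes TLS certificates and keys are already in place on the servers and clients):

gluster volume set fubar client.ssl on
gluster volume set fubar server.ssl on
gluster volume set fubar auth.ssl-allow 'client1,client2'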
Gluster 4.x
▸ GlusterD 2
▹ higher scale + interfaces + smarts
▸ Server-side replication
▸ DHT improvements for scale
▸ More multitenancy
▹ subvolume mounts, throttling/QoS
Thank You!
http://gluster.org
jdarcy@redhat.com
