Introduction

Presenter and company intro
Who are we and what do we do?

Inoreader
What is Inoreader, and what challenges did we face while building and maintaining it?

Infrastructure issues
We were facing numerous scalability issues, while at the same time we had an array of servers doing nothing, mostly because of filled storage. At a certain point we hit a brick wall.

Migration to OpenNebula and StorPool
To fix our scalability problems we pinpointed the need for a virtualization layer and distributed storage. After thorough research we ended up with OpenNebula and StorPool.

Tips
Some useful takeaways for you.

Q&A
If you have any questions, I will gladly answer them.
Yordan Yordanov
CEO, Innologica

I have 10+ years of experience in the Telco IT sector, working with large enterprise solutions as well as building specialized solutions from scratch.

I founded Innologica in 2013 with the mission of developing next-gen OSS and BSS solutions. A side project called Inoreader was born back then, which quickly turned into a leading platform for content consumption and is now a core product of the company.
Who Are We?
Product company
We are not a sweatshop. We make successful products.

International market
Our customers are all over the globe.

Relaxed environment
We do not push the devs, but we cherish top performers.

Smart team
The team is small, but each member brings great value.
Inoreader
RSS news aggregator and information hub

150,000 DAU
We have 150k daily active users (DAU) and more than 30k simultaneous sessions at peak times. We are closing in on 1M registered users, with 10k and counting premium subscribers.

15,000,000,000 articles in MySQL and ES
We keep the full archive in enormous MySQL databases and a separate Elasticsearch cluster just for searching. Around 20TB of data without the replicas, and 10M+ new articles per day.

1,000,000 feed updates per hour
We need to update our 10+ million feeds in a timely manner. A lot of machines are dedicated to this task alone.

40 VMs and 10 physical hosts
The platform is currently running on 30 virtual machines, mainly in our main DC. Some physical hosts were not good candidates for virtualization, mainly for Elasticsearch.
Extreme Makeover
The old and the new setup

100% virtualized
No more services running directly on bare metal.

Lighter power footprint
300% more capacity with 60% of the previous servers, with room for expansion.

Performance gains
Huge compute and storage performance gains. Maintainability is a breeze too.
INFRASTRUCTURE ISSUES
Our main drivers to migrate to a fully virtualized environment
Hardware capacity

Our problem
We needed to constantly buy new servers just to keep up with the growing databases, because local storage was quickly being exhausted.

We were using expensive RAID cards and RAID-10 setups for all databases. Those servers never used more than 10% of their CPUs, so it was a complete waste of resources.

Typical utilization:
• CPU: 10%
• Memory: 50%
• Storage: 90%
• Rack space: 100%
Hardware failures
Not so common, but always hair-pulling

All components are bound to fail. Whenever we lost a server, there was always at least some service disruption, if not a whole outage.

All databases needed replication, which skyrocketed server costs and still didn't provide automatic HA. If a hard drive fails in a RAID-10 setup, you need to replace it ASAP, and bigger drives are more prone to errors while rebuilding.

Large databases on RAID-10 are slow to recover from crashes, so replicas should be carefully set up and should run on identical (expensive) hardware in case one has to be promoted to a master.

Nobody likes to go to a DC on Saturday to replace a failed drive, reinstall the OS and rotate replications. We much prefer to ride bikes!
CHOSEN SOLUTION
We chose to virtualize everything using
OpenNebula + StorPool
Project Timeline
2017: PROJECT START
We knew for quite a while that we needed a solution to the growth problem.

Nov 2017: CHOOSING A SOLUTION
We held meetings with vendors and researched different solutions.

Nov 2017 – Jan 2018: PLANNING AND FIRST TESTS
While the hardware was in transit, we took our time to learn OpenNebula and test it as much as possible.

Feb 2018: EXECUTION
We migrated all servers through several iterations, described in more detail below.

Mar 2018: SUCCESS
We finally migrated our last server, and all VMs were happily running on OpenNebula and StorPool.
Hardware
StorPool nodes
We chose three standard SuperMicro SC836 3U servers.

Switches
As recommended by StorPool, we chose the Quanta LB8 for the 10G network and the Quanta LB4M for the Gigabit network.

Hypervisors
We reused our old servers, but upgraded their CPUs and memory.

Others
10G LAN cards and cables.
StorPool Nodes
StorPool recommends using commodity hardware. Supermicro offers a good platform without vendor-specific requirements for RAID cards, etc., and is very budget friendly.

Our setup:
• Supermicro CSE-836B chassis
• Supermicro X10SRL-F motherboard
• 1x Intel Xeon E5-1620 v4 CPU (8 threads @ 3.5GHz)
• 64GB DDR4-2666 RAM
• Avago 3108L RAID controller with 2G cache
• Intel X520-DA2 10G Ethernet card
• 8x 4TB HDD LFF SATA3 7200 RPM
• 8x 2TB HDD LFF SATA3 7200 RPM (reused from older servers)

Around 3300 EUR per server.
Gigabit Network – Quanta LB4M
We were struggling with some old TP-Link SG2424 switches that we
wanted to upgrade, so we used the opportunity to upgrade the
regular 1G network too. We chose the Quanta LB4M.
Key aspects
• 48x Gigabit RJ45 ports
• 2x 10G SFP+ ports
• Redundant power supplies
• Very cheap!
• EOL – You might want to stack up some spare switches!
• Stable (4 months without a single flop for now)
Around 250 EUR per switch from eBay.
10G Network – Quanta LB8
Again on StorPool's recommendation, we procured three Quanta LB8 switches. They seem to be performing great so far.
Key aspects
• 48x 10G SFP+ ports
• Redundant power supplies
• Very cheap for what they offer!
• EOL – You might want to stack up some spare switches!
• Stable (4 months without a single flop for now)
700-1000 EUR per switch from eBay including customs taxes.
Hypervisors
We reused our old servers, but with some significant upgrades. We currently have 12 hypervisors with the following configuration:
• Supermicro 1U chassis with X9DRW motherboard
• 2x Intel Xeon E5-2650 v2 CPU (32 threads total)
• Dual power supply
• 128GB DDR3-12800R memory
• Intel X520-DA2 10G card
• 2x HDD in mdraid for the OS only
EXECUTION
Story with pictures
New Rack
We rented a new rack in our colocation center, since we didn't have any more space available in the old rack.

The idea was simple: deploy StorPool in the new rack only and gradually migrate the hypervisors.
StorPool Nodes
The servers landed in our office in late January.
It was Friday afternoon, but we quickly installed them in the lab and
let the StorPool guys do their magic over the weekend.
Installation Day
The next Monday StorPool finished all tests and the equipment was
ready to be installed in our DC.
Installation Day
Fast forward several hours and we had our first StorPool cluster up and running. Still no hypervisors yet. StorPool needed to perform a full cluster check in the real environment to see that everything worked well.
First hypervisors
The very next day we installed our first hypervisors – the temporary
ones that were holding VMs installed during our test period. Those
VMs were still running on local storage and NFS.
The next step was to migrate them to StorPool.
VM Migration to StorPool
StorPool helps their customers with this step, but here's a summary of what we did.

01. Shut down the VM. Use Sunstone or the CLI to shut down the VM.

02. Create StorPool volumes. On the host, use the StorPool CLI to create volume(s) for the VM with the exact size of the original images.

03. Copy the volumes. Use dd for raw images or qemu-img convert for qcow2 images to copy them onto the StorPool volumes.

04. Reattach images. Detach the local images and attach the StorPool ones. Mind the order. There's a catch with large images*.

05. Power up the VM. Check that the VM boots properly. We're not done yet...

06. Finalize the migration. To fully migrate persistent VMs, use the Recover -> delete-recreate function to redeploy all files to StorPool.

*Large images (100G+) take forever to detach on slow local storage, so we had to kill the cp process and use the onevm recover --success option to lie to OpenNebula that the detach had actually completed. This is risky, but saves a LOT of downtime.

After all VMs are migrated, you can delete the old system and image datastores and leave only the StorPool datastores. At this point we are completely on StorPool!
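For step 03, a minimal sketch of the copy commands, assuming the StorPool volume has already been created and attached so it appears as a block device under /dev/storpool/ (the image paths and the volume name one-img-123 are illustrative, not our actual names):

# Raw image: straight block copy onto the StorPool volume
dd if=/var/lib/one/datastores/1/abc123 of=/dev/storpool/one-img-123 \
   bs=1M oflag=direct status=progress

# qcow2 image: convert to raw directly onto the volume
qemu-img convert -p -O raw /var/lib/one/datastores/1/def456 /dev/storpool/one-img-123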
Next hypervisors
From here on we had several iterations that consisted of roughly the following:
• Create a list of servers for migration. The more hypervisors we have, the more servers we can move in a single iteration.
• Create VMs and migrate the services there.
• Use the opportunity to untangle microservices running on the same machine.
• Make sure the servers are completely drained of any services.
• Shut down the servers and plan a visit to the DC the next day.
• Continue on the next slide...
Remove servers from the old rack

Remove HDDs and RAID controllers

Upgrade CPUs and RAM

Install 10G card and smaller HDDs and reinstall OS

Install the servers in the new rack and hand over to StorPool
RINSE AND REPEAT
At each iteration we moved more servers at once, because we had more capacity for VMs.
Current capacity
We did it!

In the end we achieved a 3x capacity boost in processing power and memory with just a fraction of our previous servers, because with virtualization we can distribute the resources however we'd like. In terms of storage we are on a completely different level, since we are no longer restricted to a single machine's capacity; we have 3x redundancy and all the performance we need.

Current utilization:
• Allocated CPU: 37%
• Allocated Memory: 32%
• Storage: 67%
• Rack space: 70%
Our Dashboard
A glimpse at our OpenNebula dashboard.
336 CPU cores and 1.2TB of RAM in just 12 hypervisors.
Hypervisor view
All hypervisors are nicely balanced by the default scheduler.

There's always enough room to move VMs around in case a hypervisor crashes or we need to reboot a host.
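As a concrete illustration of moving VMs around, evacuating a single VM ahead of a host reboot is one OpenNebula CLI call (the VM ID and target host name are made up for the example):

# Live-migrate VM 42 to hypervisor hv02 before rebooting its current host
onevm migrate --live 42 hv02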
SOME TIPS
Optimize CPU for homogeneous clusters

Available as a template setting since OpenNebula 5.4.6. Set it to host-passthrough.

This option presents the real CPU model to the VMs instead of the default QEMU CPU. It can substantially increase performance, especially if instructions like AES are needed.

Do not use it if you have different CPU models across the cluster, since it will cause the VMs to crash after live migration.

For older OpenNebula setups, set this as RAW DATA in the template:

<cpu mode="host-passthrough"/>
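For reference, a minimal sketch of how that snippet sits in a VM template via the RAW attribute (only the RAW section is shown; the rest of the template is omitted):

RAW = [
  TYPE = "kvm",
  DATA = "<cpu mode='host-passthrough'/>"
]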
Beware of mkfs.xfs on large StorPool volumes inside VMs
We noticed that when running mkfs.xfs on large StorPool volumes (e.g. 4TB) there was a big delay before the command completed. What's worse, during this time all VMs on the host starve for IO, because the storpool_block.bin process uses 100% CPU time.

The image shown on the left is for a 1TB volume.

The reason is that mkfs uses TRIM by default and the StorPool driver supports that.

To remedy it, use the -K option for mkfs.xfs or -E nodiscard for mkfs.ext4, e.g.:
• mkfs.xfs -K /dev/sdb1
• mkfs.ext4 -E nodiscard /dev/sdb1
Use the 10G network for OpenNebula too
This is probably an obvious one, but it deserves to be mentioned. By default your hosts will probably resolve each other via the regular Gigabit network. Forcing them to talk over the 10G storage network will drastically improve live VM migration. The migration is not IO bound, so it will completely saturate the network.

Usually this is a simple /etc/hosts modification (see the sketch below). Consult with StorPool for your specific use case before doing that.

Live migrating a VM with 8G of RAM takes 7 seconds on 10G. The same VM takes about 1.5 minutes on a Gigabit network and will probably disturb VM communications if the network is saturated. Live migration of highly loaded VMs can take significantly longer and should be monitored. In some cases it's enough to stop busy services for just a second for the migration to complete.
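A minimal /etc/hosts sketch of the idea: point each hypervisor's hostname at its 10G storage-network address so migration traffic takes that link. The hostnames and the 10.10.10.0/24 subnet are illustrative assumptions, not our actual addressing.

# /etc/hosts on every hypervisor -- names resolve to the 10G storage IPs
10.10.10.11  hv01
10.10.10.12  hv02
10.10.10.13  hv03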
Other tips
These are the more obvious ones that probably everyone uses in production, but still worth mentioning:
• Use cache=none, io=native when attaching volumes.
• Use virtio networking instead of the default 8139 NIC. The latter has performance issues and drops packets when host IO is high.
• Measure IO latency instead of IO load to judge saturation. We have several machines with constant 99% IO load which are doing perfectly fine.

/etc/one/vmm_exec/vmm_exec_kvm.conf:

DISK = [ driver = "raw", cache = "none", io = "native",
         discard = "unmap", bus = "scsi" ]
NIC = [ filter = "clean-traffic", model = "virtio" ]
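On the latency point, a quick way to sample it with standard sysstat tooling (the device name is illustrative): watch the await column, in milliseconds per request, rather than %util.

iostat -x sda 1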
MONITORING
Dashboards
Grafana Dashboards
We have adapted the OpenNebula Dashboards with
Graphite and Grafana scripts by Sebastian Mangelkramer
and used them to create our own Grafana dashboards so
we can see at a glance which hypervisors are most loaded
and how much overall capacity we have.
Grafana TV Dashboard
Why not have a master dashboard on the TV at the office? This gives our team a very quick and easy way to tell if everything is working smoothly. If all you see is green, we're good.

This dashboard shows our main DC on the first row, our backup DC on the second, and then some other critical aspects of our system. It's still a WIP, hence the empty space.

At the top is our Geckoboard that we use for more business KPIs.
Server Power Usage in Grafana
Part of our virtualization project was to optimize the electricity bill by using fewer servers. We were able to easily measure our power usage with Graphite and Grafana.

If you are interested, the script for getting the data into Graphite is here:
https://gist.github.com/Jacketbg/6973efdb41a2ecfcf2a83ea84c086887

The Grafana dashboard can be found here:
https://gist.github.com/Jacketbg/7255b4f81ebb2de0e8a5708b4335c9d7

Obviously you will need to tweak it, especially the formula for the power bill.
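Not the gist itself, but for a sense of the mechanism: Graphite accepts metrics over its plaintext protocol on port 2003, so feeding it a power reading can be as simple as the line below (the metric path, value, and Graphite host are placeholders).

# "metric value unix-timestamp" over TCP 2003
echo "dc1.power.watts 450 $(date +%s)" | nc graphite.example.com 2003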
StorPool’s Grafana
StorPool were kind enough to give us access to their own Grafana instance, where they collect a lot of internal data about the system and KPIs. It gives us great insights that we couldn't get otherwise, so we can plan and estimate the system load very well.
What’s Left?
SSD pool
We are currently using only an HDD pool, but we could benefit from a smaller SSD pool for picky MySQL databases.

Add more hypervisors
As the service grows, our needs will too. We will probably have enough rack space for years to come.

Add more StorPool nodes
We have maxed out the HDD bays on our current nodes, so we'll probably need to add more nodes in the future.

Upgrade StorPool nodes to 40G
Currently the nodes use 2x 10G ports, like the hypervisors. After adding an SSD pool we are considering upgrading to 40G.
THANK YOU!

READ MORE ON BLOG.INOREADER.COM
GET THIS PRESENTATION FROM ino.to/one-sofia

Mais procurados

OpenNebula 5.4 Enhancements vCenter Integration
OpenNebula 5.4 Enhancements vCenter IntegrationOpenNebula 5.4 Enhancements vCenter Integration
OpenNebula 5.4 Enhancements vCenter IntegrationOpenNebula Project
 
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...OpenNebula Project
 
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebulaOpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebulaOpenNebula Project
 
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...OpenNebula Project
 
Disaster recovery solution with open nebula and storpool
Disaster recovery solution with open nebula and storpoolDisaster recovery solution with open nebula and storpool
Disaster recovery solution with open nebula and storpoolOpenNebula Project
 
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...Vietnam Open Infrastructure User Group
 
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...NETWAYS
 
OpenNebula 5.0 Highlights - Rubén S. Montero
OpenNebula 5.0 Highlights - Rubén S. MonteroOpenNebula 5.0 Highlights - Rubén S. Montero
OpenNebula 5.0 Highlights - Rubén S. MonteroOpenNebula Project
 
SUSE Expert Days Paris 2018 – SLE 15
SUSE Expert Days Paris 2018 – SLE 15SUSE Expert Days Paris 2018 – SLE 15
SUSE Expert Days Paris 2018 – SLE 15SUSE
 
Gluster ovirt integration_gluster_meetup_pune_2015
Gluster ovirt integration_gluster_meetup_pune_2015Gluster ovirt integration_gluster_meetup_pune_2015
Gluster ovirt integration_gluster_meetup_pune_2015Ramesh Nachimuthu
 
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...OpenNebula Project
 
Microsoft Azure 新功能導覽 @ Build 2014
Microsoft Azure 新功能導覽 @ Build 2014Microsoft Azure 新功能導覽 @ Build 2014
Microsoft Azure 新功能導覽 @ Build 2014Jeff Chu
 
Java App On Digital Ocean: Deploying With Gitlab CI/CD
Java App On Digital Ocean: Deploying With Gitlab CI/CDJava App On Digital Ocean: Deploying With Gitlab CI/CD
Java App On Digital Ocean: Deploying With Gitlab CI/CDSeun Matt
 
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...OpenNebula Project
 
Managing ceph through_oVirt_using_Cinder
Managing ceph through_oVirt_using_CinderManaging ceph through_oVirt_using_Cinder
Managing ceph through_oVirt_using_CinderMaor Lipchuk
 
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...OpenNebula Project
 
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Vietnam Open Infrastructure User Group
 

Mais procurados (20)

OpenNebula 5.4 Enhancements vCenter Integration
OpenNebula 5.4 Enhancements vCenter IntegrationOpenNebula 5.4 Enhancements vCenter Integration
OpenNebula 5.4 Enhancements vCenter Integration
 
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...
OpenNebulaconf2017US: Vtastic:Akamai innovations for distributed system testi...
 
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebulaOpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
 
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...
 
Disaster recovery solution with open nebula and storpool
Disaster recovery solution with open nebula and storpoolDisaster recovery solution with open nebula and storpool
Disaster recovery solution with open nebula and storpool
 
XWiki Aquarium Paris
XWiki Aquarium ParisXWiki Aquarium Paris
XWiki Aquarium Paris
 
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...
Unrevealed Story Behind Viettel Network Cloud Hotpot | Đặng Văn Đại, Hà Mạnh ...
 
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
 
OpenNebula 5.0 Highlights - Rubén S. Montero
OpenNebula 5.0 Highlights - Rubén S. MonteroOpenNebula 5.0 Highlights - Rubén S. Montero
OpenNebula 5.0 Highlights - Rubén S. Montero
 
SUSE Expert Days Paris 2018 – SLE 15
SUSE Expert Days Paris 2018 – SLE 15SUSE Expert Days Paris 2018 – SLE 15
SUSE Expert Days Paris 2018 – SLE 15
 
Gluster ovirt integration_gluster_meetup_pune_2015
Gluster ovirt integration_gluster_meetup_pune_2015Gluster ovirt integration_gluster_meetup_pune_2015
Gluster ovirt integration_gluster_meetup_pune_2015
 
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...
OpenNebulaConf2018 - We use OpenNebula everywhere now - Florian Heigl and Tho...
 
Microsoft Azure 新功能導覽 @ Build 2014
Microsoft Azure 新功能導覽 @ Build 2014Microsoft Azure 新功能導覽 @ Build 2014
Microsoft Azure 新功能導覽 @ Build 2014
 
Java App On Digital Ocean: Deploying With Gitlab CI/CD
Java App On Digital Ocean: Deploying With Gitlab CI/CDJava App On Digital Ocean: Deploying With Gitlab CI/CD
Java App On Digital Ocean: Deploying With Gitlab CI/CD
 
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...
 
Managing ceph through_oVirt_using_Cinder
Managing ceph through_oVirt_using_CinderManaging ceph through_oVirt_using_Cinder
Managing ceph through_oVirt_using_Cinder
 
OpenNebula Administrator View
OpenNebula Administrator ViewOpenNebula Administrator View
OpenNebula Administrator View
 
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...
 
MySQL Aquarium Paris
MySQL Aquarium ParisMySQL Aquarium Paris
MySQL Aquarium Paris
 
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
 

Semelhante a Inoreader OpenNebula + StorPool migration

2016-JAN-28 -- High Performance Production Databases on Ceph
2016-JAN-28 -- High Performance Production Databases on Ceph2016-JAN-28 -- High Performance Production Databases on Ceph
2016-JAN-28 -- High Performance Production Databases on CephCeph Community
 
Idi2017 - Cloud DB: strengths and weaknesses
Idi2017 - Cloud DB: strengths and weaknessesIdi2017 - Cloud DB: strengths and weaknesses
Idi2017 - Cloud DB: strengths and weaknessesLinuxaria.com
 
Montreal OpenStack Q2 MeetUp - May 30th 2017
Montreal OpenStack Q2 MeetUp - May 30th 2017Montreal OpenStack Q2 MeetUp - May 30th 2017
Montreal OpenStack Q2 MeetUp - May 30th 2017Stacy Véronneau
 
OpenStack Ottawa Q2 MeetUp - May 31st 2017
OpenStack Ottawa Q2 MeetUp - May 31st 2017OpenStack Ottawa Q2 MeetUp - May 31st 2017
OpenStack Ottawa Q2 MeetUp - May 31st 2017Stacy Véronneau
 
Database as a Service (DBaaS) on Kubernetes
Database as a Service (DBaaS) on KubernetesDatabase as a Service (DBaaS) on Kubernetes
Database as a Service (DBaaS) on KubernetesObjectRocket
 
Bursting into the public Cloud - Sharing my experience doing it at large scal...
Bursting into the public Cloud - Sharing my experience doing it at large scal...Bursting into the public Cloud - Sharing my experience doing it at large scal...
Bursting into the public Cloud - Sharing my experience doing it at large scal...Igor Sfiligoi
 
OpenStack Toronto Q2 MeetUp - June 1st 2017
OpenStack Toronto Q2 MeetUp - June 1st 2017OpenStack Toronto Q2 MeetUp - June 1st 2017
OpenStack Toronto Q2 MeetUp - June 1st 2017Stacy Véronneau
 
Sanger OpenStack presentation March 2017
Sanger OpenStack presentation March 2017Sanger OpenStack presentation March 2017
Sanger OpenStack presentation March 2017Dave Holland
 
Coates bosc2010 clouds-fluff-and-no-substance
Coates bosc2010 clouds-fluff-and-no-substanceCoates bosc2010 clouds-fluff-and-no-substance
Coates bosc2010 clouds-fluff-and-no-substanceBOSC 2010
 
Scylla Summit 2018: Meshify - A Case Study, or Petshop Seamonsters
Scylla Summit 2018: Meshify - A Case Study, or Petshop SeamonstersScylla Summit 2018: Meshify - A Case Study, or Petshop Seamonsters
Scylla Summit 2018: Meshify - A Case Study, or Petshop SeamonstersScyllaDB
 
Big Data and OpenStack, a Love Story: Michael Still, Rackspace
Big Data and OpenStack, a Love Story: Michael Still, RackspaceBig Data and OpenStack, a Love Story: Michael Still, Rackspace
Big Data and OpenStack, a Love Story: Michael Still, RackspaceOpenStack
 
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander DibboOpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander DibboOpenNebula Project
 
Latest (storage IO) patterns for cloud-native applications
Latest (storage IO) patterns for cloud-native applications Latest (storage IO) patterns for cloud-native applications
Latest (storage IO) patterns for cloud-native applications OpenEBS
 
OpenEBS hangout #4
OpenEBS hangout #4OpenEBS hangout #4
OpenEBS hangout #4OpenEBS
 
DATABASE AUTOMATION with Thousands of database, monitoring and backup
DATABASE AUTOMATION with Thousands of database, monitoring and backupDATABASE AUTOMATION with Thousands of database, monitoring and backup
DATABASE AUTOMATION with Thousands of database, monitoring and backupSaewoong Lee
 
Workday's Next Generation Private Cloud
Workday's Next Generation Private CloudWorkday's Next Generation Private Cloud
Workday's Next Generation Private CloudSilvano Buback
 
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptx
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptxrhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptx
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptxpbtest
 
JAX London 2015 - Architecting a Highly Scalable Enterprise
JAX London 2015 - Architecting a Highly Scalable EnterpriseJAX London 2015 - Architecting a Highly Scalable Enterprise
JAX London 2015 - Architecting a Highly Scalable EnterpriseC24 Technologies
 

Semelhante a Inoreader OpenNebula + StorPool migration (20)

2016-JAN-28 -- High Performance Production Databases on Ceph
2016-JAN-28 -- High Performance Production Databases on Ceph2016-JAN-28 -- High Performance Production Databases on Ceph
2016-JAN-28 -- High Performance Production Databases on Ceph
 
Idi2017 - Cloud DB: strengths and weaknesses
Idi2017 - Cloud DB: strengths and weaknessesIdi2017 - Cloud DB: strengths and weaknesses
Idi2017 - Cloud DB: strengths and weaknesses
 
Montreal OpenStack Q2 MeetUp - May 30th 2017
Montreal OpenStack Q2 MeetUp - May 30th 2017Montreal OpenStack Q2 MeetUp - May 30th 2017
Montreal OpenStack Q2 MeetUp - May 30th 2017
 
OpenStack Ottawa Q2 MeetUp - May 31st 2017
OpenStack Ottawa Q2 MeetUp - May 31st 2017OpenStack Ottawa Q2 MeetUp - May 31st 2017
OpenStack Ottawa Q2 MeetUp - May 31st 2017
 
Linuxcon​ 2013
Linuxcon​ 2013Linuxcon​ 2013
Linuxcon​ 2013
 
Database as a Service (DBaaS) on Kubernetes
Database as a Service (DBaaS) on KubernetesDatabase as a Service (DBaaS) on Kubernetes
Database as a Service (DBaaS) on Kubernetes
 
Bursting into the public Cloud - Sharing my experience doing it at large scal...
Bursting into the public Cloud - Sharing my experience doing it at large scal...Bursting into the public Cloud - Sharing my experience doing it at large scal...
Bursting into the public Cloud - Sharing my experience doing it at large scal...
 
OpenStack Toronto Q2 MeetUp - June 1st 2017
OpenStack Toronto Q2 MeetUp - June 1st 2017OpenStack Toronto Q2 MeetUp - June 1st 2017
OpenStack Toronto Q2 MeetUp - June 1st 2017
 
Sanger OpenStack presentation March 2017
Sanger OpenStack presentation March 2017Sanger OpenStack presentation March 2017
Sanger OpenStack presentation March 2017
 
Coates bosc2010 clouds-fluff-and-no-substance
Coates bosc2010 clouds-fluff-and-no-substanceCoates bosc2010 clouds-fluff-and-no-substance
Coates bosc2010 clouds-fluff-and-no-substance
 
Scylla Summit 2018: Meshify - A Case Study, or Petshop Seamonsters
Scylla Summit 2018: Meshify - A Case Study, or Petshop SeamonstersScylla Summit 2018: Meshify - A Case Study, or Petshop Seamonsters
Scylla Summit 2018: Meshify - A Case Study, or Petshop Seamonsters
 
Big Data and OpenStack, a Love Story: Michael Still, Rackspace
Big Data and OpenStack, a Love Story: Michael Still, RackspaceBig Data and OpenStack, a Love Story: Michael Still, Rackspace
Big Data and OpenStack, a Love Story: Michael Still, Rackspace
 
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander DibboOpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo
 
Latest (storage IO) patterns for cloud-native applications
Latest (storage IO) patterns for cloud-native applications Latest (storage IO) patterns for cloud-native applications
Latest (storage IO) patterns for cloud-native applications
 
Ansible for networks
Ansible for networksAnsible for networks
Ansible for networks
 
OpenEBS hangout #4
OpenEBS hangout #4OpenEBS hangout #4
OpenEBS hangout #4
 
DATABASE AUTOMATION with Thousands of database, monitoring and backup
DATABASE AUTOMATION with Thousands of database, monitoring and backupDATABASE AUTOMATION with Thousands of database, monitoring and backup
DATABASE AUTOMATION with Thousands of database, monitoring and backup
 
Workday's Next Generation Private Cloud
Workday's Next Generation Private CloudWorkday's Next Generation Private Cloud
Workday's Next Generation Private Cloud
 
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptx
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptxrhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptx
rhte-2023-myths-about-openshift-virtualization-joachim-von-thadden.pptx
 
JAX London 2015 - Architecting a Highly Scalable Enterprise
JAX London 2015 - Architecting a Highly Scalable EnterpriseJAX London 2015 - Architecting a Highly Scalable Enterprise
JAX London 2015 - Architecting a Highly Scalable Enterprise
 

Mais de OpenNebula Project

OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebula Project
 
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebula Project
 
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebula Project
 
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebula Project
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebula Project
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebula Project
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaOpenNebula Project
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItOpenNebula Project
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula Project
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHOpenNebula Project
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayOpenNebula Project
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaOpenNebula Project
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10OpenNebula Project
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud EnvironmentsOpenNebula Project
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaOpenNebula Project
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaOpenNebula Project
 

Mais de OpenNebula Project (20)

OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
 
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
 
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
 
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebula
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do It
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISP
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux Way
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebula
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud Environments
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebula
 
DE-CIX: CloudConnectivity
DE-CIX: CloudConnectivityDE-CIX: CloudConnectivity
DE-CIX: CloudConnectivity
 
DDC Demo
DDC DemoDDC Demo
DDC Demo
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebula
 

Último

CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 

Último (20)

CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 

Inoreader OpenNebula + StorPool migration

  • 1.
  • 2. Introduction Introduction 2 Presenter and company intro Who are we and what we do? Migration to OpenNebula and StorPool In order to fix our scalability problems we pinpointed the need for a virtualization layer and distributed storage. After thorough research we ended up with OpenNebula and StorPool Inoreader What is Inoreader and what challenges we faced while building and maintaining it? Tips Infrastructure issues We were facing numerous scalability issues while at the same time we hade a an array of servers doing nothing mostly because of filled storage. At certain point we hit a brick wall. QA If you have any questions I will gladly answer them Some useful takeaways for you.
  • 3. I have 10+ years of experience in the Telco IT sector, working with large enterprise solutions as well as building specialized solutions from scratch. I have founded a company called Innologica in 2013 with the mission of developing Next-Gen OSS and BSS solutions. A side project was born back then called Inoreader, which quickly turned into a leading platform for content consumption and is now a core product of the company. Yordan Yordanov 3 CEO Innologica
  • 4.
  • 5. Who Are We? 5 Product company We are not a sweatshop. We make successful products. International market Our customers are all over the globe. Relaxed environment We do not push the devs, but we cherish top performers. Smart team The team is small, but each member brings great value.
  • 6. Inoreader RSS News aggregator and information hub 6 150,000 DAU We have 150k daily active users (DAU) and more than 30k simultaneous sessions in peak times. Closing in on 1M registered users soon. 10k and counting premium subscribers. 15,000,000,000 articles in MySQL and ES We keep the full archive in enormous MySQL Databases and a separate Elasticsearch cluster just for searching. Around 20TB of data without the replicas. 10M+ new articles per day. 1,000,000 feed updates per hour We need to update our 10+ Million feeds in a timely manner. A lot of machines are dedicated for this task only. 40 VMs and 10 physical hosts The platform is currently running on 30 Virtual Machines mainly in our main DC. There are some physical hosts that were not good candidates for virtualization mainly for Elasticsearch.
  • 7. 7 Extreme Makeover The old and the new setup 7 100% Virtualized No more services running directly on bare-metal. Lighter power footprint300% more capacity with 60% of the previous servers with room for expansion. Performance gains Huge compute and storage performance gains. Maintainability is a breeze too.
  • 8. INFRASTRUCTURE ISSUES Our main drivers to migrate to fully virtualized environment
  • 9. Hardware capacity 9 We needed to constantly buy new servers just to keep up with the growing databases, because local storages were being quickly exhausted. We were using expensive RAID cards and RAID-10 setups for all databases. Those severs never used more than 10% of their CPUs, so it was a complete waste of resources. Our problem CPU 10% Memory Storage Rack space 50% 90% 100%
  • 10. Hardware failures Not so common but always hair-pulling 10 All components are bound to fail. Whenever we lose a server, there was always at least some service disruption if not a whole outage. All databases needed to have replications, which skyrocketed server costs and didn’t provide automatic HA. If a hard-drive fails in a RAID-10 setup you need to replace it ASAP. Bigger drives are more prone to cause errors while rebuilding. Large databases on RAID-10 are slow to recover from crashes, so replications should be carefully set up and should be on identical (expensive) hardware in case a replication should be promoted to a master. Nobody likes to go to a DC on Saturday to replace a failed drive, reinstall OS and rotate replications. We much prefer to ride bikes! Problem description
  • 11. CHOSEN SOLUTION We chose to virtualize everything using OpenNebula + StorPool
  • 12. Project Timeline 12 2017 Nov 2017 Nov 2017 – Jan 2018 Feb 2018 Mar 2018 PROJECT START We knew for quite a while that we need a solution to the growth problem. PLANNING AND FIRST TESTS While the hardware was in transit we took our time to learn OpenNebula and test it as much as possible SUCCESS We have finally migrated our last server and all VMs were happily running on OpenNebula and StorPool. CHOOSING A SOLUTION We held some meetings with vendors and researched different solutions EXECUTION We have migrated all servers through several iterations which will be described in more detail here
  • 13. Hardware 13 StorPool nodes We chose a standard 3x SuperMicro SC836 3U servers. Switches As recommended by StorPool we chose Quanta LB8 for the 10G network and Quanta LB4-M for the Gigabit network. Hypervisors We have reused our old servers, but modified their CPUs and memory. Others 10G LAN cards and cables
  • 14. StorPool Nodes 14 StorPool recommends to use commodity hardware. Supermicro offers a good platform without vendor specific requirements for RAID cards, etc. and is very budget friendly. Our setup: • Supermicro CSE-836B chassis • Supermicro X10SRL-F motherboard • 1x Intel Xeon E5-1620 v4 CPU (8 threads @3.5Ghz) • 64GB DDR4-2666 RAM • Avago 3108L RAID controller with 2G cache • Intel X520-DA2 10G Ethernet card • 8x 4TB HDD LFF SATA3 7200 RPM • 8x 2TB HDD LFF SATA3 7200 RPM (reused from older servers) Around 3300 EUR per server
  • 15. Gigabit Network – Quanta LB4M 15 We were struggling with some old TP-Link SG2424 switches that we wanted to upgrade, so we used the opportunity to upgrade the regular 1G network too. We chose the Quanta LB4M. Key aspects • 48x Gigabit RJ45 ports • 2x 10G SFP+ ports • Redundant power supplies • Very cheap! • EOL – You might want to stack up some spare switches! • Stable (4 months without a single flop for now) Around 250 EUR per switch from eBay.
  • 16. 10G Network – Quanta LB8 – Again on StorPool's recommendation, we procured three Quanta LB8 switches. They seem to be performing great so far. Key aspects: • 48x 10G SFP+ ports • Redundant power supplies • Very cheap for what they offer! • EOL – you might want to stock up on some spare switches! • Stable (4 months without a single flop so far). 700–1000 EUR per switch from eBay, including customs taxes.
  • 17. Hypervisors – We reused our old servers, but with some significant upgrades. We currently have 12 hypervisors with the following configuration: • Supermicro 1U chassis with X9DRW motherboards • 2x Intel Xeon E5-2650 v2 CPUs (32 threads total) • Dual power supply • 128GB DDR3-12800R memory • Intel X520-DA2 10G card • 2x HDD in mdraid for the OS only.
  • 19. New Rack – We rented a new rack in our colocation center, since we didn't have any more space available in the old one. The idea was simple: deploy StorPool in the new rack only and gradually migrate the hypervisors.
  • 20. StorPool Nodes – The servers landed in our office in late January. It was a Friday afternoon, but we quickly installed them in the lab and let the StorPool guys do their magic over the weekend.
  • 21. Installation Day – The next Monday StorPool finished all tests and the equipment was ready to be installed in our DC.
  • 22. Installation Day – Fast forward several hours and we had our first StorPool cluster up and running. Still no hypervisors, though: StorPool needed to perform a full cluster check in the real environment to see that everything worked well.
  • 23. First hypervisors – The very next day we installed our first hypervisors: the temporary ones that were holding the VMs created during our test period. Those VMs were still running on local storage and NFS; the next step was to migrate them to StorPool.
  • 24. VM Migration to StorPool – StorPool helps their customers with this step, but here's a summary of what we did:
  1. Shut down the VM – use Sunstone or the CLI to shut down the VM.
  2. Create StorPool volumes – on the host, use the storpool CLI to create volume(s) for the VM with the exact size of the original images.
  3. Copy the volumes – use dd (for raw images) or qemu-img convert (for qcow2 images) to copy the images to the StorPool volumes.
  4. Reattach images – detach the local images and attach the StorPool ones. Mind the order. There's a catch with large images*.
  5. Power up the VM – check that the VM boots properly. We're not done yet…
  6. Finalize the migration – to fully migrate persistent VMs, use the Recover → delete-recreate function to redeploy all files to StorPool.
  *Large images (100G+) take forever to detach on slow local storage, so we had to kill the cp process and use the onevm recover --success option to lie to OpenNebula that the detach actually completed. This is risky but saves a LOT of downtime. After all VMs are migrated, you can delete the old system and image datastores and leave only the StorPool datastores. At this point we were completely on StorPool!
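  A minimal sketch of step 3, assuming the StorPool volume has already been created and attached on the host (the datastore paths, device names and volume names below are illustrative, not from the deck; attached StorPool volumes typically appear under /dev/storpool/, but the path may differ in your setup):

      # Raw image: plain block copy onto the attached StorPool device (names are hypothetical).
      dd if=/var/lib/one/datastores/1/6eb2caf8 of=/dev/storpool/vm42-disk0 bs=1M oflag=direct

      # qcow2 image: convert straight to raw onto the StorPool block device.
      qemu-img convert -O raw /var/lib/one/datastores/1/a3f0b1d2 /dev/storpool/vm42-disk1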
  • 25. Next hypervisors – From here on we had several iterations that consisted of roughly the following: • Create a list of servers for migration – the more hypervisors we have, the more servers we can move in a single iteration • Create VMs and migrate the services there • Use the opportunity to untangle microservices running on the same machine • Make sure the servers are completely drained of any services • Shut down the servers and plan a visit to the DC the next day • Continue on the next slide…
  • 26. Remove the servers from the old rack
  • 27. Remove the HDDs and RAID controllers
  • 29. Install a 10G card and smaller HDDs, and reinstall the OS
  • 30. Install the servers in the new rack and hand them over to StorPool
  • 31. RINSE AND REPEAT – At each iteration we moved more servers at once, because we had more capacity for VMs.
  • 32. Current capacity – In the end we achieved a 3x capacity boost in terms of processing power and memory with just a fraction of our previous servers, because with virtualization we can distribute the resources however we'd like. In terms of storage we are on a completely different level: we are no longer restricted to a single machine's capacity, we have 3x redundancy, and all the performance we need. We did it! Allocated CPU 37%, Allocated Memory 32%, Storage 67%, Rack space 70%.
  • 33. Our Dashboard – A glimpse at our OpenNebula dashboard: 336 CPU cores and 1.2TB of RAM in just 12 hypervisors.
  • 34. Hypervisor view – All hypervisors are nicely balanced by the default scheduler. There's always enough room to move VMs around in case a hypervisor crashes or we need to reboot a host.
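  For example, freeing up a host before a reboot is a one-liner per VM with the OpenNebula CLI (the VM ID and host name below are hypothetical; the same action is available in Sunstone):

      # Live-migrate VM 42 to hypervisor hv02 without interrupting the guest.
      onevm migrate --live 42 hv02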
  • 36. Optimize CPU for homogeneous clusters – Available as a template setting since OpenNebula 5.4.6: set the CPU model to host-passthrough. This option presents the real CPU model to the VMs instead of the default QEMU CPU. It can substantially increase performance, especially if instructions like AES are needed. Do not use it if you have different CPU models across the cluster, since it will cause VMs to crash after a live migration. For older OpenNebula setups, set this as RAW DATA in the template: <cpu mode="host-passthrough"/>
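  On older setups the RAW DATA route might look roughly like this in the VM template (a sketch; the template ID is made up):

      RAW = [ TYPE = "kvm", DATA = "<cpu mode='host-passthrough'/>" ]

      # Apply it, e.g. by appending to an existing template, then redeploy the VM:
      onetemplate update 42 --append raw.tmpl

  The change only takes effect on the next deployment of the VM, not on running guests.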
  • 37. Beware of mkfs.xfs on large StorPool volumes inside VMs – We noticed that when doing mkfs.xfs on large StorPool volumes (e.g. 4TB) there was a big delay before the command completed. What's worse, during this time all VMs on the host starve for IO, because the storpool_block.bin process is using 100% CPU time. (The measurement referenced here was for a 1TB volume.) The reason is that mkfs issues TRIM by default and the StorPool driver supports it. To remedy this, use the -K option for mkfs.xfs or -E nodiscard for mkfs.ext4, e.g.: • mkfs.xfs -K /dev/sdb1 • mkfs.ext4 -E nodiscard /dev/sdb1
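  A quick way to see the effect for yourself (illustrative; /dev/sdb1 stands in for any large StorPool-backed volume, and -f simply allows re-running over an existing filesystem):

      # With discard (the default): mkfs TRIMs the whole volume first - slow on big volumes.
      time mkfs.xfs -f /dev/sdb1
      # Without discard: -K skips the TRIM pass and completes much faster.
      time mkfs.xfs -f -K /dev/sdb1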
  • 38. Use the 10G network for OpenNebula too – This is probably an obvious one, but it deserves a mention. By default your hosts will probably resolve each other via the regular Gigabit network. Forcing them to talk over the 10G storage network will drastically improve live VM migration; the migration is not IO bound, so it will happily saturate the network. Usually this is a simple /etc/hosts modification, but consult with StorPool for your specific use case before doing it. Live-migrating a VM with 8G of RAM takes 7 seconds on 10G; the same VM takes about 1.5 minutes on a Gigabit network and will probably disturb VM communications if the network is saturated. Live migration of highly loaded VMs can take significantly longer and should be monitored. In some cases it's enough to stop busy services for just a second for the migration to complete.
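  The /etc/hosts change can be as simple as mapping each hypervisor's hostname to its 10G address on every host (the hostnames and IPs below are made up for illustration):

      # /etc/hosts on every hypervisor: resolve peers via the 10G storage network,
      # not via their 1G management addresses.
      10.10.10.11  hv01
      10.10.10.12  hv02
      10.10.10.13  hv03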
  • 39. Other tips – These are the more obvious ones that probably everyone uses in production, but they are still worth mentioning: • Use cache=none, io=native when attaching volumes • Use virtio networking instead of the default rtl8139 NIC; the latter has performance issues and drops packets when host IO is high • Measure IO latency instead of IO load to judge saturation – we have several machines with constant 99% IO load which are doing perfectly fine. In /etc/one/vmm_exec/vmm_exec_kvm.conf:
      DISK = [ driver = "raw", cache = "none", io = "native", discard = "unmap", bus = "scsi" ]
      NIC = [ filter = "clean-traffic", model = "virtio" ]
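  On the latency-vs-load point, iostat makes the distinction visible (the device name is illustrative):

      # %util can sit near 100% while the device still answers quickly;
      # watch r_await/w_await (ms) instead to judge real saturation.
      iostat -x /dev/sdb 5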
  • 41. Grafana Dashboards – We adapted the "OpenNebula Dashboards with Graphite and Grafana" scripts by Sebastian Mangelkramer and used them to create our own Grafana dashboards, so we can see at a glance which hypervisors are most loaded and how much overall capacity we have.
  • 42. Grafana TV Dashboard – Why not have a master dashboard on the TV at the office? This gives our team a very quick and easy way to tell if everything is working smoothly: if all you see is green, we're good. This dashboard shows our main DC on the first row, our backup DC on the second, and then some other critical aspects of our system. It's still a WIP, hence the empty space. At the top is the Geckoboard that we use for more business-oriented KPIs.
  • 43. Server Power Usage in Grafana – Part of our virtualization project was to optimize the electricity bill by using fewer servers. We were able to easily measure our power usage with Graphite and Grafana. If you are interested, the script for getting the data into Graphite is here: https://gist.github.com/Jacketbg/6973efdb41a2ecfcf2a83ea84c086887. The Grafana dashboard can be found here: https://gist.github.com/Jacketbg/7255b4f81ebb2de0e8a5708b4335c9d7. Obviously you will need to tweak it, especially the formula for the power bill.
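  The general idea (a rough sketch of our own, not the gist above; the Graphite host and metric path are placeholders) is to poll each server's power draw over IPMI and feed it to Graphite's plaintext listener:

      # Read the instantaneous power draw via IPMI and push it as a Graphite metric.
      WATTS=$(ipmitool dcmi power reading | awk '/Instantaneous/ {print $4}')
      echo "servers.$(hostname -s).power ${WATTS} $(date +%s)" | nc graphite.local 2003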
  • 44. StorPool's Grafana – StorPool were kind enough to give us access to their own Grafana instance, where they collect a lot of internal data about the system and its KPIs. It gives us great insights that we couldn't get otherwise, so we can plan and estimate system load very well.
  • 45. What's Left? – SSD pool: we are currently using only an HDD pool, but we could benefit from a smaller SSD pool for picky MySQL databases. Add more hypervisors: as the service grows, our needs will too; we will probably have rack space for years to come. Add more StorPool nodes: we have maxed out the HDD bays on our current nodes, so we'll probably need to add more nodes in the future. Upgrade StorPool nodes to 40G: currently the nodes use 2x10G ports like the hypervisors; after adding an SSD pool we are considering an upgrade to 40G.
  • 46. THANK YOU! READ MORE ON BLOG.INOREADER.COM – GET THIS PRESENTATION FROM ino.to/one-sofia