Cloud archiving can be a high-value technology for businesses – if done right. But it's easy to get wrong, and with archiving you often won't know until it's too late. That's why we need to learn from past mistakes and follow a battle-hardened plan to make sure cloud archiving is convenient and safe for our businesses. Guest speakers Geoff Bourgeois (CEO, HubStor) and Greg Campbell (CTO, HubStor) take you through five examples of cloud archiving done right. Don't archive to the cloud without attending this webinar!
View the webinar recording here: http://bit.ly/2bYzz99
HubStor Inc.
515 Legget Dr., Suite 800
Kanata, ON K2K 3G4
(855)704-1737
www.hubstor.net
Bishop Technologies, Inc.
2205 Point Blvd. Suite 160
Elgin, IL 60123
(847)756-7890
www.bishopit.com
THANK YOU
BISHOPIT.COM
Editor's Notes
Operating your own search environment is a tedious endeavor.
Here we talk about all or nothing indexing…
How that is a real problem at scale.
And it doesn’t jibe with cold-storage strategies, where you want to be really cost efficient. Indexing massive volumes of data isn’t cost efficient.
In general, content indexing is an expensive task, especially at scale.
There’s just a heavy infrastructure requirement, both compute and storage.
But with cloud archiving, there are a couple of things we can do to make it much, much better.
First off, (click to animate), we can tackle the problem of it being so expensive.
In the cloud, search-as-a-service economics make it much more affordable. You’re not buying hardware, you can scale up and scale down, and the infrastructure is fully managed for you in the cloud.
And for the problem of scale (click to animate), we can be smarter about the data that we index. Instead of indexing everything, we can scope indexing to targeted data sets within our archive.
For example, an investigation looks at specific users’ data within a specific timeframe. In the cloud, we can scope indexing to just this data, leaving everything else unindexed.
Why is this an advantage? For starters, our search infrastructure doesn’t have to be large to handle the indexing load. If your archive is, say, 200 TB, indexing data by specific investigation scopes might mean you’re only ever indexing 5-10% of that 200 TB archive. Costs are contained. Secondly, it’s an advantage because you can search within your scope much more quickly.
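The scoping idea above can be sketched in a few lines of Python. This is a minimal illustration, not HubStor's actual implementation: the archive items, field names, and sizes are all hypothetical, chosen so the scoped slice lands in the 5-10% range discussed.

```python
from datetime import date

# Hypothetical archive inventory: (user, creation date, size in TB).
# All names and figures are illustrative only.
archive = [
    {"user": "alice", "created": date(2016, 3, 1),  "size_tb": 8.0},
    {"user": "bob",   "created": date(2015, 7, 15), "size_tb": 98.0},
    {"user": "alice", "created": date(2014, 1, 5),  "size_tb": 88.0},
    {"user": "carol", "created": date(2016, 5, 20), "size_tb": 6.0},
]

def index_scope(items, users, start, end):
    """Select only the items inside the investigation scope for indexing;
    everything else in the archive stays unindexed."""
    return [
        item for item in items
        if item["user"] in users and start <= item["created"] <= end
    ]

# Investigation: specific users' data within a specific timeframe.
scope = index_scope(archive, {"alice", "carol"},
                    date(2016, 1, 1), date(2016, 12, 31))

total_tb = sum(item["size_tb"] for item in archive)
indexed_tb = sum(item["size_tb"] for item in scope)
print(f"Indexing {indexed_tb:.0f} TB of {total_tb:.0f} TB "
      f"({100 * indexed_tb / total_tb:.0f}%)")
# → Indexing 14 TB of 200 TB (7%)
```

The point of the sketch: the search infrastructure only ever has to handle the scoped slice, so its size (and cost) tracks the investigation, not the whole archive.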
You have contractual lock-in… a term commitment and penalties for getting out.
You have to go through the vendor to get data out.
You have to pay for special migration services and tools to get data out.