This talk will focus on two key aspects of applications that use the HBase APIs. The first part provides a basic overview of how HBase works, followed by an introduction to the HBase APIs with a simple example. The second part extends what we've learned to secure the HBase application running on MapR's industry-leading Hadoop.
Keys Botzum is a Senior Principal Technologist with MapR Technologies. He has over 15 years of experience in large scale distributed system design. At MapR his primary responsibility is working with customers as a consultant, but he also teaches classes, contributes to documentation, and works with MapR engineering. Previously he was a Senior Technical Staff Member with IBM and a respected author of many articles on WebSphere Application Server as well as a book. He holds a Masters degree in Computer Science from Stanford University and a B.S. in Applied Mathematics/Computer Science from Carnegie Mellon University.
3. What's HBase?
A NoSQL database
– Synonym for ‘non-traditional’ database
A distributed columnar data store
– Storage layout implies performance characteristics
The “Hadoop” database
A semi-structured database
– No rigid requirements to define columns or even data types in advance
– It’s all bytes to HBase
A persistent sorted Map of Maps
– The programmer's view
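The "persistent sorted map of maps" view can be sketched with plain Java collections. This is an illustrative in-memory stand-in, not the HBase API: the outer TreeMap sorts by row key, the inner one by column, and values are just bytes.

```java
import java.util.TreeMap;

public class MapOfMaps {
    public static void main(String[] args) {
        // Row key -> (column key -> value); both levels kept in sorted order,
        // and values are opaque bytes, just as they are to HBase
        TreeMap<String, TreeMap<String, byte[]>> table = new TreeMap<>();

        table.computeIfAbsent("smithj", k -> new TreeMap<>())
             .put("Data:street", "Main street".getBytes());
        table.computeIfAbsent("adamsb", k -> new TreeMap<>())
             .put("Data:street", "Oak avenue".getBytes());

        // Rows come back in sorted row-key order, like an HBase scan would return them
        for (String rowKey : table.keySet()) {
            System.out.println(rowKey + " -> " + table.get(rowKey).keySet());
        }
    }
}
```

Note that "adamsb" prints before "smithj" even though it was inserted second: sort order comes from the key, not insertion order.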
4. Column Oriented
[Diagram: column families CF1 and CF2, each a set of columns colA–colC holding values for rows keyed axxx through gxxx; here the customer id is the row key, CF1 holds customer address data, and CF2 holds customer order data]
Row is indexed by a key
– Data is stored sorted by key
Data is stored by columns, grouped into column families
– Each family is a file of column values laid out in sorted order by row key
– Contrast this with a traditional row-oriented database, where rows are stored together with fixed space allocated for each row
5. HBase Data Model – Row Keys
Row Keys: identify the rows in an HBase table.
[Table: row keys axxx through sxxx grouped into regions R1–R3, with values sparsely populated across column families CF1 (colA–colC) and CF2 (colA–colD)]
6. Rows are Stored in Sorted Order
Sorting of row keys is based upon binary values
– Sort is lexicographic at the byte level
– Comparison is "left to right"
Example:
– Sort order for Strings 1, 2, 3, …, 99, 100:
  1, 10, 100, 11, 12, …, 2, 20, 21, …, 9, 91, 92, …, 98, 99
– Sort order for Strings 001, 002, 003, …, 099, 100:
  001, 002, 003, …, 099, 100
– What if the row keys were numbers converted to fixed-size binary?
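The two orderings above can be demonstrated with plain Java. This is a stdlib-only sketch: `encode` mimics HBase's fixed-width big-endian long encoding, and `compare` mimics byte-wise unsigned comparison of row keys (numeric order is preserved for non-negative values).

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

public class KeySortDemo {
    // Fixed-width big-endian encoding, like an 8-byte binary row key
    static byte[] encode(long n) {
        return ByteBuffer.allocate(Long.BYTES).putLong(n).array();
    }

    // Byte-wise unsigned, left-to-right comparison of keys
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // String keys sort lexicographically, interleaving the numbers
        List<String> strKeys = Arrays.asList("1", "2", "10", "11", "100", "20");
        strKeys.sort(String::compareTo);
        System.out.println(strKeys); // [1, 10, 100, 11, 2, 20]

        // Fixed-width binary keys sort in numeric order (non-negative values)
        List<Long> nums = Arrays.asList(100L, 2L, 11L, 1L, 20L, 10L);
        nums.sort((x, y) -> compare(encode(x), encode(y)));
        System.out.println(nums); // [1, 2, 10, 11, 20, 100]
    }
}
```

Zero-padding the strings (001, 002, …) achieves the same effect at the cost of wider keys; fixed-width binary gives numeric order in 8 bytes.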
7. Tables are split into Regions = contiguous keys
Source: Diagram from Lars George's HBase: The Definitive Guide.
[Diagram: a table partitioned into Region 1 (key range axxx–gxxx) and Region 2 (key range Lxxx–zxxx), each region holding CF1 and CF2 column data, with one Region Server serving Regions 2 and 3]
Tables are partitioned into key ranges (regions)
– Region = contiguous keys, served by nodes (RegionServers)
– Regions are spread across the cluster: S1, S2, …
8. HBase Data Model – Cells
Value for each cell is specified by complete coordinates:
– RowKey : Column Family : Column : Version → Value
– Key:CF:Col:Version:Value
Example: row key smithj with column key Data:street at version 12734567800 holds the value "Main street"
9. Sparsely-Populated Data
Missing values: Cells remain empty and consume no storage
[Table: the same rows axxx through sxxx, here grouped into Regions 1 and 2 plus R3, showing cells populated in only some of CF1's and CF2's columns]
10. HBase Data Model Summary
Efficient/Flexible
– Storage allocated for columns only as needed on a given row
• Great for sparse data
• Great for data of widely varying size
– Adding columns can be done at any time without impact
– Compression and versioning are usually built in and take advantage of column family storage (like data stored together)
Highly Scalable
– Data is sharded amongst regions based upon key
• Regions are distributed in cluster
– Grouping by key = related data stored together
Finding data
– Key implies region and server, column family implies file
– Efficiently get to any data by key
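The "key implies region" lookup above can be modeled with a sorted map of region start keys. This is an illustrative sketch, not HBase's actual META lookup: the greatest start key less than or equal to a row key identifies the region that owns it.

```java
import java.util.TreeMap;

public class RegionLookup {
    public static void main(String[] args) {
        // Region start key -> region name; regions cover contiguous key ranges
        TreeMap<String, String> regions = new TreeMap<>();
        regions.put("axxx", "Region1");
        regions.put("hxxx", "Region2");
        regions.put("kxxx", "Region3");

        // floorEntry finds the greatest start key <= the row key
        System.out.println(regions.floorEntry("gaaa").getValue()); // Region1
        System.out.println(regions.floorEntry("mmm").getValue());  // Region3
    }
}
```

Because the lookup is a single sorted-map probe, any row can be routed to its region (and hence its server) without scanning the table.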
12. Basic Table Operations
Create the table and define column families before data is imported
– But not the row keys or the number/names of columns
Basic data access operations (CRUD):
put Inserts data into rows (both add and update)
get Accesses data from one row
scan Accesses data from a range of rows
delete Deletes a row, a range of rows, or columns
13. CRUD Operations Follow A Pattern (mostly)
Most common pattern
– Instantiate object for an operation: Put put = new Put(key)
– Add or Set attributes to specify what you need: put.add(…)
– Execute the operation against the table: myTable.put(put)
// Insert value1 into rowKey in columnFamily:columnName1
Put put = new Put(rowKey);
put.add(columnFamily, columnName1, value1);
myTable.put(put);
// Retrieve values from rowKey in columnFamily:columnName1
Get get = new Get(rowKey);
get.addColumn(columnFamily, columnName1);
Result result = myTable.get(get);
14. Put Example
byte[] invTable = Bytes.toBytes("/path/Inventory");
byte[] stockCF = Bytes.toBytes("stock");
byte[] quantityCol = Bytes.toBytes("quantity");
long amt = 24L;
HTableInterface table = new HTable(hbaseConfig, invTable);
Put put = new Put(Bytes.toBytes("pens"));
put.add(stockCF, quantityCol, Bytes.toBytes(amt));
table.put(put);
Result: table Inventory now holds row pens with stock:quantity = 24
15. Put Operation – Add method
Once a Put instance is created you call an add method on it
Typically you add a value for a specific column in a column family
– ("column name" and "qualifier" mean the same thing)
Optionally you can set a timestamp for a cell
Put add(byte[] family, byte[] qualifier, long ts, byte[] value)
Put add(byte[] family, byte[] qualifier, byte[] value)
16. Put Operation – Single Put Example
Adding multiple column values to a row
byte[] tableName = Bytes.toBytes("/path/Shopping");
byte[] itemsCF = Bytes.toBytes("items");
byte[] penCol = Bytes.toBytes("pens");
byte[] noteCol = Bytes.toBytes("notes");
byte[] eraserCol = Bytes.toBytes("erasers");
HTableInterface table = new HTable(hbaseConfig, tableName);
Put put = new Put(Bytes.toBytes("mike"));
put.add(itemsCF, penCol, Bytes.toBytes(5L));
put.add(itemsCF, noteCol, Bytes.toBytes(5L));
put.add(itemsCF, eraserCol, Bytes.toBytes(2L));
table.put(put);
18. Get Operation – Single Get Example
byte[] tableName = Bytes.toBytes("/path/Shopping");
byte[] itemsCF = Bytes.toBytes("items");
byte[] penCol = Bytes.toBytes("pens");
HTableInterface table = new HTable(hbaseConfig, tableName);
Get get = new Get(Bytes.toBytes("mike"));
get.addColumn(itemsCF, penCol);
Result result = table.get(get);
byte[] val = result.getValue(itemsCF, penCol);
System.out.println("Value: " + Bytes.toLong(val));
19. Get Operation – Add And Set methods
Using just a get object will return everything for a row.
To narrow down results, call add
– addFamily: get all columns for a specific family
– addColumn: get a specific column
To further narrow down results, specify more details via one or more set calls, then call add
– setTimeRange: retrieve columns within a specific range of version timestamps
– setTimestamp: retrieve columns with a specific timestamp
– setMaxVersions: set the number of versions of each column to be returned
– setFilter: add a filter
get.addColumn(columnFamilyName, columnName1);
20. Result – Retrieve A Value From A Result
public static final byte[] ITEMS_CF = Bytes.toBytes("items");
public static final byte[] PENS_COL = Bytes.toBytes("pens");
Get g = new Get(Bytes.toBytes("Adam"));
g.addColumn(ITEMS_CF, PENS_COL);
Result result = table.get(g);
byte[] b = result.getValue(ITEMS_CF, PENS_COL);
long valueInColumn = Bytes.toLong(b);
http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Result.html
Row Adam holds: Items:pens = 18, Items:notepads = 7, Items:erasers = 10
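The Bytes helper stores a long as 8 big-endian bytes. A stdlib-only sketch of the round trip, with ByteBuffer standing in for HBase's Bytes class (which needs the HBase jar):

```java
import java.nio.ByteBuffer;

public class BytesRoundTrip {
    // Stand-in for Bytes.toBytes(long): 8 big-endian bytes
    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }

    // Stand-in for Bytes.toLong(byte[])
    static long toLong(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        byte[] encoded = toBytes(18L);        // the value stored in Items:pens for row Adam
        System.out.println(encoded.length);   // 8
        System.out.println(toLong(encoded));  // 18
    }
}
```

This is why the Get example must call Bytes.toLong on the raw cell value before printing it: the table itself only ever sees the 8-byte array.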
21. Other APIs
Not covering append, delete, and scan
Not covering administrative APIs
23. Tables and Files in a Unified Storage Layer
[Diagram comparing three stacks: Apache HBase on Hadoop (HBase JVM over an HDFS JVM over an ext3 filesystem over disks); Apache HBase on MapR Filesystem (HBase JVM over MapR-FS over disks, via the HDFS API); and M7 tables integrated into the filesystem (MapR-FS directly over disks, exposing both the HBase API and the HDFS API)]
MapR Filesystem is an integrated system
– Tables and files in a unified filesystem, based on MapR's enterprise-grade storage layer
24. Portability
MapR tables use the HBase data model and API
Apache HBase applications work as-is on MapR tables
–No need to recompile
–No vendor lock-in
25. MapR M7 Table Storage
Table regions live inside a MapR container
– Served by MapR fileserver service running on nodes
– HBase RegionServer and HBase Master services are not required
[Diagram: regions, each a sorted set of key/column values, grouped inside MapR containers and served directly to client nodes]
33. Source: "get row"
Whole row
Get g = new Get(Bytes.toBytes(key));
Result result = getTable().get(g);
Just base column family
Get g = new Get(Bytes.toBytes(key));
g.addFamily(BASE_CF);
Result result = getTable().get(g);
35. Source: "parse row"
// get salary information
Map<byte[], byte[]> m = r.getFamilyMap(SALARY_CF);
Iterator<Map.Entry<byte[], byte[]>> i = m.entrySet().iterator();
while (i.hasNext()) {
    Map.Entry<byte[], byte[]> entry = i.next();
    Integer year = Integer.parseInt(Bytes.toString(entry.getKey()));
    Integer amt = Integer.parseInt(Bytes.toString(entry.getValue()));
    e.getSalary().put(year, amt);
}
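Stripped of the HBase types, the loop above just converts a sorted map of string keys/values into Integer pairs. A stand-alone sketch, with plain String maps standing in for the byte[] maps (since getFamilyMap needs a live table):

```java
import java.util.Map;
import java.util.TreeMap;

public class ParseSalaries {
    public static void main(String[] args) {
        // What getFamilyMap(SALARY_CF) would return, with Strings standing in for byte[]
        TreeMap<String, String> familyMap = new TreeMap<>();
        familyMap.put("2010", "90000");
        familyMap.put("2011", "91000");

        // Parse each qualifier (year) and cell value (amount) into integers
        Map<Integer, Integer> salary = new TreeMap<>();
        for (Map.Entry<String, String> entry : familyMap.entrySet()) {
            salary.put(Integer.parseInt(entry.getKey()),
                       Integer.parseInt(entry.getValue()));
        }
        System.out.println(salary); // {2010=90000, 2011=91000}
    }
}
```

Using column qualifiers as data (one column per year) is the schema trick being demonstrated: the family map's sorted keys give the salary history in year order for free.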
36. Demo
Create a table using MCS
Create a table and column families using maprcli
$ maprcli table create -path /user/keys/employees
$ maprcli table cf create -path /user/keys/employees -cfname base
$ maprcli table cf create -path /user/keys/employees -cfname salary
37. Demo
Populate with sample data using hbase shell
hbase> put '/user/keys/employees', 'k1', 'base:lastName', 'William'
> put '/user/keys/employees', 'k1', 'base:firstName', 'John'
> put '/user/keys/employees', 'k1', 'base:address', '123 street, springfield, VA'
> put '/user/keys/employees', 'k1', 'base:ssn', '999-99-9999'
> put '/user/keys/employees', 'k1', 'salary:2010', '90000'
> put '/user/keys/employees', 'k1', 'salary:2011', '91000'
> put '/user/keys/employees', 'k1', 'salary:2012', '92000'
> put '/user/keys/employees', 'k1', 'salary:2013', '93000'
…
38. Demo
Fetch a record using the Java program
$ ./run employees get k1
Use command get against table /user/keys/employees
Employee record:
Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={2010=90000, 2011=91000, 2012=92000, 2013=93000}]
41. Row Key
Secondary ways of searching
– Other tables as indexes?
Long term data evolution
– Avro?
– Protobufs?
Security
– SSN is sensitive
– Salary looks kind of sensitive
What Didn't I Consider?
43. MapR Tables Security
Access Control Expressions (ACEs)
– Boolean logic to control access at table, column family, and column level
44. ACE Highlights
Creator of table has all rights by default
– Others have none
Can grant admin rights without granting read/write rights
Defaults for column families set at table level
Access to data depends on column family and column access controls
Boolean logic
45. MapR Tables Security
Leverages MapR security when enabled
– Wire level authentication
– Wire level encryption
– Trivial to configure
• Most reasonable settings by default
• No Kerberos required!
– Portable
• No MapR specific APIs
46. Demo
Enable cluster security
Yes, that’s it!
– Now all Web UI and CLI access requires authentication
– Traffic is now authenticated using encrypted credentials
– Most traffic is encrypted and bulk data transfer traffic can be encrypted
# configure.sh -C hostname -Z hostname -secure -genkeys
47. Demo
Fetch a record using the Java program when not authenticated
$ ./run employees get k1
Use command get against table /user/keys/employees
14/03/14 18:42:39 ERROR fs.MapRFileSystem: Exception while trying to get currentUser
java.io.IOException: failure to login: Unable to obtain MapR credentials
48. Demo
Fetch a record using the Java program after authenticating
$ maprlogin password
[Password for user 'keys' at cluster 'my.cluster.com': ]
MapR credentials of user 'keys' for cluster 'my.cluster.com' are written to '/tmp/maprticket_1000'
$ ./run employees get k1
Use command get against table /user/keys/employees
Employee record:
Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={2010=90000, 2011=91000, 2012=92000, 2013=93000}]
49. Demo
Fetch a record using the Java program as someone not authorized to the table
$ maprlogin password
[Password for user 'fred' at cluster 'my.cluster.com': ]
MapR credentials of user 'fred' for cluster 'my.cluster.com' are written to '/tmp/maprticket_2001'
$ ./run /user/keys/employees get k1
Use command get against table /user/keys/employees
2014-03-14 18:49:20,2787 ERROR JniCommon fs/client/fileclient/cc/jni_common.cc:7318 Thread: 139674989631232 Error in DBGetRPC for table /user/keys/employees, error: Permission denied(13)
Exception in thread "main" java.io.IOException: Error: Permission denied(13)
50. Demo
Set ACEs to allow read to base information but not salary
Fetch the whole record using the Java program
$ ./run /user/keys/employees get k1
Use command get against table /user/keys/employees
2014-03-14 18:53:15,0806 ERROR JniCommon fs/client/fileclient/cc/jni_common.cc:7318 Thread: 139715048077056 Error in DBGetRPC for table /user/keys/employees, error: Permission denied(13)
Exception in thread "main" java.io.IOException: Error: Permission denied(13)
51. Demo
Set ACEs to allow read to base information but not salary
Fetch just the base record using the Java program
$ ./run employees getbase k1
Use command get against table /user/keys/employees
Employee record:
Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={}]
53. References
http://www.mapr.com/blog/getting-started-mapr-security-0
http://www.mapr.com/
http://hadoop.apache.org/
http://hbase.apache.org/
http://tech.flurry.com/2012/06/12/137492485/
http://en.wikipedia.org/wiki/Lexicographical_order
HBase in Action, Nick Dimiduk, Amandeep Khurana
HBase: The Definitive Guide, Lars George
Note: this presentation includes materials from the MapR HBase training classes.
56. What is HBase? (Cluster View)
ZooKeeper (ZK)
HMaster (HM)
Region Servers (RS)
For MapR, there is less delineation between control and data nodes.
[Diagram: master servers running ZooKeeper, the NameNode, and HMaster instances; slave servers each running a Region Server alongside a Data Node]
57. What is a Region?
The basic partitioning/sharding unit of HBase.
Each region is assigned a range of keys it is responsible for.
Region servers serve data for reads and writes
[Diagram: a client consulting ZooKeeper and the HMaster, then reading and writing regions (sorted key/column values) hosted on Region Servers]
Editor's Notes
Let’s take a quick look at the relational database model versus non-relational database models. Most of us are familiar with Relational Database Management Systems (RDBMS). We’ll briefly compare the relational model to the column family oriented model in the context of big data. This will help us fully understand the structure of MapR Tables and their underlying concepts.
In the relational model data is normalized: it is split into tables when stored, and then joined back together when queried. We will see that HBase has a different model. Relational databases brought us many benefits: they take care of persistence; they manage concurrency for transactions; SQL has become a de facto standard; relational databases provide lots of tools; they have become very important for integration of applications and for reporting; many business rules map well to a tabular structure and relationships. Relational databases provide an efficient and robust structure for storing data, a standard model of persistence, and a standard language of data manipulation (SQL). Relational databases handle concurrency by controlling all access to data through transactions, and this transactional mechanism has worked well to contain the complexity of concurrency. Shared databases have worked well for integration of applications. Relational databases have succeeded because they provide these benefits in a standard way.
• Row-oriented: each row is indexed by a key that you can use for lookup (for example, the customer with the ID 1234). • Column-family oriented: each column family groups like data (customer address, order) within rows. You can think of a row as the join of all values in all column families. Grouping the data by key is central to running on a cluster and sharding. The key acts as the atomic unit for updates.
Data stored in the "big table" is located by its "rowkey." This is like a primary key in a relational database. Records in HBase are stored in sorted order according to rowkey. This is a fundamental tenet of HBase and is also a critical semantic used in HBase schema design.
Tables are divided into sequences of rows, by key range, called regions. These regions are then assigned to the data nodes in the cluster, called RegionServers. This scales read and write capacity by spreading load across the cluster.
If a cell is empty then it does not consume disk space. Sparseness provides schema flexibility: you can add columns later, with no need to transform the entire schema.
Once you have created a table you define column families. Columns may be defined on the fly. You can define them ahead of time, but that is not common practice. That's it; you don't define rows ahead of time. Table operations are fairly simple: put inserts data into rows (both add and update), get accesses data from one row, and scan accesses data from a range of rows.
As we go through the details of the HBase API, you will see that there is a pattern that is followed most of the time for CRUD operations. First you instantiate an object for the operation you're about to execute: put, get, scan, or delete. Then you add details to that object and specify what you need from it; you do this by calling an add method and sometimes a set method. Once your object is specified with these attributes you are ready to execute the operation against a table. To do that you invoke the operation with the object you've prepared. For example, for a put operation you call table.put() and pass the put object you created as the parameter. Let's look at the Put operation now.
Here is an example of a single put operation. Let's look at what all this means.
Now that you have an instance of a Put object for a specified row key you should provide some details, specifically what value you need to insert or update. In general you add a value for a column that belongs to a column family; that's the most common case. Just like in the constructor for the Put object itself you don't have to provide a timestamp, but there is a method that lets you control that if you need to, by providing a timestamp argument.
This is the same thing as what we saw earlier, except that now we add several values to the same Put object. Each call to add() specifies exactly one column or, in combination with an optional timestamp, one single cell. This is just to show you that even though this is a single put operation, you typically call add more than once. We saw that one of the add methods takes a KeyValue parameter, so let's look at the KeyValue class.
Everything in HBase is stored as bytes. The Bytes class is a utility class that provides methods to convert Java types to and from byte[] arrays. The native Java types supported are String, boolean, short, int, long, double, and float. The HBase Bytes class is similar to the Java ByteBuffer class, but the HBase class performs all of its operations without instantiating new objects (and thus avoids garbage collection). Note to instructor: optionally show the javadoc to point out what conforms and what doesn't conform to this pattern. There are other methods that are worth looking at, and we will do that in a later session after we've gone through CRUD operations.
Here is an example of a single get operation. You can see it is following the pattern we mentioned earlier. The only notable difference is that we call addColumn instead of just an add. Let’s look at all this in detail now.
You call add to specify what you want returned; this is similar to what we saw for Put, except that here you specify the family or column you are interested in. If you want to be more precise then you should call one of the set methods. You can control what timestamp or time range you are interested in, and how many versions of the data you want to see. You can even add a filter, and we will talk about filters later as they deserve more than just passing attention.
In this get operation we have narrowed things down to a specific column. Once we got the result back we invoke one of the convenience methods from Result, here getValue, to retrieve the value in the Result instance. To see more about the Result class go to http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Result.html. We've added and retrieved data, so now to complete the CRUD cycle we need to look at deleting data.
MapR takes things one step further by integrating table storage into MapR-FS, eliminating all JVM layers and interacting directly with disks for both file and table storage. The result is an enterprise-grade datastore for tables with all the reliability of a MapR cluster and no additional administrative burden: fewer layers and a unified namespace. Again, MapR preserves the standard Hadoop and HBase APIs, so all ecosystem components continue to operate without modification. Fewer layers; a single hop to data; no compactions, low I/O amplification; seamless splits, automatic merges; instant recovery. With the MapR M5 Edition of the Hadoop stack, the company basically pushed HDFS down into a distributed NFS file system, supporting all of the APIs of HDFS. With the MapR M7 Edition, the file system can not only handle small chunks of data but also small pieces of HBase tables. This eliminates some layers of Java virtualization, and because of the way MapR has implemented its code, all of the HBase APIs are supported, so HBase applications don't know they are using MapR's file system.
In MapR, tables are part of the file system, so it's a single hop: the client talks to MapR-FS, which handles write/read operations to the file system directly. MapR Filesystem is an integrated system: tables and files in a unified filesystem, based on MapR's enterprise-grade storage layer. MapR tables use the HBase data model and API. Key differences between MapR tables and Apache HBase: tables are part of the MapR file system; no RegionServer and HBase Master daemons; write-ahead logs (WALs) are much smaller; no manual compaction; no major compaction delays; region splits are seamless and require no manual intervention.
MapR Filesystem provides strong consistency for table data and a high level of availability in a distributed environment, while also solving the common problems with other popular NoSQL options, such as compaction delays and manual administration.
Can also use the hbase shell: create '/user/keys/e3', 'base', 'salary'
hbase shell
Use MCS to set ACEs
Use MCS to set ACEs
ACE on the SSN column. Filtering out responses (coming soon in a fix).
Let's review the HBase data model as a quick refresher of terms and concepts.