
Apache HBase™ Reference Guide

Apache HBase Team

Version 2.3.0
Contents
Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Getting Started. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2. Quick Start - Standalone HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Apache HBase Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3. Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4. Basic Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5. HBase run modes: Standalone and Distributed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6. Running and Confirming Your Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7. Default Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
8. Example Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
9. The Important Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10. Dynamic Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Upgrading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
11. HBase version number and compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
12. Rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
13. Upgrade Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
The Apache HBase Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
14. Scripting with Ruby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15. Running the Shell in Non-Interactive Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
16. HBase Shell in OS Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
17. Read HBase Shell Commands from a Command File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18. Passing VM Options to the Shell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
19. Overriding configuration starting the HBase Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
20. Shell Tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Data Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
21. Conceptual View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
22. Physical View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
23. Namespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
24. Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
25. Row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
26. Column Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
27. Cells. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
28. Data Model Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
29. Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
30. Sort Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
31. Column Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
32. Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
33. ACID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
HBase and Schema Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
34. Schema Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
35. Table Schema Rules Of Thumb. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
RegionServer Sizing Rules of Thumb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
36. On the number of column families . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
37. Rowkey Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
38. Number of Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
39. Supported Datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
40. Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
41. Time To Live (TTL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
42. Keeping Deleted Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
43. Secondary Indexes and Alternate Query Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
44. Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
45. Schema Design Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
46. Operational and Performance Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
47. Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
HBase and MapReduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
48. HBase, MapReduce, and the CLASSPATH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
49. MapReduce Scan Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
50. Bundled HBase MapReduce Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
51. HBase as a MapReduce Job Data Source and Data Sink. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
52. Writing HFiles Directly During Bulk Import. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
53. RowCounter Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
54. Map-Task Splitting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
55. HBase MapReduce Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
56. Accessing Other HBase Tables in a MapReduce Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
57. Speculative Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
58. Cascading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Securing Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
59. Web UI Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
60. Secure Client Access to Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
61. Simple User Access to Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
62. Securing Access to HDFS and ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
63. Securing Access To Your Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
64. Security Configuration Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
65. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
66. Catalog Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
67. Client. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
68. Client Request Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
69. Master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
70. RegionServer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
71. Regions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
72. Bulk Loading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
73. HDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
74. Timeline-consistent High Available Reads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
75. Storing Medium-sized Objects (MOB). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
76. Scan over snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
In-memory Compaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
77. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
78. Enabling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
RegionServer Offheap Read/Write Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
79. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
80. Offheap read-path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
81. Read block from HDFS to offheap directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
82. Offheap write-path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Apache HBase APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
83. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Apache HBase External APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
84. REST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
85. Thrift. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
86. C/C++ Apache HBase Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
87. Using Java Data Objects (JDO) with HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
88. Scala . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
89. Jython . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Thrift API and Filter Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
90. Filter Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
HBase and Spark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
91. Basic Spark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
92. Spark Streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
93. Bulk Load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
94. SparkSQL/DataFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Apache HBase Coprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
95. Coprocessor Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
96. Types of Coprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
97. Loading Coprocessors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
98. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
99. Guidelines For Deploying A Coprocessor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
100. Restricting Coprocessor Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Apache HBase Performance Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
101. Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
102. Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
103. Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
104. HBase Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
105. ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
106. Schema Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
107. HBase General Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
108. Writing to HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
109. Reading from HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
110. Deleting from HBase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
111. HDFS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
112. Amazon EC2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
113. Collocating HBase and MapReduce. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
114. Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Profiler Servlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
115. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
116. Prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
117. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
118. UI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
119. Notes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Troubleshooting and Debugging Apache HBase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
120. General Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
121. Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
122. Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
123. Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
124. Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
125. MapReduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
126. NameNode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
127. Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
128. RegionServer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
129. Master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
130. ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
131. Amazon EC2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
132. HBase and Hadoop version issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
133. HBase and HDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
134. Running unit or integration tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
135. Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
136. Cryptographic Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
137. Operating System Specific Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
138. JDK Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Apache HBase Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
139. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
140. Schema Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
141. Performance/Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
Apache HBase Operational Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
142. HBase Tools and Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
143. Region Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
144. Node Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
145. HBase Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
146. HBase Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
147. Cluster Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
148. Running Multiple Workloads On a Single Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
149. HBase Backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
150. HBase Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
151. Storing Snapshots in Microsoft Azure Blob Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
152. Capacity Planning and Region Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
153. Table Rename. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
154. RegionServer Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
155. Region Normalizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
156. Auto Region Reopen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Building and Developing Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
157. Getting Involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
158. Apache HBase Repositories. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
159. IDEs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
160. Building Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
161. Releasing Apache HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
162. Voting on Release Candidates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
163. Announcing Releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
164. Generating the HBase Reference Guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
165. Updating hbase.apache.org. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
166. Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
167. Developer Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
Unit Testing HBase Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
168. JUnit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
169. Mockito . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
170. MRUnit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
171. Integration Testing with an HBase Mini-Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
Protobuf in HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
172. Protobuf. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
Procedure Framework (Pv2): HBASE-12439 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
173. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
174. Subprocedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
175. ProcedureExecutor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
176. Nonces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
177. Wait/Wake/Suspend/Yield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
178. Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
179. Procedure Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
180. References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
AMv2 Description for Devs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
181. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
182. New System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
183. Procedures Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
184. UI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
185. Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
186. Implementation Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
187. New Configs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
188. Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
189. Using existing ZooKeeper ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
190. SASL Authentication with ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
Community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
191. Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
192. Community Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
193. Commit Message format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
hbtop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
194. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
195. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
196. Others. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
Appendix A: Contributing to Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
Appendix B: FAQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
Appendix C: Access Control Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
Appendix D: Compression and Data Block Encoding In HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
Appendix E: SQL over HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788
Appendix F: YCSB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Appendix G: HFile format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
Appendix H: Other Information About HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Appendix I: HBase History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
Appendix J: HBase and the Apache Software Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
Appendix K: Apache HBase Orca . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
Appendix L: 0.95 RPC Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
Appendix M: Known Incompatibilities Among HBase Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
197. HBase 2.0 Incompatible Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
Preface
This is the official reference guide for the HBase version it ships with.

Herein you will find either the definitive documentation on an HBase topic as of its standing when
the referenced HBase version shipped, or it will point to the location in Javadoc or JIRA where the
pertinent information can be found.

About This Guide


This reference guide is a work in progress. The source for this guide can be found in the
src/main/asciidoc directory of the HBase source. This reference guide is marked up using AsciiDoc,
from which the finished guide is generated as part of the 'site' build target. Run

mvn site

to generate this documentation. Amendments and improvements to the documentation are
welcomed. Click this link to file a new documentation bug against Apache HBase with some values
pre-selected.

Contributing to the Documentation


For an overview of AsciiDoc and suggestions to get started contributing to the documentation, see
the relevant section later in this documentation.

Heads-up if this is your first foray into the world of distributed computing…
If this is your first foray into the wonderful world of Distributed Computing, then you are in for
some interesting times. First off, distributed systems are hard; making a distributed system hum
requires a disparate skillset that spans systems (hardware and software) and networking.

Your cluster’s operation can hiccup because of any of a myriad set of reasons from bugs in HBase
itself through misconfigurations — misconfiguration of HBase but also operating system
misconfigurations — through to hardware problems whether it be a bug in your network card
drivers or an underprovisioned RAM bus (to mention two recent examples of hardware issues that
manifested as "HBase is slow"). You will also need to recalibrate if up to this point your
computing has been bound to a single box. Here is one good starting point: Fallacies of
Distributed Computing.

That said, you are welcome.


It’s a fun place to be.
Yours, the HBase Community.

Reporting Bugs
Please use JIRA to report non-security-related bugs.

To protect existing HBase installations from new vulnerabilities, please do not use JIRA to report
security-related bugs. Instead, send your report to the mailing list [email protected], which
allows anyone to send messages, but restricts who can read them. Someone on that list will contact
you to follow up on your report.

Support and Testing Expectations

The phrases "supported", "not supported", "tested", and "not tested" occur in several places
throughout this guide. In the interest of clarity, here is a brief explanation of what is generally
meant by these phrases, in the context of HBase.

Commercial technical support for Apache HBase is provided by many Hadoop
vendors. This is not the sense in which the term "support" is used in the context of
the Apache HBase project. The Apache HBase team assumes no responsibility for
your HBase clusters, your configuration, or your data.

Supported
In the context of Apache HBase, "supported" means that HBase is designed to work in the way
described, and deviation from the defined behavior or functionality should be reported as a bug.

Not Supported
In the context of Apache HBase, "not supported" means that a use case or use pattern is not
expected to work and should be considered an antipattern. If you think this designation should
be reconsidered for a given feature or use pattern, file a JIRA or start a discussion on one of the
mailing lists.

Tested
In the context of Apache HBase, "tested" means that a feature is covered by unit or integration
tests, and has been proven to work as expected.

Not Tested
In the context of Apache HBase, "not tested" means that a feature or use pattern may or may not
work in a given way, and may or may not corrupt your data or cause operational issues. It is an
unknown, and there are no guarantees. If you can provide proof that a feature designated as
"not tested" does work in a given way, please submit the tests and/or the metrics so that other
users can gain certainty about such features or use patterns.

Getting Started

Chapter 1. Introduction
Quickstart will get you up and running on a single-node, standalone instance of HBase.

Chapter 2. Quick Start - Standalone HBase
This section describes the setup of a single-node standalone HBase. A standalone instance has all
HBase daemons — the Master, RegionServers, and ZooKeeper — running in a single JVM persisting
to the local filesystem. It is our most basic deploy profile. We will show you how to create a table in
HBase using the hbase shell CLI, insert rows into the table, perform put and scan operations
against the table, enable or disable the table, and start and stop HBase.

Apart from downloading HBase, this procedure should take less than 10 minutes.

2.1. JDK Version Requirements


HBase requires that a JDK be installed. See Java for information about supported JDK versions.

2.2. Get Started with HBase


Procedure: Download, Configure, and Start HBase in Standalone Mode
1. Choose a download site from this list of Apache Download Mirrors. Click on the suggested top
link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then
download the binary file that ends in .tar.gz to your local filesystem. Do not download the file
ending in src.tar.gz for now.

2. Extract the downloaded file, and change to the newly-created directory.

$ tar xzvf hbase-2.3.0-bin.tar.gz
$ cd hbase-2.3.0/

3. You must set the JAVA_HOME environment variable before starting HBase. To make this easier,
HBase lets you set it within the conf/hbase-env.sh file. You must locate where Java is installed on
your machine, and one way to find this is by using the whereis java command. Once you have
the location, edit the conf/hbase-env.sh file and uncomment the line starting with #export
JAVA_HOME=, and then set it to your Java installation path.

Example extract from hbase-env.sh where JAVA_HOME is set

# Set environment variables here.


# The java implementation to use.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112
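Before committing a value to conf/hbase-env.sh, it can help to sanity-check the path you plan to use. The following is an illustrative sketch, not part of HBase; the helper name is ours, and the example path is the one from the extract above:

```shell
# check_java_home DIR — succeed only if DIR looks like a usable JAVA_HOME,
# i.e. it exists and contains an executable bin/java.
check_java_home() {
  [ -n "$1" ] && [ -x "$1/bin/java" ]
}

# Example usage before writing the value into conf/hbase-env.sh:
#   check_java_home /usr/jdk64/jdk1.8.0_112 && \
#     echo 'export JAVA_HOME=/usr/jdk64/jdk1.8.0_112' >> conf/hbase-env.sh
```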

4. The bin/start-hbase.sh script is provided as a convenient way to start HBase. Issue the command,
and if all goes well, a message is logged to standard output showing that HBase started
successfully. You can use the jps command to verify that you have one running process called
HMaster. In standalone mode HBase runs all daemons within this single JVM, i.e. the HMaster, a
single HRegionServer, and the ZooKeeper daemon. Go to http://localhost:16010 to view the
HBase Web UI.

Procedure: Use HBase For the First Time

1. Connect to HBase.

Connect to your running instance of HBase using the hbase shell command, located in the bin/
directory of your HBase install. In this example, some usage and version information that is
printed when you start HBase Shell has been omitted. The HBase Shell prompt ends with a >
character.

$ ./bin/hbase shell
hbase(main):001:0>

2. Display HBase Shell Help Text.

Type help and press Enter to display some basic usage information for HBase Shell, as well as
several example commands. Notice that table names, rows, columns all must be enclosed in
quote characters.

3. Create a table.

Use the create command to create a new table. You must specify the table name and the
ColumnFamily name.

hbase(main):001:0> create 'test', 'cf'
0 row(s) in 0.4170 seconds

=> Hbase::Table - test

4. List Information About your Table

Use the list command to confirm your table exists

hbase(main):002:0> list 'test'
TABLE
test
1 row(s) in 0.0180 seconds

=> ["test"]

Now use the describe command to see details, including configuration defaults

hbase(main):003:0> describe 'test'
Table test is ENABLED
test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false',
NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE',
CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER',
MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW',
CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false',
CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false',
COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
Took 0.9998 seconds

5. Put data into your table.

To put data into your table, use the put command.

hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0850 seconds

hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0110 seconds

hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0100 seconds

Here, we insert three values, one at a time. The first insert is at row1, column cf:a, with a value
of value1. Columns in HBase are comprised of a column family prefix, cf in this example,
followed by a colon and then a column qualifier suffix, a in this case.
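The family:qualifier convention can be illustrated with a tiny helper. This sketch is ours, not part of HBase; it mimics how HBase parses a column name by splitting on the first colon (family names may not contain a colon, but qualifiers may):

```shell
# split_column 'cf:a' — print the column family and qualifier on separate
# lines. The split happens at the FIRST colon, so 'cf:a:b' has family 'cf'
# and qualifier 'a:b'.
split_column() {
  local col="$1"
  printf 'family=%s\n' "${col%%:*}"
  printf 'qualifier=%s\n' "${col#*:}"
}

split_column 'cf:a'
# family=cf
# qualifier=a
```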

6. Scan the table for all data at once.

One of the ways to get data from HBase is to scan. Use the scan command to scan the table for
data. You can limit your scan, but for now, all data is fetched.

hbase(main):006:0> scan 'test'
ROW                            COLUMN+CELL
 row1                          column=cf:a, timestamp=1421762485768, value=value1
 row2                          column=cf:b, timestamp=1421762491785, value=value2
 row3                          column=cf:c, timestamp=1421762496210, value=value3
3 row(s) in 0.0230 seconds

7. Get a single row of data.

To get a single row of data at a time, use the get command.

hbase(main):007:0> get 'test', 'row1'
COLUMN                         CELL
 cf:a                          timestamp=1421762485768, value=value1
1 row(s) in 0.0350 seconds

8. Disable a table.

If you want to delete a table or change its settings, as well as in some other situations, you need
to disable the table first, using the disable command. You can re-enable it using the enable
command.

hbase(main):008:0> disable 'test'
0 row(s) in 1.1820 seconds

hbase(main):009:0> enable 'test'
0 row(s) in 0.1770 seconds

Disable the table again if you tested the enable command above:

hbase(main):010:0> disable 'test'
0 row(s) in 1.1820 seconds

9. Drop the table.

To drop (delete) a table, use the drop command.

hbase(main):011:0> drop 'test'
0 row(s) in 0.1370 seconds

10. Exit the HBase Shell.

To exit the HBase Shell and disconnect from your cluster, use the quit command. HBase is still
running in the background.

Procedure: Stop HBase


1. In the same way that the bin/start-hbase.sh script is provided to conveniently start all HBase
daemons, the bin/stop-hbase.sh script stops them.

$ ./bin/stop-hbase.sh
stopping hbase....................
$

2. After issuing the command, it can take several minutes for the processes to shut down. Use the
jps to be sure that the HMaster and HRegionServer processes are shut down.

The above has shown you how to start and stop a standalone instance of HBase. In the next sections
we give a quick overview of other modes of HBase deployment.

2.3. Pseudo-Distributed for Local Testing


After working your way through quickstart standalone mode, you can re-configure HBase to run in
pseudo-distributed mode. Pseudo-distributed mode means that HBase still runs completely on a
single host, but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate
process: in standalone mode all daemons ran in one JVM process/instance. By default, unless you
configure the hbase.rootdir property as described in quickstart, your data is still stored in /tmp/. In
this walk-through, we store your data in HDFS instead, assuming you have HDFS available. You can
skip the HDFS configuration to continue storing your data in the local filesystem.

Hadoop Configuration
This procedure assumes that you have configured Hadoop and HDFS on your local
system and/or a remote system, and that they are running and available. It also
assumes you are using Hadoop 2. The guide on Setting up a Single Node Cluster in
the Hadoop documentation is a good starting point.

1. Stop HBase if it is running.

If you have just finished quickstart and HBase is still running, stop it. This procedure will create
a totally new directory where HBase will store its data, so any databases you created before will
be lost.

2. Configure HBase.

Edit the hbase-site.xml configuration. First, add the following property which directs HBase to
run in distributed mode, with one JVM instance per daemon.

<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>

Next, add a configuration for hbase.rootdir, pointing to the address of your HDFS instance,
using the hdfs:// URI syntax. In this example, HDFS is running on the localhost at port 8020.

<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>

You do not need to create the directory in HDFS. HBase will do this for you. If you create the
directory, HBase will attempt to do a migration, which is not what you want.

Finally, remove any existing configuration for hbase.tmp.dir and
hbase.unsafe.stream.capability.enforce.
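Taken together, and assuming the stock conf/hbase-site.xml skeleton (the <configuration> wrapper element is part of the stock file, not shown in the fragments above), the file for this walk-through would look something like this:

```xml
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
</configuration>
```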

3. Start HBase.

Use the bin/start-hbase.sh command to start HBase. If your system is configured correctly, the
jps command should show the HMaster and HRegionServer processes running.

4. Check the HBase directory in HDFS.

If everything worked correctly, HBase created its directory in HDFS. In the configuration above,
it is stored in /hbase/ on HDFS. You can use the hadoop fs command in Hadoop’s bin/ directory to
list this directory.

$ ./bin/hadoop fs -ls /hbase
Found 7 items
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/.tmp
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/WALs
drwxr-xr-x - hbase users 0 2014-06-25 18:48 /hbase/corrupt
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/data
-rw-r--r-- 3 hbase users 42 2014-06-25 18:41 /hbase/hbase.id
-rw-r--r-- 3 hbase users 7 2014-06-25 18:41 /hbase/hbase.version
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/oldWALs

5. Create a table and populate it with data.

You can use the HBase Shell to create a table, populate it with data, scan and get values from it,
using the same procedure as in shell exercises.

6. Start and stop a backup HBase Master (HMaster) server.

Running multiple HMaster instances on the same hardware does not make
sense in a production environment, in the same way that running a pseudo-distributed
cluster does not make sense for production. This step is offered for
testing and learning purposes only.

The HMaster server controls the HBase cluster. You can start up to 9 backup HMaster servers,
which makes 10 total HMasters, counting the primary. To start a backup HMaster, use the
local-master-backup.sh script. For each backup master you want to start, add a parameter
representing the port offset for that master. Each HMaster uses two ports (16000 and 16010
by default). The port offset is added to these ports, so using an offset of 2, the backup
HMaster would use ports 16002 and 16012. The following command starts 3 backup servers
using ports 16002/16012, 16003/16013, and 16005/16015.

$ ./bin/local-master-backup.sh start 2 3 5

To kill a backup master without killing the entire cluster, you need to find its process ID (PID).
The PID is stored in a file with a name like /tmp/hbase-USER-X-master.pid. The only contents of
the file is the PID. You can use the kill -9 command to kill that PID. The following command
will kill the master with port offset 1, but leave the cluster running:

$ cat /tmp/hbase-testuser-1-master.pid |xargs kill -9
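The offset-to-port mapping described above can be sketched as a small helper. The function name is ours, not part of HBase; the default base ports 16000 and 16010 are the ones stated in this section:

```shell
# master_ports OFFSET — print the RPC and info ports a backup HMaster
# started with this offset will use (defaults 16000 and 16010, plus offset).
master_ports() {
  local offset="$1"
  echo "$((16000 + offset)) $((16010 + offset))"
}

for off in 2 3 5; do
  echo "offset $off -> ports $(master_ports "$off")"
done
# offset 2 -> ports 16002 16012
# offset 3 -> ports 16003 16013
# offset 5 -> ports 16005 16015
```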

7. Start and stop additional RegionServers

The HRegionServer manages the data in its StoreFiles as directed by the HMaster. Generally, one
HRegionServer runs per node in the cluster. Running multiple HRegionServers on the same
system can be useful for testing in pseudo-distributed mode. The local-regionservers.sh
command allows you to run multiple RegionServers. It works in a similar way to the
local-master-backup.sh command, in that each parameter you provide represents the port offset
for an instance. Each RegionServer requires two ports, and the default ports are 16020 and 16030.
Since HBase version 1.1.0, HMaster doesn’t use region server ports; this leaves 10 ports (16020 to
16029 and 16030 to 16039) to be used for RegionServers. To support additional RegionServers,
set the environment variables HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT to
appropriate values before running the local-regionservers.sh script. For example, with values
16200 and 16300 for the base ports, 99 additional RegionServers can be supported on a server.
The following command starts four additional RegionServers,
running on sequential ports starting at 16022/16032 (base ports 16020/16030 plus 2).

$ ./bin/local-regionservers.sh start 2 3 4 5

To stop a RegionServer manually, use the local-regionservers.sh command with the stop
parameter and the offset of the server to stop.

$ ./bin/local-regionservers.sh stop 3

8. Stop HBase.

You can stop HBase the same way as in the quickstart procedure, using the bin/stop-hbase.sh
command.

2.4. Fully Distributed for Production


In reality, you need a fully-distributed configuration to fully test HBase and to use it in real-world

scenarios. In a distributed configuration, the cluster contains multiple nodes, each of which runs
one or more HBase daemon. These include primary and backup Master instances, multiple
ZooKeeper nodes, and multiple RegionServer nodes.

This advanced quickstart adds two more nodes to your cluster. The architecture will be as follows:

Table 1. Distributed Cluster Demo Architecture

Node Name Master ZooKeeper RegionServer

node-a.example.com yes yes no

node-b.example.com backup yes yes

node-c.example.com no yes yes

This quickstart assumes that each node is a virtual machine and that they are all on the same
network. It builds upon the previous quickstart, Pseudo-Distributed for Local Testing, assuming that
the system you configured in that procedure is now node-a. Stop HBase on node-a before continuing.

Be sure that all the nodes have full access to communicate, and that no firewall
rules are in place which could prevent them from talking to each other. If you see
any errors like no route to host, check your firewall.

Procedure: Configure Passwordless SSH Access


node-a needs to be able to log into node-b and node-c (and to itself) in order to start the daemons.
The easiest way to accomplish this is to use the same username on all hosts, and configure
password-less SSH login from node-a to each of the others.

1. On node-a, generate a key pair.

While logged in as the user who will run HBase, generate a SSH key pair, using the following
command:

$ ssh-keygen -t rsa

If the command succeeds, the location of the key pair is printed to standard output. The default
name of the public key is id_rsa.pub.

2. Create the directory that will hold the shared keys on the other nodes.

On node-b and node-c, log in as the HBase user and create a .ssh/ directory in the user’s home
directory, if it does not already exist. If it already exists, be aware that it may already contain
other keys.

3. Copy the public key to the other nodes.

Securely copy the public key from node-a to each of the nodes, by using scp or some other
secure means. On each of the other nodes, create a new file called .ssh/authorized_keys if it does
not already exist, and append the contents of the id_rsa.pub file to the end of it. Note that you
also need to do this for node-a itself.

$ cat id_rsa.pub >> ~/.ssh/authorized_keys

4. Test password-less login.

If you performed the procedure correctly, you should not be prompted for a password when
you SSH from node-a to either of the other nodes using the same username.

5. Since node-b will run a backup Master, repeat the procedure above, substituting node-b
everywhere you see node-a. Be sure not to overwrite your existing .ssh/authorized_keys files, but
concatenate the new key onto the existing file using the >> operator rather than the > operator.
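The difference between the >> and > operators can be demonstrated safely on throwaway files rather than your real keys. This sketch is ours, not part of the procedure, and the key strings are placeholders:

```shell
# demo_append shows why '>>' matters: append a second key and verify the
# first one survives. A single '>' in its place would clobber the file.
demo_append() {
  local d
  d="$(mktemp -d)"
  echo 'ssh-rsa AAAA... key-from-node-a' >  "$d/authorized_keys"
  echo 'ssh-rsa BBBB... key-from-node-b' >> "$d/authorized_keys"  # append, not overwrite
  wc -l < "$d/authorized_keys" | tr -d ' '
  rm -rf "$d"
}

demo_append   # prints 2: both keys are present
```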

Procedure: Prepare node-a


node-a will run your primary master and ZooKeeper processes, but no RegionServers. Stop the
RegionServer from starting on node-a.

1. Edit conf/regionservers and remove the line which contains localhost. Add lines with the
hostnames or IP addresses for node-b and node-c.

Even if you did want to run a RegionServer on node-a, you should refer to it by the hostname the
other servers would use to communicate with it. In this case, that would be node-a.example.com.
This enables you to distribute the configuration to each node of your cluster without any
hostname conflicts. Save the file.

2. Configure HBase to use node-b as a backup master.

Create a new file in conf/ called backup-masters, and add a new line to it with the hostname for
node-b. In this demonstration, the hostname is node-b.example.com.
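For the demo architecture, both files contain bare hostnames, one per line (each line is treated as a host, so do not add comments). As a sketch, one way to create them from the shell, assuming you are in the HBase install directory:

```shell
# Populate the two cluster-membership files for the demo architecture.
# The hostnames are the example names used in this walk-through.
conf="${HBASE_CONF_DIR:-conf}"
mkdir -p "$conf"   # the directory already exists in a real HBase install

printf '%s\n' node-b.example.com node-c.example.com > "$conf/regionservers"
printf '%s\n' node-b.example.com                    > "$conf/backup-masters"
```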

3. Configure ZooKeeper

In reality, you should carefully consider your ZooKeeper configuration. You can find out more
about configuring ZooKeeper in zookeeper section. This configuration will direct HBase to start
and manage a ZooKeeper instance on each node of the cluster.

On node-a, edit conf/hbase-site.xml and add the following properties.

<property>
<name>hbase.zookeeper.quorum</name>
<value>node-a.example.com,node-b.example.com,node-c.example.com</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper</value>
</property>

4. Everywhere in your configuration that you have referred to node-a as localhost, change the
reference to point to the hostname that the other nodes will use to refer to node-a. In these
examples, the hostname is node-a.example.com.

Procedure: Prepare node-b and node-c
node-b will run a backup master server and a ZooKeeper instance.

1. Download and unpack HBase.

Download and unpack HBase to node-b, just as you did for the standalone and pseudo-
distributed quickstarts.

2. Copy the configuration files from node-a to node-b and node-c.

Each node of your cluster needs to have the same configuration information. Copy the contents
of the conf/ directory to the conf/ directory on node-b and node-c.

Procedure: Start and Test Your Cluster


1. Be sure HBase is not running on any node.

If you forgot to stop HBase from previous testing, you will have errors. Check to see whether
HBase is running on any of your nodes by using the jps command. Look for the processes
HMaster, HRegionServer, and HQuorumPeer. If they exist, kill them.

2. Start the cluster.

On node-a, issue the start-hbase.sh command. Your output will be similar to that below.

$ bin/start-hbase.sh
node-c.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-c.example.com.out
node-a.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-a.example.com.out
node-b.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-b.example.com.out
starting master, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-master-node-a.example.com.out
node-c.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-regionserver-node-c.example.com.out
node-b.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-regionserver-node-b.example.com.out
node-b.example.com: starting master, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-master-node-b.example.com.out

ZooKeeper starts first, followed by the master, then the RegionServers, and finally the backup
masters.

3. Verify that the processes are running.

On each node of the cluster, run the jps command and verify that the correct processes are
running on each server. You may see additional Java processes running on your servers as well,
if they are used for other purposes.

node-a jps Output

$ jps
20355 Jps
20071 HQuorumPeer
20137 HMaster

node-b jps Output

$ jps
15930 HRegionServer
16194 Jps
15838 HQuorumPeer
16010 HMaster

node-c jps Output

$ jps
13901 Jps
13639 HQuorumPeer
13737 HRegionServer

ZooKeeper Process Name


The HQuorumPeer process is a ZooKeeper instance which is controlled and
started by HBase. If you use ZooKeeper this way, it is limited to one instance
per cluster node and is appropriate for testing only. If ZooKeeper is run outside
of HBase, the process is called QuorumPeer. For more about ZooKeeper
configuration, including using an external ZooKeeper instance with HBase, see the
zookeeper section.

4. Browse to the Web UI.

Web UI Port Changes


In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed
 from 60010 for the Master and 60030 for each RegionServer to 16010 for the
Master and 16030 for the RegionServer.

If everything is set up correctly, you should be able to connect to the UI for the Master
http://node-a.example.com:16010/ or the secondary master at http://node-b.example.com:16010/
using a web browser. If you can connect via localhost but not from another host, check your
firewall rules. You can see the web UI for each of the RegionServers at port 16030 of their IP
addresses, or by clicking their links in the web UI for the Master.

5. Test what happens when nodes or services disappear.

With the three-node cluster you have configured, things will not be very resilient. You can still test
the behavior of the primary Master or a RegionServer by killing the associated processes and
watching the logs.

2.5. Where to go next


The next chapter, configuration, gives more information about the different HBase run modes,
system requirements for running HBase, and critical configuration areas for setting up a
distributed HBase cluster.

Apache HBase Configuration
This chapter expands upon the Getting Started chapter to further explain configuration of Apache
HBase. Please read this chapter carefully, especially the Basic Prerequisites to ensure that your
HBase testing and deployment goes smoothly. Familiarize yourself with Support and Testing
Expectations as well.

Chapter 3. Configuration Files
Apache HBase uses the same configuration system as Apache Hadoop. All configuration files are
located in the conf/ directory, which needs to be kept in sync for each node on your cluster.

HBase Configuration File Descriptions


backup-masters
Not present by default. A plain-text file which lists hosts on which the Master should start a
backup Master process, one host per line.

hadoop-metrics2-hbase.properties
Used to connect HBase to Hadoop’s Metrics2 framework. See the Hadoop Wiki entry for more
information on Metrics2. Contains only commented-out examples by default.

hbase-env.cmd and hbase-env.sh


Script for Windows and Linux / Unix environments to set up the working environment for
HBase, including the location of Java, Java options, and other environment variables. The file
contains many commented-out examples to provide guidance.

hbase-policy.xml
The default policy configuration file used by RPC servers to make authorization decisions on
client requests. Only used if HBase security is enabled.

hbase-site.xml
The main HBase configuration file. This file specifies configuration options which override
HBase’s default configuration. You can view (but do not edit) the default configuration file at
docs/hbase-default.xml. You can also view the entire effective configuration for your cluster
(defaults and overrides) in the HBase Configuration tab of the HBase Web UI.

log4j.properties
Configuration file for HBase logging via log4j.

regionservers
A plain-text file containing a list of hosts which should run a RegionServer in your HBase cluster.
By default, this file contains the single entry localhost. It should contain a list of hostnames or IP
addresses, one per line, and should only contain localhost if each node in your cluster will run a
RegionServer on its localhost interface.

Checking XML Validity


When you edit XML, it is a good idea to use an XML-aware editor to be sure that
your syntax is correct and your XML is well-formed. You can also use the xmllint
utility to check that your XML is well-formed. By default, xmllint re-flows and
prints the XML to standard output. To check for well-formedness and only print
output if errors exist, use the command xmllint --noout filename.xml.
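For example (the file path and contents here are only an illustration):

```shell
# Write a small well-formed config file, then check it with xmllint.
# --noout suppresses xmllint's normal re-printing, so nothing is shown
# unless there are errors.
cat > /tmp/hbase-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
if command -v xmllint >/dev/null 2>&1; then
  xmllint --noout /tmp/hbase-site-check.xml && echo "well-formed"
else
  echo "xmllint not installed"
fi
```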

Keep Configuration In Sync Across the Cluster
When running in distributed mode, after you make an edit to an HBase
configuration, make sure you copy the contents of the conf/ directory to all nodes
 of the cluster. HBase will not do this for you. Use a configuration management tool
for managing and copying the configuration files to your nodes. For most
configurations, a restart is needed for servers to pick up changes. Dynamic
configuration is an exception to this, to be described later below.
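One common approach is to drive the copy from the host lists HBase already keeps in conf/. The sketch below is illustrative only: the hbuser account, the target path, and the dry-run convention are assumptions, not HBase features.

```shell
# Build the node list from conf/regionservers and conf/backup-masters and
# push the conf/ directory to each node. With dry_run=true the commands
# are only printed, so they can be reviewed before running for real.
sync_conf() {
  local conf_dir="$1" dry_run="${2:-true}" host
  for host in $(cat "$conf_dir/regionservers" "$conf_dir/backup-masters" 2>/dev/null | sort -u); do
    if [ "$dry_run" = true ]; then
      echo "rsync -a $conf_dir/ hbuser@$host:hbase/conf/"
    else
      rsync -a "$conf_dir/" "hbuser@$host:hbase/conf/"
    fi
  done
}

# Preview what would be copied for the example cluster:
# sync_conf /home/hbuser/hbase/conf true
```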

Chapter 4. Basic Prerequisites
This section lists required services and some required system configuration.

Java
HBase runs on the Java Virtual Machine, thus all HBase deployments require a JVM runtime.

The following table summarizes the recommendations of the HBase community with respect to
running on various Java versions. The ✓ symbol indicates a base level of testing and willingness to
help diagnose and address issues you might run into; these are the expected deployment
combinations. An entry of ⚠ means that there may be challenges with this combination, and you
should look for more information before deciding to pursue this as your deployment strategy. The
✗ means this combination does not work; either an older Java version is considered deprecated by
the HBase community, or this combination is known to not work. For combinations of newer JDK
with older HBase releases, it’s likely there are known compatibility issues that cannot be addressed
under our compatibility guarantees, making the combination impossible. In some cases, specific
guidance on limitations (e.g. whether compiling / unit tests work, specific operational issues, etc)
is also noted. Assume any combination not listed here is considered ✗.

Long-Term Support JDKs are Recommended


HBase recommends downstream users rely only on JDK releases that are marked
as Long-Term Supported (LTS), either from the OpenJDK project or vendors. At the
 time of this writing, the following JDK releases are NOT LTS releases and are NOT
tested or advocated for use by the Apache HBase community: JDK9, JDK10, JDK12,
JDK13, and JDK14. Community discussion around this decision is recorded on
HBASE-20264.

HotSpot vs. OpenJ9


At this time, all testing performed by the Apache HBase project runs on the
 HotSpot variant of the JVM. When selecting your JDK distribution, please take this
into consideration.

Table 2. Java support by release line

Java Version    HBase 1.3+    HBase 2.1+    HBase 2.3+

JDK7            ✓             ✗             ✗

JDK8            ✓             ✓             ✓

JDK11           ✗             ✗             ⚠*

A Note on JDK11 *
Preliminary support for JDK11 is introduced with HBase 2.3.0. This support is
limited to compilation and running the full test suite. There are open questions
 regarding the runtime compatibility of JDK11 with Apache ZooKeeper and Apache
Hadoop (HADOOP-15338). Significantly, neither project has yet released a version
with explicit runtime support for JDK11. The remaining known issues in HBase are
catalogued in HBASE-22972.

You must set JAVA_HOME on each node of your cluster. hbase-env.sh provides a handy
 mechanism to do this.
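In hbase-env.sh, that typically amounts to a single uncommented line like the following; the JDK path is an example, so point it at your own install.

```shell
# Example conf/hbase-env.sh entry; adjust the path for your JDK install.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```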

Operating System Utilities


ssh
HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between
cluster nodes. Each server in the cluster must be running ssh so that the Hadoop and HBase
daemons can be managed. You must be able to connect to all nodes via SSH, including the local
node, from the Master as well as any backup Master, using a shared key rather than a password.
You can see the basic methodology for such a set-up in Linux or Unix systems at "Procedure:
Configure Passwordless SSH Access". If your cluster nodes use OS X, see the section, SSH: Setting
up Remote Desktop and Enabling Self-Login on the Hadoop wiki.

DNS
HBase uses the local hostname to self-report its IP address.

NTP
The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable,
but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is
one of the first things to check if you see unexplained problems in your cluster. It is
recommended that you run a Network Time Protocol (NTP) service, or another time-
synchronization mechanism on your cluster and that all nodes look to the same service for time
synchronization. See the Basic NTP Configuration at The Linux Documentation Project (TLDP) to
set up NTP.

Limits on Number of Files and Processes (ulimit)


Apache HBase is a database. It requires the ability to open a large number of files at once. Many
Linux distributions limit the number of files a single user is allowed to open to 1024 (or 256 on
older versions of OS X). You can check this limit on your servers by running the command ulimit
-n when logged in as the user which runs HBase. See the Troubleshooting section for some of the
problems you may experience if the limit is too low. You may also notice errors such as the
following:

2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception


increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block
blk_-6935524980745310745_1391901

It is recommended to raise the ulimit to at least 10,000, but more likely 10,240, because the value
is usually expressed in multiples of 1024. Each ColumnFamily has at least one StoreFile, and
possibly more than six StoreFiles if the region is under load. The number of open files required
depends upon the number of ColumnFamilies and the number of regions. The following is a
rough formula for calculating the potential number of open files on a RegionServer.

Calculate the Potential Number of Open Files

(StoreFiles per ColumnFamily) x (ColumnFamilies per region) x (regions per RegionServer)

For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3
StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 *
3 * 100 = 900 file descriptors, not counting open JAR files, configuration files, and others.
Opening a file does not take many resources, and the risk of allowing a user to open too many
files is minimal.
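Plugging the numbers from that example into the formula, as a quick shell sketch:

```shell
# Worked example of the open-file estimate: 3 ColumnFamilies per region,
# 3 StoreFiles per ColumnFamily, 100 regions per RegionServer.
storefiles_per_cf=3
cfs_per_region=3
regions_per_rs=100
open_files=$(( storefiles_per_cf * cfs_per_region * regions_per_rs ))
echo "$open_files"   # 900, before counting JARs, config files, and sockets
```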

Another related setting is the number of processes a user is allowed to run at once. In Linux and
Unix, the number of processes is set using the ulimit -u command. This should not be confused
with the nproc command, which reports the number of processing units available to a given user.
Under load, a ulimit -u that is too low can cause OutOfMemoryError exceptions.

Configuring the maximum number of file descriptors and processes for the user who is running
the HBase process is an operating system configuration, rather than an HBase configuration. It is
also important to be sure that the settings are changed for the user that actually runs HBase. To
see which user started HBase, and that user’s ulimit configuration, look at the first line of the
HBase log for that instance.
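A quick way to inspect the limits in effect for a shell is shown below; run it as the user that starts HBase.

```shell
# Show the current per-user limits discussed above. Note that `ulimit -u`
# may print "unlimited" on some systems.
echo "open files (nofile): $(ulimit -n)"
echo "max user processes (nproc): $(ulimit -u)"
```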

Example 1. ulimit Settings on Ubuntu

To configure ulimit settings on Ubuntu, edit /etc/security/limits.conf, which is a space-delimited
file with four columns. Refer to the man page for limits.conf for details about the
format of this file. In the following example, the first line sets both soft and hard limits for
the number of open files (nofile) to 32768 for the operating system user with the username
hadoop. The second line sets the number of processes to 32000 for the same user.

hadoop - nofile 32768
hadoop - nproc 32000

The settings are only applied if the Pluggable Authentication Module (PAM) environment is
directed to use them. To configure PAM to use these limits, be sure that the
/etc/pam.d/common-session file contains the following line:

session required pam_limits.so

Linux Shell
All of the shell scripts that come with HBase rely on the GNU Bash shell.

Windows
Running production systems on Windows machines is not recommended.

4.1. Hadoop
The following table summarizes the versions of Hadoop supported with each version of HBase.
Older versions not appearing in this table are considered unsupported and likely missing necessary
features, while newer versions are untested but may be suitable.

Based on the version of HBase, you should select the most appropriate version of Hadoop. You can
use Apache Hadoop, or a vendor’s distribution of Hadoop. No distinction is made here. See the
Hadoop wiki for information about vendors of Hadoop.

Hadoop 2.x is recommended.


Hadoop 2.x is faster and includes features, such as short-circuit reads (see
Leveraging local data), which will help improve your HBase random read profile.
Hadoop 2.x also includes important bug fixes that will improve your overall HBase
 experience. HBase does not support running with earlier versions of Hadoop. See
the table below for requirements specific to different HBase versions.

Hadoop 3.x is still in early access releases and has not yet been sufficiently tested
by the HBase community for production use cases.

Use the following legend to interpret this table:

Hadoop version support matrix


• ✓ = Tested to be fully-functional

• ✗ = Known to not be fully-functional, or there are CVEs so we drop the support in newer minor
releases

• ⚠ = Not tested, may/may-not function

                  HBase-1.3.x HBase-1.4.x HBase-1.5.x HBase-2.1.x HBase-2.2.x HBase-2.3.x

Hadoop-2.4.x
Hadoop-2.5.x
Hadoop-2.6.0
Hadoop-2.6.1+
Hadoop-2.7.0
Hadoop-2.7.1+
Hadoop-2.8.[0-2]
Hadoop-2.8.[3-4]
Hadoop-2.8.5+
Hadoop-2.9.[0-1]
Hadoop-2.9.2+
Hadoop-2.10.0
Hadoop-3.0.[0-2]
Hadoop-3.0.3+
Hadoop-3.1.0
Hadoop-3.1.1+
Hadoop-3.2.x

Hadoop Pre-2.6.1 and JDK 1.8 Kerberos


When using pre-2.6.1 Hadoop versions and JDK 1.8 in a Kerberos environment,
the HBase server can fail and abort due to a Kerberos keytab relogin error. Late
versions of JDK 1.7 (1.7.0_80) have the problem too. Refer to HADOOP-10786 for
additional details. Consider upgrading to Hadoop 2.6.1+ in this case.

Hadoop 2.6.x
Hadoop distributions based on the 2.6.x line must have HADOOP-11710 applied if
 you plan to run HBase on top of an HDFS Encryption Zone. Failure to do so will
result in cluster failure and data loss. This patch is present in Apache Hadoop
releases 2.6.1+.

Hadoop 2.y.0 Releases


Starting around the time of Hadoop version 2.7.0, the Hadoop PMC got into the
habit of calling out new minor releases on their major version 2 release line as not
stable / production ready. As such, HBase expressly advises downstream users to
 avoid running on top of these releases. Note that additionally the 2.8.1 release was
given the same caveat by the Hadoop PMC. For reference, see the release
announcements for Apache Hadoop 2.7.0, Apache Hadoop 2.8.0, Apache Hadoop
2.8.1, and Apache Hadoop 2.9.0.

Hadoop 3.0.x Releases
Hadoop distributions that include the Application Timeline Service feature may
cause unexpected versions of HBase classes to be present in the application
 classpath. Users planning on running MapReduce applications with HBase should
make sure that YARN-7190 is present in their YARN service (currently fixed in
2.9.1+ and 3.1.0+).

Hadoop 3.1.0 Release


The Hadoop PMC called out the 3.1.0 release as not stable / production ready. As
 such, HBase expressly advises downstream users to avoid running on top of this
release. For reference, see the release announcement for Hadoop 3.1.0.

Replace the Hadoop Bundled With HBase!


Because HBase depends on Hadoop, it bundles Hadoop jars under its lib directory.
The bundled jars are ONLY for use in stand-alone mode. In distributed mode, it is
critical that the version of Hadoop that is out on your cluster match what is under
 HBase. Replace the hadoop jars found in the HBase lib directory with the
equivalent hadoop jars from the version you are running on your cluster to avoid
version mismatch issues. Make sure you replace the jars under HBase across your
whole cluster. Hadoop version mismatch issues have various manifestations.
Check for mismatch if HBase appears hung.

4.1.1. dfs.datanode.max.transfer.threads

An HDFS DataNode has an upper bound on the number of files that it will serve at any one time.
Before doing any loading, make sure you have configured Hadoop’s conf/hdfs-site.xml, setting the
dfs.datanode.max.transfer.threads value to at least the following:

<property>
<name>dfs.datanode.max.transfer.threads</name>
<value>4096</value>
</property>

Be sure to restart your HDFS after making the above configuration.
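On a live cluster, hdfs getconf -confKey dfs.datanode.max.transfer.threads reports the effective value. As a quick offline sanity check you can also pull the value straight out of the file; the sketch below uses an example path and a minimal file written on the spot.

```shell
# Grep the configured value back out of an hdfs-site.xml-style file to
# confirm the edit took effect (path and contents are illustrative).
cat > /tmp/hdfs-site-example.xml <<'EOF'
<property>
<name>dfs.datanode.max.transfer.threads</name>
<value>4096</value>
</property>
EOF
sed -n 's:.*<value>\(.*\)</value>.*:\1:p' /tmp/hdfs-site-example.xml
```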

Not having this configuration in place makes for strange-looking failures. One manifestation is a
complaint about missing blocks. For example:

10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block


blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No
live nodes
contain current block. Will get new block locations from namenode and
retry...

See also casestudies.max.transfer.threads and note that this property was previously known as
dfs.datanode.max.xcievers (e.g. Hadoop HDFS: Deceived by Xciever).

4.2. ZooKeeper Requirements


An Apache ZooKeeper quorum is required. The exact version depends on your version of HBase,
though the minimum ZooKeeper version is 3.4.x, because the useMulti feature was made the
default in HBase 1.0.0 (see HBASE-16598).

Chapter 5. HBase run modes: Standalone
and Distributed
HBase has two run modes: standalone and distributed. Out of the box, HBase runs in standalone
mode. Whatever your mode, you will need to configure HBase by editing files in the HBase conf
directory. At a minimum, you must edit conf/hbase-env.sh to tell HBase which java to use. In this
file you set HBase environment variables such as the heapsize and other options for the JVM, the
preferred location for log files, etc. Set JAVA_HOME to point at the root of your java install.

5.1. Standalone HBase


This is the default mode. Standalone mode is what is described in the quickstart section. In
standalone mode, HBase does not use HDFS — it uses the local filesystem instead — and it runs all
HBase daemons and a local ZooKeeper all up in the same JVM. ZooKeeper binds to a well-known
port so clients may talk to HBase.

5.1.1. Standalone HBase over HDFS

A sometimes useful variation on standalone HBase has all daemons running inside the one JVM but,
rather than persisting to the local filesystem, they persist to an HDFS instance.

You might consider this profile when you are intent on a simple deploy profile, the loading is light,
but the data must persist across node comings and goings. Writing to HDFS where data is replicated
ensures the latter.

To configure this standalone variant, edit your hbase-site.xml setting hbase.rootdir to point at a
directory in your HDFS instance but then set hbase.cluster.distributed to false. For example:

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://namenode.example.org:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>false</value>
</property>
</configuration>

5.2. Distributed
Distributed mode can be subdivided into distributed but all daemons run on a single node — a.k.a.
pseudo-distributed — and fully-distributed where the daemons are spread across all nodes in the
cluster. The pseudo-distributed vs. fully-distributed nomenclature comes from Hadoop.

Pseudo-distributed mode can run against the local filesystem or it can run against an instance of
the Hadoop Distributed File System (HDFS). Fully-distributed mode can ONLY run on HDFS. See the
Hadoop documentation for how to set up HDFS. A good walk-through for setting up HDFS on
Hadoop 2 can be found at http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-
definitive-guide.

5.2.1. Pseudo-distributed

Pseudo-Distributed Quickstart

 A quickstart has been added to the quickstart chapter. See quickstart-pseudo.


Some of the information that was originally in this section has been moved there.

A pseudo-distributed mode is simply a fully-distributed mode run on a single host. Use this HBase
configuration for testing and prototyping purposes only. Do not use this configuration for
production or for performance evaluation.

5.3. Fully-distributed
By default, HBase runs in stand-alone mode. Both stand-alone mode and pseudo-distributed mode
are provided for the purposes of small-scale testing. For a production environment, distributed
mode is advised. In distributed mode, multiple instances of HBase daemons run on multiple servers
in the cluster.

Just as in pseudo-distributed mode, a fully distributed configuration requires that you set the
hbase.cluster.distributed property to true. Typically, the hbase.rootdir is configured to point to a
highly-available HDFS filesystem.

In addition, the cluster is configured so that multiple cluster nodes enlist as RegionServers,
ZooKeeper QuorumPeers, and backup HMaster servers. These configuration basics are all
demonstrated in quickstart-fully-distributed.

Distributed RegionServers
Typically, your cluster will contain multiple RegionServers all running on different servers, as well
as primary and backup Master and ZooKeeper daemons. The conf/regionservers file on the master
server contains a list of hosts whose RegionServers are associated with this cluster. Each host is on
a separate line. All hosts listed in this file will have their RegionServer processes started and
stopped when the master server starts or stops.

ZooKeeper and HBase


See the ZooKeeper section for ZooKeeper setup instructions for HBase.

Example 2. Example Distributed HBase Cluster

This is a bare-bones conf/hbase-site.xml for a distributed HBase cluster. A cluster that is used
for real-world work would contain more custom configuration parameters. Most HBase
configuration directives have default values, which are used unless the value is overridden in
the hbase-site.xml. See "Configuration Files" for more information.

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://namenode.example.org:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>node-a.example.com,node-b.example.com,node-c.example.com</value>
</property>
</configuration>

This is an example conf/regionservers file, which contains a list of nodes that should run a
RegionServer in the cluster. These nodes need HBase installed and they need to use the same
contents of the conf/ directory as the Master server.

node-a.example.com
node-b.example.com
node-c.example.com

This is an example conf/backup-masters file, which contains a list of each node that should run
a backup Master instance. The backup Master instances will sit idle unless the main Master
becomes unavailable.

node-b.example.com
node-c.example.com

Distributed HBase Quickstart


See quickstart-fully-distributed for a walk-through of a simple three-node cluster configuration
with multiple ZooKeeper, backup HMaster, and RegionServer instances.

Procedure: HDFS Client Configuration


1. Of note, if you have made HDFS client configuration changes on your Hadoop cluster, such as
configuration directives for HDFS clients, as opposed to server-side configurations, you must
use one of the following methods to enable HBase to see and use these configuration changes:

1. Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-
env.sh.

2. Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under


${HBASE_HOME}/conf, or

3. if only a small set of HDFS client configurations, add them to hbase-site.xml.

An example of such an HDFS client configuration is dfs.replication. If for example, you want to
run with a replication factor of 5, HBase will create files with the default of 3 unless you do the
above to make the configuration available to HBase.
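For the third option, the dfs.replication example would look like this in conf/hbase-site.xml:

```xml
<property>
  <name>dfs.replication</name>
  <value>5</value>
</property>
```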

Chapter 6. Running and Confirming Your
Installation
Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by running sbin/start-dfs.sh
and sbin/stop-dfs.sh in the HADOOP_HOME directory. You can ensure it started properly by testing the
put and get of files into the Hadoop filesystem. HBase does not normally use the MapReduce or YARN
daemons. These do not need to be started.

If you are managing your own ZooKeeper, start it and confirm it’s running, else HBase will start up
ZooKeeper for you as part of its start process.

Start HBase with the following command:

bin/start-hbase.sh

Run the above from the HBASE_HOME directory.

You should now have a running HBase instance. HBase logs can be found in the logs subdirectory.
Check them out especially if HBase had trouble starting.

HBase also puts up a UI listing vital attributes. By default it’s deployed on the Master host at port
16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP
server at port 16030). If the Master is running on a host named master.example.org on the default
port, point your browser at http://master.example.org:16010 to see the web interface.

Once HBase has started, see the shell exercises section for how to create tables, add data, scan your
insertions, and finally disable and drop your tables.

To stop HBase after exiting the HBase shell enter

$ ./bin/stop-hbase.sh
stopping hbase...............

Shutdown can take a moment to complete. It can take longer if your cluster is comprised of many
machines. If you are running a distributed operation, be sure to wait until HBase has shut down
completely before stopping the Hadoop daemons.

Chapter 7. Default Configuration
7.1. hbase-site.xml and hbase-default.xml
Just as in Hadoop where you add site-specific HDFS configuration to the hdfs-site.xml file, for HBase,
site specific customizations go into the file conf/hbase-site.xml. For the list of configurable
properties, see hbase default configurations below or view the raw hbase-default.xml source file in
the HBase source code at src/main/resources.

Not all configuration options make it out to hbase-default.xml. Some configurations only
appear in source code; the only way to identify these changes is through code review.

Currently, changes here will require a cluster restart for HBase to notice the change.

7.2. HBase Default Configuration


The documentation below is generated using the default hbase configuration file, hbase-default.xml,
as source.

hbase.tmp.dir
Description
Temporary directory on the local filesystem. Change this setting to point to a location more
permanent than '/tmp', the usual resolve for java.io.tmpdir, as the '/tmp' directory is cleared on
machine restart.

Default
${java.io.tmpdir}/hbase-${user.name}

hbase.rootdir
Description
The directory shared by region servers and into which HBase persists. The URL should be 'fully-
qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase'
where the HDFS instance’s namenode is running at namenode.example.org on port 9000, set this
value to: hdfs://namenode.example.org:9000/hbase. By default, we write to whatever
${hbase.tmp.dir} is set to (usually /tmp), so change this configuration or else all data will be
lost on machine restart.

Default
${hbase.tmp.dir}/hbase

hbase.cluster.distributed
Description
The mode the cluster will be in. Possible values are false for standalone mode and true for
distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one
JVM.

Default

false

hbase.zookeeper.quorum
Description
Comma separated list of servers in the ZooKeeper ensemble (This config. should have been
named hbase.zookeeper.ensemble). For example,
"host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to
localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this
should be set to a full list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is set in
hbase-env.sh this is the list of servers which hbase will start/stop ZooKeeper on as part of cluster
start/stop. Client-side, we will take this list of ensemble members and put it together with the
hbase.zookeeper.property.clientPort config. and pass it into zookeeper constructor as the
connectString parameter.

Default
127.0.0.1
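A sketch of how those pieces combine into the ZooKeeper connectString follows. The assembly actually happens inside the HBase client; the port 2181 used here is the assumed default for hbase.zookeeper.property.clientPort.

```shell
# Combine the hbase.zookeeper.quorum ensemble list with the client port,
# the way the client builds its ZooKeeper connect string.
quorum="host1.mydomain.com,host2.mydomain.com,host3.mydomain.com"
client_port=2181
connect_string="$(echo "$quorum" | sed "s/,/:${client_port},/g"):${client_port}"
echo "$connect_string"
```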

zookeeper.recovery.retry.maxsleeptime
Description
Max sleep time before retry zookeeper operations in milliseconds, a max time is needed here so
that sleep time won’t grow unboundedly

Default
60000

hbase.local.dir
Description
Directory on the local filesystem to be used as a local storage.

Default
${hbase.tmp.dir}/local/

hbase.master.port
Description
The port the HBase Master should bind to.

Default
16000

hbase.master.info.port
Description
The port for the HBase Master web UI. Set to -1 if you do not want a UI instance run.

Default
16010

hbase.master.info.bindAddress
Description

The bind address for the HBase Master web UI

Default
0.0.0.0

hbase.master.logcleaner.plugins
Description
A comma-separated list of BaseLogCleanerDelegate invoked by the LogsCleaner service. These
WAL cleaners are called in order, so put the cleaner that prunes the most files in front. To
implement your own BaseLogCleanerDelegate, just put it in HBase’s classpath and add the fully
qualified class name here. Always add the above default log cleaners in the list.

Default
org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.c
leaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMaster
LocalStoreWALCleaner

hbase.master.logcleaner.ttl
Description
How long a WAL remain in the archive ({hbase.rootdir}/oldWALs) directory, after which it will
be cleaned by a Master thread. The value is in milliseconds.

Default
600000

hbase.master.hfilecleaner.plugins
Description
A comma-separated list of BaseHFileCleanerDelegate invoked by the HFileCleaner service. These
HFiles cleaners are called in order, so put the cleaner that prunes the most files in front. To
implement your own BaseHFileCleanerDelegate, just put it in HBase’s classpath and add the fully
qualified class name here. Always add the above default hfile cleaners in the list as they will be
overwritten in hbase-site.xml.

Default
org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master
.cleaner.TimeToLiveMasterLocalStoreHFileCleaner

hbase.master.infoserver.redirect
Description
Whether or not the Master listens to the Master web UI port (hbase.master.info.port) and
redirects requests to the web UI server shared by the Master and RegionServer. Config. makes
sense when Master is serving Regions (not the default).

Default
true

hbase.master.fileSplitTimeout
Description
When splitting a region, how long to wait on the file-splitting step before aborting the attempt.
Default: 600000. This setting used to be known as hbase.regionserver.fileSplitTimeout in hbase-1.x.
The split is now run master-side, hence the rename (if an old 'hbase.regionserver.fileSplitTimeout'
setting is found, it will be used to prime the current 'hbase.master.fileSplitTimeout' configuration).

Default
600000

hbase.regionserver.port
Description
The port the HBase RegionServer binds to.

Default
16020

hbase.regionserver.info.port
Description
The port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to
run.

Default
16030

hbase.regionserver.info.bindAddress
Description
The address for the HBase RegionServer web UI.

Default
0.0.0.0

hbase.regionserver.info.port.auto
Description
Whether or not the Master or RegionServer UI should search for a port to bind to. Enables
automatic port search if hbase.regionserver.info.port is already in use. Useful for testing, turned
off by default.

Default
false

hbase.regionserver.handler.count
Description
Count of RPC Listener instances spun up on RegionServers. Same property is used by the Master
for count of master handlers. Too many handlers can be counter-productive. Make it a multiple
of CPU count. If mostly read-only, handlers count close to cpu count does well. Start with twice
the CPU count and tune from there.

Default
30

hbase.ipc.server.callqueue.handler.factor
Description
Factor to determine the number of call queues. A value of 0 means a single queue shared
between all the handlers. A value of 1 means that each handler has its own queue.

Default
0.1

hbase.ipc.server.callqueue.read.ratio
Description
Split the call queues into read and write queues. The specified interval (which should be
between 0.0 and 1.0) will be multiplied by the number of call queues. A value of 0 indicates not
to split the call queues, meaning that both read and write requests will be pushed to the same set
of queues. A value lower than 0.5 means that there will be fewer read queues than write queues.
A value of 0.5 means there will be the same number of read and write queues. A value greater
than 0.5 means that there will be more read queues than write queues. A value of 1.0 means that
all the queues except one are used to dispatch read requests. Example: given a total of 10 call
queues, a read.ratio of 0 means that the 10 queues will contain both read and write requests; a
read.ratio of 0.3 means that 3 queues will contain only read requests and 7 queues will contain
only write requests; a read.ratio of 0.5 means that 5 queues will contain only read requests and
5 queues will contain only write requests; a read.ratio of 0.8 means that 8 queues will contain
only read requests and 2 queues will contain only write requests; a read.ratio of 1 means that 9
queues will contain only read requests and 1 queue will contain only write requests.

Default
0

hbase.ipc.server.callqueue.scan.ratio
Description
Given the number of read call queues, calculated from the total number of call queues
multiplied by the callqueue.read.ratio, the scan.ratio property will split the read call queues into
small-read and long-read queues. A value lower than 0.5 means that there will be fewer
long-read queues than short-read queues. A value of 0.5 means that there will be the same
number of short-read and long-read queues. A value greater than 0.5 means that there will be
more long-read queues than short-read queues. A value of 0 or 1 indicates to use the same set of
queues for gets and scans. Example: given a total of 8 read call queues, a scan.ratio of 0 or 1
means that the 8 queues will contain both long and short read requests; a scan.ratio of 0.3 means
that 2 queues will contain only long-read requests and 6 queues will contain only short-read
requests; a scan.ratio of 0.5 means that 4 queues will contain only long-read requests and 4
queues will contain only short-read requests; a scan.ratio of 0.8 means that 6 queues will contain
only long-read requests and 2 queues will contain only short-read requests.

Default
0
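
As an illustration of how the three call-queue properties combine, the following hypothetical
hbase-site.xml settings (all values are illustrative, not recommendations) split the queues using
the arithmetic described above, assuming the default 30 handlers:

```xml
<!-- With 30 handlers, a factor of 1.0 gives 30 call queues. -->
<property>
  <name>hbase.ipc.server.callqueue.handler.factor</name>
  <value>1.0</value>
</property>
<!-- 0.5 of the 30 queues become read queues: 15 read, 15 write. -->
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.5</value>
</property>
<!-- 0.2 of the 15 read queues serve scans: 3 long-read, 12 short-read. -->
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <value>0.2</value>
</property>
```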

hbase.regionserver.msginterval
Description

Interval between messages from the RegionServer to Master in milliseconds.

Default
3000

hbase.regionserver.logroll.period
Description
Period at which we will roll the commit log regardless of how many edits it has.

Default
3600000

hbase.regionserver.logroll.errors.tolerated
Description
The number of consecutive WAL close errors we will allow before triggering a server abort. A
setting of 0 will cause the region server to abort if closing the current WAL writer fails during
log rolling. Even a small value (2 or 3) will allow a region server to ride over transient HDFS
errors.

Default
2

hbase.regionserver.hlog.reader.impl
Description
The WAL file reader implementation.

Default
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader

hbase.regionserver.hlog.writer.impl
Description
The WAL file writer implementation.

Default
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter

hbase.regionserver.global.memstore.size
Description
Maximum size of all memstores in a region server before new updates are blocked and flushes
are forced. Defaults to 40% of heap (0.4). Updates are blocked and flushes are forced until size of
all memstores in a region server hits hbase.regionserver.global.memstore.size.lower.limit. The
default value in this configuration has been intentionally left empty in order to honor the old
hbase.regionserver.global.memstore.upperLimit property if present.

Default
none

hbase.regionserver.global.memstore.size.lower.limit
Description
Maximum size of all memstores in a region server before flushes are forced. Defaults to 95% of
hbase.regionserver.global.memstore.size (0.95). A 100% value for this value causes the minimum
possible flushing to occur when updates are blocked due to memstore limiting. The default value
in this configuration has been intentionally left empty in order to honor the old
hbase.regionserver.global.memstore.lowerLimit property if present.

Default
none
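
A sketch of setting both global memstore bounds explicitly with the new-style property names,
rather than relying on the deprecated upperLimit/lowerLimit properties. The values shown are
simply the documented defaults:

```xml
<!-- All memstores on a RegionServer may use up to 40% of the heap. -->
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.4</value>
</property>
<!-- Forced flushes continue until usage drops below 95% of the limit above. -->
<property>
  <name>hbase.regionserver.global.memstore.size.lower.limit</name>
  <value>0.95</value>
</property>
```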

hbase.systemtables.compacting.memstore.type
Description
Determines the type of memstore to be used for system tables like META, namespace tables, etc.
By default the type is NONE and hence we use the default memstore for all system tables. If a
compacting memstore is needed for system tables, set this property to BASIC or EAGER.

Default
NONE

hbase.regionserver.optionalcacheflushinterval
Description
Maximum amount of time an edit lives in memory before being automatically flushed. Default 1
hour. Set it to 0 to disable automatic flushing.

Default
3600000

hbase.regionserver.dns.interface
Description
The name of the Network Interface from which a region server should report its IP address.

Default
default

hbase.regionserver.dns.nameserver
Description
The host name or IP address of the name server (DNS) which a region server should use to
determine the host name used by the master for communication and display purposes.

Default
default

hbase.regionserver.region.split.policy
Description
A split policy determines when a region should be split. The various other split policies that are
currently available are BusyRegionSplitPolicy, ConstantSizeRegionSplitPolicy,
DisabledRegionSplitPolicy, DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy, and
SteppingSplitPolicy. DisabledRegionSplitPolicy blocks manual region splitting.

Default
org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy

hbase.regionserver.regionSplitLimit
Description
Limit for the number of regions after which no more region splitting should take place. This is
not a hard limit for the number of regions but acts as a guideline for the regionserver to stop
splitting after a certain limit. Default is set to 1000.

Default
1000

zookeeper.session.timeout
Description
ZooKeeper session timeout in milliseconds. It is used in two different ways. First, this value is
used in the ZK client that HBase uses to connect to the ensemble. It is also used by HBase when it
starts a ZK server and it is passed as the 'maxSessionTimeout'. See
https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#ch_zkSessions. For
example, if an HBase region server
connects to a ZK ensemble that’s also managed by HBase, then the session timeout will be the
one specified by this configuration. But, a region server that connects to an ensemble managed
with a different configuration will be subject to that ensemble’s maxSessionTimeout. So, even
though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than
this and it will take precedence. The current default maxSessionTimeout that ZK ships with is 40
seconds, which is lower than HBase’s.

Default
90000

zookeeper.znode.parent
Description
Root ZNode for HBase in ZooKeeper. All of HBase’s ZooKeeper files that are configured with a
relative path will go under this node. By default, all of HBase’s ZooKeeper file paths are
configured with a relative path, so they will all go under this directory unless changed.

Default
/hbase

zookeeper.znode.acl.parent
Description
Root ZNode for access control lists.

Default
acl

hbase.zookeeper.dns.interface
Description

The name of the Network Interface from which a ZooKeeper server should report its IP address.

Default
default

hbase.zookeeper.dns.nameserver
Description
The host name or IP address of the name server (DNS) which a ZooKeeper server should use to
determine the host name used by the master for communication and display purposes.

Default
default

hbase.zookeeper.peerport
Description
Port used by ZooKeeper peers to talk to each other. See
https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
for more information.

Default
2888

hbase.zookeeper.leaderport
Description
Port used by ZooKeeper for leader election. See
https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
for more information.

Default
3888

hbase.zookeeper.property.initLimit
Description
Property from ZooKeeper’s config zoo.cfg. The number of ticks that the initial synchronization
phase can take.

Default
10

hbase.zookeeper.property.syncLimit
Description
Property from ZooKeeper’s config zoo.cfg. The number of ticks that can pass between sending a
request and getting an acknowledgment.

Default
5

hbase.zookeeper.property.dataDir
Description
Property from ZooKeeper’s config zoo.cfg. The directory where the snapshot is stored.

Default
${hbase.tmp.dir}/zookeeper

hbase.zookeeper.property.clientPort
Description
Property from ZooKeeper’s config zoo.cfg. The port at which the clients will connect.

Default
2181

hbase.zookeeper.property.maxClientCnxns
Description
Property from ZooKeeper’s config zoo.cfg. Limit on number of concurrent connections (at the
socket level) that a single client, identified by IP address, may make to a single member of the
ZooKeeper ensemble. Set high to avoid zk connection issues running standalone and pseudo-
distributed.

Default
300

hbase.client.write.buffer
Description
Default size of the BufferedMutator write buffer in bytes. A bigger buffer takes more
memory — on both the client and server side, since the server instantiates the passed write
buffer to process it — but a larger buffer size reduces the number of RPCs made. For an estimate
of server-side memory used, evaluate hbase.client.write.buffer * hbase.regionserver.handler.count.

Default
2097152
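
Working through the estimate above with the defaults: hbase.client.write.buffer ×
hbase.regionserver.handler.count = 2 MB × 30 = roughly 60 MB of potential server-side buffering
per region server. A hypothetical larger buffer of 8 MB would raise that estimate to about 240 MB:

```xml
<!-- Illustrative only: 8 MB buffer x 30 handlers ~= 240 MB server-side estimate. -->
<property>
  <name>hbase.client.write.buffer</name>
  <value>8388608</value>
</property>
```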

hbase.client.pause
Description
General client pause value. Used mostly as value to wait before running a retry of a failed get,
region lookup, etc. See hbase.client.retries.number for description of how we backoff from this
initial pause amount and how this pause works w/ retries.

Default
100

hbase.client.pause.cqtbe
Description
Whether or not to use a special client pause for CallQueueTooBigException (cqtbe). Set this
property to a higher value than hbase.client.pause if you observe frequent CQTBE from the same
RegionServer and the call queue there keeps full

Default
none

hbase.client.retries.number
Description
Maximum retries. Used as maximum for all retryable operations such as the getting of a cell’s
value, starting a row update, etc. Retry interval is a rough function based on hbase.client.pause.
At first we retry at this interval but then with backoff, we pretty quickly reach retrying every ten
seconds. See HConstants#RETRY_BACKOFF for how the backup ramps up. Change this setting
and hbase.client.pause to suit your workload.

Default
15

hbase.client.max.total.tasks
Description
The maximum number of concurrent mutation tasks a single HTable instance will send to the
cluster.

Default
100

hbase.client.max.perserver.tasks
Description
The maximum number of concurrent mutation tasks a single HTable instance will send to a
single region server.

Default
2

hbase.client.max.perregion.tasks
Description
The maximum number of concurrent mutation tasks the client will maintain to a single Region.
That is, if there is already hbase.client.max.perregion.tasks writes in progress for this region,
new puts won’t be sent to this region until some writes finishes.

Default
1

hbase.client.perserver.requests.threshold
Description
The max number of concurrent pending requests for one server in all client threads (process
level). Requests exceeding this limit will immediately receive a ServerTooBusyException, to
prevent the user’s threads from being occupied and blocked by a single slow region server. If
you use a fixed number of threads to access HBase synchronously, setting this to a suitable value
related to that number of threads will help you. See
https://issues.apache.org/jira/browse/HBASE-16388 for details.

Default
2147483647

hbase.client.scanner.caching
Description
Number of rows that we try to fetch when calling next on a scanner if it is not served from
(local, client) memory. This configuration works together with
hbase.client.scanner.max.result.size to try and use the network efficiently. The default value is
Integer.MAX_VALUE, so that the network will fill the chunk size defined by
hbase.client.scanner.max.result.size rather than be limited by a particular number of rows, since
the size of rows varies table to table. If you know ahead of time that you will not require more
than a certain number of rows from a scan, this configuration should be set to that row limit via
Scan#setCaching. Higher caching values will enable faster scanners but will eat up more
memory and some calls of next may take longer and longer times when the cache is empty. Do
not set this value such that the time between invocations is greater than the scanner timeout; i.e.
hbase.client.scanner.timeout.period

Default
2147483647

hbase.client.keyvalue.maxsize
Description
Specifies the combined maximum allowed size of a KeyValue instance. This sets an upper
boundary for a single entry saved in a storage file. Since entries cannot be split, this helps avoid
a situation where a region cannot be split any further because the data is too large. It seems wise
to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.

Default
10485760

hbase.server.keyvalue.maxsize
Description
Maximum allowed size of an individual cell, inclusive of value and all key components. A value
of 0 or less disables the check. The default value is 10MB. This is a safety setting to protect the
server from OOM situations.

Default
10485760

hbase.client.scanner.timeout.period
Description
Client scanner lease period in milliseconds.

Default
60000

hbase.client.localityCheck.threadPoolSize
Default
2

hbase.bulkload.retries.number
Description
Maximum retries. This is the maximum number of iterations atomic bulk loads are attempted in
the face of splitting operations. 0 means never give up.

Default
10

hbase.master.balancer.maxRitPercent
Description
The max percent of regions in transition when balancing. The default value is 1.0, so there is no
balancer throttling. Setting this config to 0.01 means that there will be at most 1% of regions in
transition when balancing, so the cluster’s availability is at least 99% while balancing.

Default
1.0

hbase.balancer.period
Description
Period at which the region balancer runs in the Master, in milliseconds.

Default
300000

hbase.regions.slop
Description
Rebalance if any regionserver has average + (average * slop) regions. The default value of this
parameter is 0.001 in StochasticLoadBalancer (the default load balancer), while the default is 0.2
in other load balancers (i.e., SimpleLoadBalancer).

Default
0.001

hbase.normalizer.period
Description
Period at which the region normalizer runs in the Master, in milliseconds.

Default
300000

hbase.normalizer.split.enabled
Description
Whether to split a region as part of normalization.

Default
true

hbase.normalizer.merge.enabled
Description
Whether to merge a region as part of normalization.

Default
true

hbase.normalizer.min.region.count
Description
The minimum number of regions in a table to consider it for merge normalization.

Default
3

hbase.normalizer.merge.min_region_age.days
Description
The minimum age for a region to be considered for a merge, in days.

Default
3

hbase.normalizer.merge.min_region_size.mb
Description
The minimum size for a region to be considered for a merge, in whole MBs.

Default
1

hbase.server.thread.wakefrequency
Description
Time to sleep in between searches for work (in milliseconds). Used as sleep interval by service
threads such as log roller.

Default
10000

hbase.server.versionfile.writeattempts
Description
How many times to retry attempting to write a version file before just aborting. Each attempt is
separated by the hbase.server.thread.wakefrequency milliseconds.

Default
3

hbase.hregion.memstore.flush.size
Description
Memstore will be flushed to disk if size of the memstore exceeds this number of bytes. Value is
checked by a thread that runs every hbase.server.thread.wakefrequency.

Default
134217728

hbase.hregion.percolumnfamilyflush.size.lower.bound.min
Description
If FlushLargeStoresPolicy is used and there are multiple column families, then every time that
we hit the total memstore limit, we find out all the column families whose memstores exceed a
"lower bound" and only flush them while retaining the others in memory. The "lower bound"
will be "hbase.hregion.memstore.flush.size / column_family_number" by default unless value of
this property is larger than that. If none of the families have their memstore size more than
lower bound, all the memstores will be flushed (just as usual).

Default
16777216

hbase.hregion.preclose.flush.size
Description
If the memstores in a region are this size or larger when we go to close, run a "pre-flush" to clear
out memstores before we put up the region closed flag and take the region offline. On close, a
flush is run under the close flag to empty memory. During this time the region is offline and we
are not taking on any writes. If the memstore content is large, this flush could take a long time to
complete. The preflush is meant to clean out the bulk of the memstore before putting up the
close flag and taking the region offline so the flush that runs under the close flag has little to do.

Default
5242880

hbase.hregion.memstore.block.multiplier
Description
Block updates if the memstore has hbase.hregion.memstore.block.multiplier times
hbase.hregion.memstore.flush.size bytes. Useful for preventing runaway memstore growth
during spikes in update traffic. Without an upper bound, the memstore fills such that when it
flushes, the resultant flush files take a long time to compact or split, or worse, we OOME.

Default
4
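
Working through the defaults: updates to a region are blocked once its memstores reach
hbase.hregion.memstore.block.multiplier × hbase.hregion.memstore.flush.size = 4 × 128 MB =
512 MB. A hypothetical tightening of that ceiling to 256 MB would look like:

```xml
<!-- Illustrative: 2 x 128 MB flush size = 256 MB per-region blocking ceiling. -->
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>2</value>
</property>
```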

hbase.hregion.memstore.mslab.enabled
Description
Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap
fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC
pauses on large heaps.

Default
true

hbase.hregion.memstore.mslab.chunksize
Description
The maximum byte size of a chunk in the MemStoreLAB. Unit: bytes

Default
2097152

hbase.regionserver.offheap.global.memstore.size
Description
The amount of off-heap memory all MemStores in a RegionServer may use. A value of 0 means
that no off-heap memory will be used and all chunks in MSLAB will be HeapByteBuffer,
otherwise the non-zero value means how many megabyte of off-heap memory will be used for
chunks in MSLAB and all chunks in MSLAB will be DirectByteBuffer. Unit: megabytes.

Default
0

hbase.hregion.memstore.mslab.max.allocation
Description
The maximal size of one allocation in the MemStoreLAB. If the desired byte size exceeds this
threshold, it will be allocated from the JVM heap rather than from the MemStoreLAB.

Default
262144

hbase.hregion.max.filesize
Description
Maximum HFile size. If the sum of the sizes of a region’s HFiles has grown to exceed this value,
the region is split in two.

Default
10737418240

hbase.hregion.majorcompaction
Description
Time between major compactions, expressed in milliseconds. Set to 0 to disable time-based
automatic major compactions. User-requested and size-based major compactions will still run.
This value is multiplied by hbase.hregion.majorcompaction.jitter to cause compaction to start at
a somewhat-random time during a given window of time. The default value is 7 days, expressed
in milliseconds. If major compactions are causing disruption in your environment, you can
configure them to run at off-peak times for your deployment, or disable time-based major
compactions by setting this parameter to 0, and run major compactions in a cron job or by
another external mechanism.

Default
604800000

hbase.hregion.majorcompaction.jitter
Description
A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur a given
amount of time either side of hbase.hregion.majorcompaction. The smaller the number, the
closer the compactions will happen to the hbase.hregion.majorcompaction interval.

Default
0.50

hbase.hstore.compactionThreshold
Description
If more than this number of StoreFiles exist in any one Store (one StoreFile is written per flush
of MemStore), a compaction is run to rewrite all StoreFiles into a single StoreFile. Larger values
delay compaction, but when compaction does occur, it takes longer to complete.

Default
3

hbase.regionserver.compaction.enabled
Description
Enable or disable compactions by setting this to true or false. Compactions can be further
switched dynamically with the compaction_switch shell command.

Default
true

hbase.hstore.flusher.count
Description
The number of flush threads. With fewer threads, the MemStore flushes will be queued. With
more threads, the flushes will be executed in parallel, increasing the load on HDFS, and
potentially causing more compactions.

Default
2

hbase.hstore.blockingStoreFiles
Description
If more than this number of StoreFiles exist in any one Store (one StoreFile is written per flush
of MemStore), updates are blocked for this region until a compaction is completed, or until
hbase.hstore.blockingWaitTime has been exceeded.

Default
16

hbase.hstore.blockingWaitTime
Description
The time for which a region will block updates after reaching the StoreFile limit defined by
hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop blocking
updates even if a compaction has not been completed.

Default
90000

hbase.hstore.compaction.min
Description
The minimum number of StoreFiles which must be eligible for compaction before compaction
can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with too many
tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction each time you
have two StoreFiles in a Store, and this is probably not appropriate. If you set this value too high,
all the other values will need to be adjusted accordingly. For most cases, the default value is
appropriate (an empty value here results in 3 by code logic). In previous versions of HBase, the
parameter hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.

Default
none

hbase.hstore.compaction.max
Description
The maximum number of StoreFiles which will be selected for a single minor compaction,
regardless of the number of eligible StoreFiles. Effectively, the value of
hbase.hstore.compaction.max controls the length of time it takes a single compaction to
complete. Setting it larger means that more StoreFiles are included in a compaction. For most
cases, the default value is appropriate.

Default
10

hbase.hstore.compaction.min.size
Description
A StoreFile (or a selection of StoreFiles, when using ExploringCompactionPolicy) smaller than
this size will always be eligible for minor compaction. HFiles this size or larger are evaluated by
hbase.hstore.compaction.ratio to determine if they are eligible. Because this limit represents the
"automatic include" limit for all StoreFiles smaller than this value, this value may need to be
reduced in write-heavy environments where many StoreFiles in the 1-2 MB range are being
flushed, because every StoreFile will be targeted for compaction and the resulting StoreFiles
may still be under the minimum size and require further compaction. If this parameter is
lowered, the ratio check is triggered more quickly. This addressed some issues seen in earlier
versions of HBase but changing this parameter is no longer necessary in most situations.
Default: 128 MB expressed in bytes.

Default
134217728

hbase.hstore.compaction.max.size
Description
A StoreFile (or a selection of StoreFiles, when using ExploringCompactionPolicy) larger than this
size will be excluded from compaction. The effect of raising hbase.hstore.compaction.max.size is
fewer, larger StoreFiles that do not get compacted often. If you feel that compaction is
happening too often without much benefit, you can try raising this value. Default: the value of
LONG.MAX_VALUE, expressed in bytes.

Default
9223372036854775807

hbase.hstore.compaction.ratio
Description
For minor compaction, this ratio is used to determine whether a given StoreFile which is larger
than hbase.hstore.compaction.min.size is eligible for compaction. Its effect is to limit compaction
of large StoreFiles. The value of hbase.hstore.compaction.ratio is expressed as a floating-point
decimal. A large ratio, such as 10, will produce a single giant StoreFile. Conversely, a low value,
such as .25, will produce behavior similar to the BigTable compaction algorithm, producing four
StoreFiles. A moderate value of between 1.0 and 1.4 is recommended. When tuning this value,
you are balancing write costs with read costs. Raising the value (to something like 1.4) will have
more write costs, because you will compact larger StoreFiles. However, during reads, HBase will
need to seek through fewer StoreFiles to accomplish the read. Consider this approach if you
cannot take advantage of Bloom filters. Otherwise, you can lower this value to something like 1.0
to reduce the background cost of writes, and use Bloom filters to control the number of
StoreFiles touched during reads. For most cases, the default value is appropriate.

Default
1.2F

hbase.hstore.compaction.ratio.offpeak
Description
Allows you to set a different (by default, more aggressive) ratio for determining whether larger
StoreFiles are included in compactions during off-peak hours. Works in the same way as
hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
hbase.offpeak.end.hour are also enabled.

Default
5.0F

hbase.hstore.time.to.purge.deletes
Description
The amount of time to delay purging of delete markers with future timestamps. If unset, or set to
0, all delete markers, including those with future timestamps, are purged during the next major
compaction. Otherwise, a delete marker is kept until the major compaction which occurs after
the marker’s timestamp plus the value of this setting, in milliseconds.

Default
0

hbase.offpeak.start.hour
Description
The start of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to
disable off-peak.

Default
-1

hbase.offpeak.end.hour
Description
The end of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to
disable off-peak.

Default
-1
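
Both hours must be set for off-peak to take effect. For example, a hypothetical off-peak window
from midnight to 6 a.m. (the hours shown are illustrative), which also activates the more
aggressive hbase.hstore.compaction.ratio.offpeak described earlier:

```xml
<property>
  <name>hbase.offpeak.start.hour</name>
  <value>0</value>
</property>
<property>
  <name>hbase.offpeak.end.hour</name>
  <value>6</value>
</property>
```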

hbase.regionserver.thread.compaction.throttle
Description
There are two different thread pools for compactions, one for large compactions and the other
for small compactions. This helps to keep compaction of lean tables (such as hbase:meta) fast. If
a compaction is larger than this threshold, it goes into the large compaction pool. In most cases,
the default value is appropriate. Default: 2 x hbase.hstore.compaction.max x
hbase.hregion.memstore.flush.size (which defaults to 128MB). The value field assumes that the
value of hbase.hregion.memstore.flush.size is unchanged from the default.

Default
2684354560
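
The default above can be reproduced from its formula: 2 × hbase.hstore.compaction.max (10) ×
hbase.hregion.memstore.flush.size (134217728 bytes) = 2684354560 bytes. If you change the flush
size, you may want to recompute this threshold accordingly; e.g. with a hypothetical 256 MB
flush size:

```xml
<!-- 2 x 10 x 268435456 bytes = 5368709120 bytes (5 GB). -->
<property>
  <name>hbase.regionserver.thread.compaction.throttle</name>
  <value>5368709120</value>
</property>
```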

hbase.regionserver.majorcompaction.pagecache.drop
Description
Specifies whether to drop pages read/written into the system page cache by major compactions.
Setting it to true helps prevent major compactions from polluting the page cache, which is
almost always required, especially for clusters with low/moderate memory to storage ratio.

Default
true

hbase.regionserver.minorcompaction.pagecache.drop
Description
Specifies whether to drop pages read/written into the system page cache by minor compactions.
Setting it to true helps prevent minor compactions from polluting the page cache, which is most
beneficial on clusters with low memory to storage ratio or very write heavy clusters. You may
want to set it to false under moderate to low write workload when bulk of the reads are on the
most recently written data.

Default
true

hbase.hstore.compaction.kv.max
Description
The maximum number of KeyValues to read and then write in a batch when flushing or
compacting. Set this lower if you have big KeyValues and problems with OutOfMemory
exceptions. Set this higher if you have wide, small rows.

Default
10

hbase.storescanner.parallel.seek.enable
Description
Enables StoreFileScanner parallel-seeking in StoreScanner, a feature which can reduce response
latency under special conditions.

Default
false

hbase.storescanner.parallel.seek.threads
Description
The default thread pool size if parallel-seeking feature enabled.

Default
10

hfile.block.cache.policy
Description
The eviction policy for the L1 block cache (LRU or TinyLFU).

Default
LRU

hfile.block.cache.size
Description
Percentage of maximum heap (-Xmx setting) to allocate to block cache used by a StoreFile.
Default of 0.4 means allocate 40%. Set to 0 to disable but it’s not recommended; you need at least
enough cache to hold the storefile indices.

Default
0.4

hfile.block.index.cacheonwrite
Description
This allows non-root multi-level index blocks to be put into the block cache at the time the
index is being written.

Default
false

hfile.index.block.max.size
Description
When the size of a leaf-level, intermediate-level, or root-level index block in a multi-level block
index grows to this size, the block is written out and a new block is started.

Default
131072

hbase.bucketcache.ioengine
Description
Where to store the contents of the bucketcache. One of: offheap, file, files, mmap or pmem. If a
file or files, set it to file(s):PATH_TO_FILE. mmap means the content will be in an mmaped file.
Use mmap:PATH_TO_FILE. 'pmem' is bucket cache over a file on the persistent memory device.
Use pmem:PATH_TO_FILE. See http://hbase.apache.org/book.html#offheap.blockcache for more
information.

Default
none

hbase.hstore.compaction.throughput.lower.bound
Description
The target lower bound on aggregate compaction throughput, in bytes/sec. Allows you to tune
the minimum available compaction throughput when the
PressureAwareCompactionThroughputController throughput controller is active. (It is active by
default.)

Default
52428800

hbase.hstore.compaction.throughput.higher.bound
Description
The target upper bound on aggregate compaction throughput, in bytes/sec. Allows you to control
aggregate compaction throughput demand when the
PressureAwareCompactionThroughputController throughput controller is active. (It is active by
default.) The maximum throughput will be tuned between the lower and upper bounds when
compaction pressure is within the range [0.0, 1.0]. If compaction pressure is 1.0 or greater the
higher bound will be ignored until pressure returns to the normal range.

Default
104857600

hbase.bucketcache.size
Description
A float that EITHER represents a percentage of total heap memory size to give to the cache (if <
1.0) OR, it is the total capacity in megabytes of BucketCache. Default: 0.0

Default

none

hbase.bucketcache.bucket.sizes
Description
A comma-separated list of sizes for buckets for the bucketcache. Can be multiple sizes. List block
sizes in order from smallest to largest. The sizes you use will depend on your data access
patterns. Must be a multiple of 256 else you will run into 'java.io.IOException: Invalid HFile
block magic' when you go to read from cache. If you specify no values here, then you pick up the
default bucketsizes set in code (See BucketAllocator#DEFAULT_BUCKET_SIZES).

Default
none

hfile.format.version
Description
The HFile format version to use for new files. Version 3 adds support for tags in hfiles (See
http://hbase.apache.org/book.html#hbase.tags). Also see the configuration
'hbase.replication.rpc.codec'.

Default
3

hfile.block.bloom.cacheonwrite
Description
Enables cache-on-write for inline blocks of a compound Bloom filter.

Default
false

io.storefile.bloom.block.size
Description
The size in bytes of a single block ("chunk") of a compound Bloom filter. This size is
approximate, because Bloom blocks can only be inserted at data block boundaries, and the
number of keys per data block varies.

Default
131072

hbase.rs.cacheblocksonwrite
Description
Whether an HFile block should be added to the block cache when the block is finished.

Default
false

hbase.rpc.timeout
Description
This is for the RPC layer to define how long (in milliseconds) HBase client applications take
for a remote call to time out. It uses pings to check connections but will eventually throw a
TimeoutException.

Default
60000

hbase.client.operation.timeout
Description
Operation timeout is a top-level restriction (in milliseconds) that ensures a blocking operation
in Table will not be blocked for longer than this. For each operation, if an RPC request fails
because of a timeout or another reason, the client retries until it succeeds or throws
RetriesExhaustedException. But if the total blocking time reaches the operation timeout before
the retries are exhausted, it breaks early and throws SocketTimeoutException.

Default
1200000
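
The two timeouts work together: hbase.rpc.timeout bounds each individual remote call, while hbase.client.operation.timeout caps the whole retry loop. A client-side fragment restating the defaults might look like this (values shown are the defaults, for illustration only):

```xml
<!-- hbase-site.xml (client side): per-call vs. whole-operation timeouts -->
<property>
  <name>hbase.rpc.timeout</name>
  <!-- each individual RPC may take up to 60 s -->
  <value>60000</value>
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <!-- the whole operation, retries included, may take up to 20 min -->
  <value>1200000</value>
</property>
```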

hbase.cells.scanned.per.heartbeat.check
Description
The number of cells scanned in between heartbeat checks. Heartbeat checks occur during the
processing of scans to determine whether or not the server should stop scanning in order to
send back a heartbeat message to the client. Heartbeat messages are used to keep the client-
server connection alive during long running scans. Small values mean that the heartbeat checks
will occur more often and thus will provide a tighter bound on the execution time of the scan.
Larger values mean that the heartbeat checks occur less frequently.

Default
10000

hbase.rpc.shortoperation.timeout
Description
This is another version of "hbase.rpc.timeout". For RPC operations within the cluster, we rely
on this configuration to set a short timeout limit on short operations. For example, a short RPC
timeout for a region server trying to report to the active master can speed up the master
failover process.

Default
10000

hbase.ipc.client.tcpnodelay
Description
Set no delay on rpc socket connections. See http://docs.oracle.com/javase/1.5.0/docs/api/java/net/
Socket.html#getTcpNoDelay()

Default
true

hbase.regionserver.hostname
Description
This config is for experts: don’t set its value unless you really know what you are doing. When
set to a non-empty value, this represents the (external facing) hostname for the underlying
server. See https://issues.apache.org/jira/browse/HBASE-12954 for details.

Default
none

hbase.regionserver.hostname.disable.master.reversedns
Description
This config is for experts: don’t set its value unless you really know what you are doing. When
set to true, regionserver will use the current node hostname for the servername and HMaster
will skip reverse DNS lookup and use the hostname sent by regionserver instead. Note that this
config and hbase.regionserver.hostname are mutually exclusive. See https://issues.apache.org/
jira/browse/HBASE-18226 for more details.

Default
false

hbase.master.keytab.file
Description
Full path to the kerberos keytab file to use for logging in the configured HMaster server
principal.

Default
none

hbase.master.kerberos.principal
Description
Ex. "hbase/[email protected]". The kerberos principal name that should be used to run the
HMaster process. The principal name should be in the form: user/hostname@DOMAIN. If
"_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the
running instance.

Default
none

hbase.regionserver.keytab.file
Description
Full path to the kerberos keytab file to use for logging in the configured HRegionServer server
principal.

Default
none

hbase.regionserver.kerberos.principal
Description
Ex. "hbase/[email protected]". The kerberos principal name that should be used to run the
HRegionServer process. The principal name should be in the form: user/hostname@DOMAIN. If
"_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the
running instance. An entry for this principal must exist in the file specified in
hbase.regionserver.keytab.file

Default
none
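
A minimal secure-cluster fragment combining the keytab and principal settings might look like the following (the realm and keytab path are hypothetical examples, not defaults):

```xml
<!-- hbase-site.xml: illustrative kerberos login settings for a regionserver -->
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <!-- _HOST is replaced with the actual hostname of the running instance -->
  <value>hbase/[email protected]</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <!-- hypothetical path; must contain an entry for the principal above -->
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
```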

hadoop.policy.file
Description
The policy configuration file used by RPC servers to make authorization decisions on client
requests. Only used when HBase security is enabled.

Default
hbase-policy.xml

hbase.superuser
Description
List of users or groups (comma-separated), who are allowed full privileges, regardless of stored
ACLs, across the cluster. Only used when HBase security is enabled.

Default
none

hbase.auth.key.update.interval
Description
The update interval for master key for authentication tokens in servers in milliseconds. Only
used when HBase security is enabled.

Default
86400000

hbase.auth.token.max.lifetime
Description
The maximum lifetime in milliseconds after which an authentication token expires. Only used
when HBase security is enabled.

Default
604800000

hbase.ipc.client.fallback-to-simple-auth-allowed
Description
When a client is configured to attempt a secure connection, but attempts to connect to an
insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure)
authentication. This setting controls whether or not the client will accept this instruction from
the server. When false (the default), the client will not allow the fallback to SIMPLE
authentication, and will abort the connection.

Default
false

hbase.ipc.server.fallback-to-simple-auth-allowed
Description
When a server is configured to require secure connections, it will reject connection attempts
from clients using SASL SIMPLE (unsecure) authentication. This setting allows secure servers to
accept SASL SIMPLE connections from clients when the client requests. When false (the default),
the server will not allow the fallback to SIMPLE authentication, and will reject the connection.
WARNING: This setting should ONLY be used as a temporary measure while converting clients
over to secure authentication. It MUST BE DISABLED for secure operation.

Default
false

hbase.display.keys
Description
When this is set to true the webUI and such will display all start/end keys as part of the table
details, region names, etc. When this is set to false, the keys are hidden.

Default
true

hbase.coprocessor.enabled
Description
Enables or disables coprocessor loading. If 'false' (disabled), any other coprocessor related
configuration will be ignored.

Default
true

hbase.coprocessor.user.enabled
Description
Enables or disables user (aka. table) coprocessor loading. If 'false' (disabled), any table
coprocessor attributes in table descriptors will be ignored. If "hbase.coprocessor.enabled" is
'false' this setting has no effect.

Default
true

hbase.coprocessor.region.classes
Description
A comma-separated list of Coprocessors that are loaded by default on all tables. For any override
coprocessor method, these classes will be called in order. After implementing your own
Coprocessor, just put it in HBase’s classpath and add the fully qualified class name here. A
coprocessor can also be loaded on demand by setting an attribute in the table descriptor
(HTableDescriptor).

Default
none

hbase.coprocessor.master.classes
Description
A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors
that are loaded by default on the active HMaster process. For any implemented coprocessor
methods, the listed classes will be called in order. After implementing your own
MasterObserver, just put it in HBase’s classpath and add the fully qualified class name here.

Default
none

hbase.coprocessor.abortonerror
Description
Set to true to cause the hosting server (master or regionserver) to abort if a coprocessor fails to
load, fails to initialize, or throws an unexpected Throwable object. Setting this to false will allow
the server to continue execution but the system wide state of the coprocessor in question will
become inconsistent as it will be properly executing in only a subset of servers, so this is most
useful for debugging only.

Default
true

hbase.rest.port
Description
The port for the HBase REST server.

Default
8080

hbase.rest.readonly
Description
Defines the mode the REST server will be started in. Possible values are: false: All HTTP methods
are permitted - GET/PUT/POST/DELETE. true: Only the GET method is permitted.

Default
false
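
For example, a read-only REST gateway on a non-default port could be sketched as follows (port 8085 is an arbitrary illustrative choice):

```xml
<!-- hbase-site.xml: illustrative read-only REST gateway -->
<property>
  <name>hbase.rest.port</name>
  <value>8085</value>
</property>
<property>
  <name>hbase.rest.readonly</name>
  <!-- only the GET method is permitted -->
  <value>true</value>
</property>
```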

hbase.rest.threads.max
Description
The maximum number of threads of the REST server thread pool. Threads in the pool are reused
to process REST requests. This controls the maximum number of requests processed
concurrently. It may help to control the memory used by the REST server to avoid OOM issues. If
the thread pool is full, incoming requests will be queued up and wait for some free threads.

Default
100

hbase.rest.threads.min
Description
The minimum number of threads of the REST server thread pool. The thread pool always has at
least this number of threads so the REST server is ready to serve incoming requests.

Default
2

hbase.rest.support.proxyuser
Description
Enables running the REST server to support proxy-user mode.

Default
false

hbase.defaults.for.version.skip
Description
Set to true to skip the 'hbase.defaults.for.version' check. Setting this to true can be useful in
contexts other than the other side of a maven generation; i.e. running in an IDE. You’ll want to
set this boolean to true to avoid seeing the RuntimeException complaint: "hbase-default.xml file
seems to be for and old version of HBase (\${hbase.version}), this version is X.X.X-SNAPSHOT"

Default
false

hbase.table.lock.enable
Description
Set to true to enable locking the table in zookeeper for schema change operations. Table locking
from master prevents concurrent schema modifications to corrupt table state.

Default
true

hbase.table.max.rowsize
Description
Maximum size of a single row in bytes (default is 1 GB) for Get’ting or Scan’ning without the
in-row scan flag set. If the row size exceeds this limit, a RowTooBigException is thrown to the client.

Default
1073741824

hbase.thrift.minWorkerThreads
Description
The "core size" of the thread pool. New threads are created on every connection until this many
threads are created.

Default
16

hbase.thrift.maxWorkerThreads
Description
The maximum size of the thread pool. When the pending request queue overflows, new threads
are created until their number reaches this number. After that, the server starts dropping
connections.

Default
1000

hbase.thrift.maxQueuedRequests
Description
The maximum number of pending Thrift connections waiting in the queue. If there are no idle
threads in the pool, the server queues requests. Only when the queue overflows, new threads
are added, up to hbase.thrift.maxQueuedRequests threads.

Default
1000

hbase.regionserver.thrift.framed
Description
Use Thrift TFramedTransport on the server side. This is the recommended transport for thrift
servers and requires a similar setting on the client side. Changing this to false will select the
default transport, vulnerable to DoS when malformed requests are issued due to THRIFT-601.

Default
false

hbase.regionserver.thrift.framed.max_frame_size_in_mb
Description
Default frame size when using framed transport, in MB

Default
2

hbase.regionserver.thrift.compact
Description
Use Thrift TCompactProtocol binary serialization protocol.

Default
false

hbase.rootdir.perms
Description
FS permissions for the root data subdirectory in a secure (kerberos) setup. When the master
starts, it creates the rootdir with these permissions, or sets the permissions if they do not match.

Default
700

hbase.wal.dir.perms
Description
FS permissions for the root WAL directory in a secure (kerberos) setup. When the master starts,
it creates the WAL dir with these permissions, or sets the permissions if they do not match.

Default
700

hbase.data.umask.enable
Description
If true, file permissions (see hbase.data.umask) are assigned to the files written by the regionserver.

Default
false

hbase.data.umask
Description
File permissions that should be used to write data files when hbase.data.umask.enable is true

Default
000

hbase.snapshot.enabled
Description
Set to true to allow snapshots to be taken / restored / cloned.

Default
true

hbase.snapshot.restore.take.failsafe.snapshot
Description
Set to true to take a snapshot before the restore operation. The snapshot taken will be used in
case of failure, to restore the previous state. At the end of the restore operation this snapshot will
be deleted.

Default
true

hbase.snapshot.restore.failsafe.name
Description
Name of the failsafe snapshot taken by the restore operation. You can use the {snapshot.name},
{table.name} and {restore.timestamp} variables to create a name based on what you are
restoring.

Default

hbase-failsafe-{snapshot.name}-{restore.timestamp}

hbase.snapshot.working.dir
Description
Location where the snapshotting process will occur. The location of the completed snapshots
will not change, but the temporary directory where the snapshot process occurs will be set to
this location. This can be a separate filesystem than the root directory, for performance increase
purposes. See HBASE-21098 for more information

Default
none

hbase.server.compactchecker.interval.multiplier
Description
The number that determines how often we scan to see if compaction is necessary. Normally,
compactions are done after some events (such as a memstore flush), but if a region didn’t receive
many writes for some time, or due to different compaction policies, it may be necessary to check
it periodically. The interval between checks is hbase.server.compactchecker.interval.multiplier
multiplied by hbase.server.thread.wakefrequency.

Default
1000

hbase.lease.recovery.timeout
Description
How long we wait on dfs lease recovery in total before giving up.

Default
900000

hbase.lease.recovery.dfs.timeout
Description
How long between dfs recover lease invocations. Should be larger than the sum of the time it
takes for the namenode to issue a block recovery command as part of datanode recovery
(dfs.heartbeat.interval) and the time it takes for the primary datanode performing the block
recovery to time out on a dead datanode (usually dfs.client.socket-timeout). See the end of
HBASE-8389 for more.

Default
64000

hbase.column.max.version
Description
New column family descriptors will use this value as the default number of versions to keep.

Default
1

dfs.client.read.shortcircuit
Description
If set to true, this configuration parameter enables short-circuit local reads.

Default
false

dfs.domain.socket.path
Description
This is a path to a UNIX domain socket that will be used for communication between the
DataNode and local HDFS clients, if dfs.client.read.shortcircuit is set to true. If the string "_PORT"
is present in this path, it will be replaced by the TCP port of the DataNode. Be careful about
permissions for the directory that hosts the shared domain socket; dfsclient will complain if
open to other users than the HBase user.

Default
none

hbase.dfs.client.read.shortcircuit.buffer.size
Description
If the DFSClient configuration dfs.client.read.shortcircuit.buffer.size is unset, we will use what is
configured here as the short circuit read default direct byte buffer size. DFSClient native default
is 1MB; HBase keeps its HDFS files open so number of file blocks * 1MB soon starts to add up and
threaten OOME because of a shortage of direct memory. So, we set it down from the default.
Make it > the default hbase block size set in the HColumnDescriptor which is usually 64k.

Default
131072

hbase.regionserver.checksum.verify
Description
If set to true (the default), HBase verifies the checksums for hfile blocks. HBase writes
checksums inline with the data when it writes out hfiles. HDFS (as of this writing) writes
checksums to a separate file than the data file necessitating extra seeks. Setting this flag saves
some on i/o. Checksum verification by HDFS will be internally disabled on hfile streams when
this flag is set. If the hbase-checksum verification fails, we will switch back to using HDFS
checksums (so do not disable HDFS checksums! And besides this feature applies to hfiles only,
not to WALs). If this parameter is set to false, then hbase will not verify any checksums, instead
it will depend on checksum verification being done in the HDFS client.

Default
true

hbase.hstore.bytes.per.checksum
Description
Number of bytes in a newly created checksum chunk for HBase-level checksums in hfile blocks.

Default

16384

hbase.hstore.checksum.algorithm
Description
Name of an algorithm that is used to compute checksums. Possible values are NULL, CRC32,
CRC32C.

Default
CRC32C

hbase.client.scanner.max.result.size
Description
Maximum number of bytes returned when calling a scanner’s next method. Note that when a
single row is larger than this limit, the row is still returned completely. The default value is 2MB,
which is good for 1GbE networks. With faster and/or higher-latency networks this value should be
increased.

Default
2097152

hbase.server.scanner.max.result.size
Description
Maximum number of bytes returned when calling a scanner’s next method. Note that when a
single row is larger than this limit the row is still returned completely. The default value is
100MB. This is a safety setting to protect the server from OOM situations.

Default
104857600

hbase.status.published
Description
This setting activates the publication by the master of the status of the region server. When a
region server dies and its recovery starts, the master will push this information to the client
application, to let them cut the connection immediately instead of waiting for a timeout.

Default
false

hbase.status.publisher.class
Description
Implementation of the status publication with a multicast message.

Default
org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher

hbase.status.listener.class
Description
Implementation of the status listener with a multicast message.

Default
org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener

hbase.status.multicast.address.ip
Description
Multicast address to use for the status publication by multicast.

Default
226.1.1.3

hbase.status.multicast.address.port
Description
Multicast port to use for the status publication by multicast.

Default
16100

hbase.dynamic.jars.dir
Description
The directory from which the custom filter JARs can be loaded dynamically by the region server
without the need to restart. However, an already loaded filter/co-processor class would not be
un-loaded. See HBASE-1936 for more details. Does not apply to coprocessors.

Default
${hbase.rootdir}/lib

hbase.security.authentication
Description
Controls whether or not secure authentication is enabled for HBase. Possible values are 'simple'
(no authentication), and 'kerberos'.

Default
simple
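
Switching a cluster from the default to kerberos authentication is a one-line change (shown here in isolation; a real secure setup also needs the keytab and principal properties described above):

```xml
<!-- hbase-site.xml: enable kerberos authentication -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
```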

hbase.rest.filter.classes
Description
Servlet filters for REST service.

Default
org.apache.hadoop.hbase.rest.filter.GzipFilter

hbase.master.loadbalancer.class
Description
Class used to execute the regions balancing when the period occurs. See the class comment for
more on how it works http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/
balancer/StochasticLoadBalancer.html It replaces the DefaultLoadBalancer as the default (since
renamed as the SimpleLoadBalancer).

Default
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer

hbase.master.loadbalance.bytable
Description
Whether to factor in the table name when the balancer runs, i.e. balance regions on a per-table
basis. Default: false.

Default
false

hbase.master.normalizer.class
Description
Class used to execute the region normalization when the period occurs. See the class comment
for more on how it works http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/
normalizer/SimpleRegionNormalizer.html

Default
org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer

hbase.rest.csrf.enabled
Description
Set to true to enable protection against cross-site request forgery (CSRF)

Default
false

hbase.rest-csrf.browser-useragents-regex
Description
A comma-separated list of regular expressions used to match against an HTTP request’s User-
Agent header when protection against cross-site request forgery (CSRF) is enabled for REST
server by setting hbase.rest.csrf.enabled to true. If the incoming User-Agent matches any of these
regular expressions, then the request is considered to be sent by a browser, and therefore CSRF
prevention is enforced. If the request’s User-Agent does not match any of these regular
expressions, then the request is considered to be sent by something other than a browser, such
as scripted automation. In this case, CSRF is not a potential attack vector, so the prevention is not
enforced. This helps achieve backwards-compatibility with existing automation that has not
been updated to send the CSRF prevention header.

Default
Mozilla.*,Opera.*

hbase.security.exec.permission.checks
Description
If this setting is enabled and ACL based access control is active (the AccessController coprocessor
is installed either as a system coprocessor or on a table as a table coprocessor) then you must
grant all relevant users EXEC privilege if they require the ability to execute coprocessor
endpoint calls. EXEC privilege, like any other permission, can be granted globally to a user, or to
a user on a per table or per namespace basis. For more information on coprocessor endpoints,
see the coprocessor section of the HBase online manual. For more information on granting or
revoking permissions using the AccessController, see the security section of the HBase online
manual.

Default
false

hbase.procedure.regionserver.classes
Description
A comma-separated list of org.apache.hadoop.hbase.procedure.RegionServerProcedureManager
procedure managers that are loaded by default on the active HRegionServer process. The
lifecycle methods (init/start/stop) will be called by the active HRegionServer process to perform
the specific globally barriered procedure. After implementing your own
RegionServerProcedureManager, just put it in HBase’s classpath and add the fully qualified class
name here.

Default
none

hbase.procedure.master.classes
Description
A comma-separated list of org.apache.hadoop.hbase.procedure.MasterProcedureManager
procedure managers that are loaded by default on the active HMaster process. A procedure is
identified by its signature and users can use the signature and an instant name to trigger an
execution of a globally barriered procedure. After implementing your own
MasterProcedureManager, just put it in HBase’s classpath and add the fully qualified class name
here.

Default
none

hbase.coordinated.state.manager.class
Description
Fully qualified name of class implementing coordinated state manager.

Default
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager

hbase.regionserver.storefile.refresh.period
Description
The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this
feature is disabled. Secondary regions see new files (from flushes and compactions) from the
primary once the secondary region refreshes the list of files in the region (there is no
notification mechanism). But too frequent refreshes might cause extra Namenode pressure. If
the files cannot be refreshed for longer than HFile TTL (hbase.master.hfilecleaner.ttl) the
requests are rejected. Configuring HFile TTL to a larger value is also recommended with this
setting.

Default
0

hbase.region.replica.replication.enabled
Description
Whether asynchronous WAL replication to the secondary region replicas is enabled or not. If
this is enabled, a replication peer named "region_replica_replication" will be created which will
tail the logs and replicate the mutations to region replicas for tables that have region replication
> 1. If this is enabled once, disabling this replication also requires disabling the replication peer
using shell or Admin java class. Replication to secondary region replicas works over standard
inter-cluster replication.

Default
false
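
To use read replicas, this setting is typically paired with a non-zero store file refresh period so secondaries pick up flushed and compacted files (the 30000 ms period is an illustrative value, not a recommendation):

```xml
<!-- hbase-site.xml: illustrative region replica replication setup -->
<property>
  <name>hbase.region.replica.replication.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.storefile.refresh.period</name>
  <!-- refresh secondary store file lists every 30 s -->
  <value>30000</value>
</property>
```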

hbase.http.filter.initializers
Description
A comma separated list of class names. Each class in the list must extend
org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will be initialized. Then,
the Filter will be applied to all user facing jsp and servlet web pages. The ordering of the list
defines the ordering of the filters. The default StaticUserWebFilter add a user principal as
defined by the hbase.http.staticuser.user property.

Default
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter

hbase.security.visibility.mutations.checkauths
Description
If enabled, this property will check whether the labels in the visibility expression are associated
with the user issuing the mutation.

Default
false

hbase.http.max.threads
Description
The maximum number of threads that the HTTP Server will create in its ThreadPool.

Default
16

hbase.replication.rpc.codec
Description
The codec that is to be used when replication is enabled so that the tags are also replicated. This
is used along with HFileV3 which supports tags in them. If tags are not used or if the hfile
version used is HFileV2 then KeyValueCodec can be used as the replication codec. Note that
using KeyValueCodecWithTags for replication when there are no tags causes no harm.

Default
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags

hbase.replication.source.maxthreads
Description
The maximum number of threads any replication source will use for shipping edits to the sinks
in parallel. This also limits the number of chunks each replication batch is broken into. Larger
values can improve the replication throughput between the master and slave clusters. The
default of 10 will rarely need to be changed.

Default
10

hbase.http.staticuser.user
Description
The user name to filter as, on static web filters while rendering content. An example use is the
HDFS web UI (user to be used for browsing files).

Default
dr.stack

hbase.regionserver.handler.abort.on.error.percent
Description
The percent of region server RPC threads that must fail to abort the RS. -1 disables aborting;
0 aborts if even a single handler has died; 0.x aborts only when this percent of handlers have
died; 1 aborts only when all of the handlers have died.

Default
0.5

hbase.mob.file.cache.size
Description
Number of opened file handlers to cache. A larger value will benefit reads by providing more
file handlers per mob file cache and will reduce frequent file opening and closing. However, if
this is set too high, it could lead to a "too many opened file handlers" error. The default value is 1000.

Default
1000

hbase.mob.cache.evict.period
Description
The amount of time in seconds before the mob cache evicts cached mob files. The default value
is 3600 seconds.

Default
3600

hbase.mob.cache.evict.remain.ratio
Description
The ratio (between 0.0 and 1.0) of files that remains cached after an eviction is triggered when
the number of cached mob files exceeds the hbase.mob.file.cache.size. The default value is 0.5f.

Default
0.5f

hbase.master.mob.ttl.cleaner.period
Description
The period that ExpiredMobFileCleanerChore runs. The unit is second. The default value is one
day. The MOB file name uses only the date part of the file creation time in it. We use this time for
deciding TTL expiry of the files. So the removal of TTL expired files might be delayed. The max
delay might be 24 hrs.

Default
86400

hbase.mob.compaction.mergeable.threshold
Description
If the size of a mob file is less than this value, it’s regarded as a small file and needs to be merged
in mob compaction. The default value is 1280MB.

Default
1342177280

hbase.mob.delfile.max.count
Description
The max number of del files that is allowed in the mob compaction. In the mob compaction,
when the number of existing del files is larger than this value, they are merged until the number
of del files is not larger than this value. The default value is 3.

Default
3

hbase.mob.compaction.batch.size
Description
The max number of mob files that is allowed in a batch of the mob compaction. The mob
compaction merges the small mob files into bigger ones. If the number of small files is very
large, it could lead to a "too many opened file handlers" error in the merge, so the merge has to
be split into batches. This value limits the number of mob files that are selected in a batch of the
mob compaction. The default value is 100.

Default
100

hbase.mob.compaction.chore.period
Description

The period that MobCompactionChore runs. The unit is second. The default value is one week.

Default
604800

hbase.mob.compactor.class
Description
Implementation of mob compactor, the default one is PartitionedMobCompactor.

Default
org.apache.hadoop.hbase.mob.compactions.PartitionedMobCompactor

hbase.mob.compaction.threads.max
Description
The max number of threads used in MobCompactor.

Default
1

hbase.snapshot.master.timeout.millis
Description
Timeout for master for the snapshot procedure execution.

Default
300000

hbase.snapshot.region.timeout
Description
Timeout for regionservers to keep threads in snapshot request pool waiting.

Default
300000

hbase.rpc.rows.warning.threshold
Description
Number of rows in a batch operation above which a warning will be logged.

Default
5000

hbase.master.wait.on.service.seconds
Description
Default is 5 minutes. Make it 30 seconds for tests. See HBASE-19794 for some context.

Default
30

hbase.master.cleaner.snapshot.interval
Description

Snapshot Cleanup chore interval in milliseconds. The cleanup thread keeps running at this
interval to find all snapshots that are expired based on TTL and delete them.

Default
1800000

hbase.master.snapshot.ttl
Description
Default Snapshot TTL to be considered when the user does not specify a TTL while creating a
snapshot. The default value 0 indicates FOREVER - the snapshot will not be automatically
deleted until it is manually deleted.

Default
0

hbase.master.regions.recovery.check.interval
Description
Regions Recovery Chore interval in milliseconds. This chore keeps running at this interval to
find all regions with configurable max store file ref count and reopens them.

Default
1200000

hbase.regions.recovery.store.file.ref.count
Description
A very large ref count on a compacted store file indicates a ref leak on that file. Such files cannot
be removed even after being invalidated via compaction. The only way to recover in such a
scenario is to reopen the region, which releases all its resources, like the ref count, leases, etc.
This config represents the store file ref count threshold considered for reopening regions: any
region with a compacted store file ref count > this value is eligible for reopening by the master.
Here, we take the max refCount among all refCounts on all compacted-away store files that
belong to a particular region. The default value -1 indicates this feature is turned off. Only a
positive integer value should be provided to enable this feature.

Default
-1

hbase.regionserver.slowlog.ringbuffer.size
Description
Default size of the ring buffer maintained by each RegionServer in order to store online slowlog
responses. This is an in-memory ring buffer of requests that were judged to be too slow, in
addition to the responseTooSlow logging. The in-memory representation is complete. For more
details, please look into the doc section: Get Slow Response Log from shell

Default
256

hbase.regionserver.slowlog.buffer.enabled
Description
Indicates whether RegionServers keep a ring buffer that stores online slow logs in FIFO manner
with a limited number of entries. The size of the ring buffer is set by the config
hbase.regionserver.slowlog.ringbuffer.size. The default value is false; turn this on to get the
latest slowlog responses with complete data.

Default
false

hbase.regionserver.slowlog.systable.enabled
Description
Should be enabled only if hbase.regionserver.slowlog.buffer.enabled is enabled. If enabled
(true), all slow/large RPC logs are persisted to the system table hbase:slowlog (in addition to the in-
memory ring buffer at each RegionServer). The records are stored in increasing order of time.
Operators can scan the table with various combinations of ColumnValueFilter. More details are
provided in the doc section: "Get Slow/Large Response Logs from System table hbase:slowlog"

Default
false

hbase.rpc.rows.size.threshold.reject
Description
If true, the RegionServer aborts batch Put/Delete requests whose number of rows exceeds the
threshold defined by the config hbase.rpc.rows.warning.threshold. The default value is false, so
by default only a warning is logged. Turn this config on to prevent the RegionServer from
serving very large batches of rows; discarding oversized batch requests improves CPU usage.

Default
false
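
Taken together with hbase.rpc.rows.warning.threshold, this setting lets you reject oversized batches outright. A sketch of the corresponding hbase-site.xml entries (the threshold of 4000 is an arbitrary illustration, not a recommendation):

```
<property>
  <name>hbase.rpc.rows.warning.threshold</name>
  <value>4000</value>
</property>
<property>
  <name>hbase.rpc.rows.size.threshold.reject</name>
  <value>true</value>
</property>
```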

7.3. hbase-env.sh
Set HBase environment variables in this file. Examples include options to pass the JVM on start of
an HBase daemon, such as heap size and garbage collector configs. You can also set log directories,
niceness, ssh options, where to locate process pid files, etc. Open the file at conf/hbase-env.sh and
peruse its content. Each option is fairly well documented. Add your own environment variables
here if you want them read by HBase daemons on startup.

Changes here will require a cluster restart for HBase to notice the change.

7.4. log4j.properties
Edit this file to change the rate at which HBase log files are rolled and to change the level at which
HBase logs messages.

Changes here will require a cluster restart for HBase to notice the change though log levels can be
changed for particular daemons via the HBase UI.

7.5. Client configuration and dependencies connecting to an HBase cluster
If you are running HBase in standalone mode, you don’t need to configure anything for your client
to work, provided that client and server are on the same machine.

Starting with release 3.0.0, the default connection registry has been switched to a master based
implementation. Refer to Master Registry (new as of 2.3.0) for more details about what a connection
registry is and the implications of this change. Depending on your HBase version, the following is
the expected minimal client configuration.

7.5.1. Up until 2.x.y releases

In 2.x.y releases, the default connection registry was based on ZooKeeper as the source of truth.
This means that clients always looked up ZooKeeper znodes to fetch the required metadata. For
example, if an active master crashed and a new master was elected, clients looked up the master
znode to fetch the active master address (and similarly for meta locations). This meant that clients
needed access to ZooKeeper and had to know the ZooKeeper ensemble information before they
could do anything. This can be configured in the client configuration xml as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>example1,example2,example3</value>
<description> Zookeeper ensemble information</description>
</property>
</configuration>

7.5.2. Starting 3.0.0 release

The default implementation was switched to a master based connection registry. With this
implementation, clients always contact the active or stand-by master RPC end points to fetch the
connection registry information. This means that clients should have access to the list of active
and stand-by master end points before they can do anything. This can be configured in the client
configuration xml as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.masters</name>
<value>example1,example2,example3</value>
<description>List of master rpc end points for the hbase cluster.</description>
</property>
</configuration>

The configuration value for hbase.masters is a comma separated list of host:port values. If no port
value is specified, the default of 16000 is assumed.

Usually this configuration is kept in hbase-site.xml and is picked up by the client from the
CLASSPATH.

If you are configuring an IDE to run an HBase client, you should include the conf/ directory on your
classpath so hbase-site.xml settings can be found (or add src/test/resources to pick up the hbase-
site.xml used by tests).

For Java applications using Maven, including the hbase-shaded-client module is the recommended
dependency when connecting to a cluster:

<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-shaded-client</artifactId>
<version>2.0.0</version>
</dependency>

7.5.3. Java client configuration

The configuration used by a Java client is kept in an HBaseConfiguration instance.

The factory method on HBaseConfiguration, HBaseConfiguration.create(), on invocation reads in
the content of the first hbase-site.xml found on the client’s CLASSPATH, if one is present (the
invocation also factors in any hbase-default.xml found; an hbase-default.xml ships inside the
hbase-X.X.X.jar). It is also possible to specify configuration directly without having to read from an
hbase-site.xml.

For example, to set the ZooKeeper ensemble for the cluster programmatically do as follows:

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost"); // Until 2.x.y versions
// ---- or ----
config.set("hbase.masters", "localhost:1234"); // Starting 3.0.0 version

7.6. Timeout settings
HBase provides a wide variety of timeout settings to limit the execution time of various remote
operations.

• hbase.rpc.timeout

• hbase.rpc.read.timeout

• hbase.rpc.write.timeout

• hbase.client.operation.timeout

• hbase.client.meta.operation.timeout

• hbase.client.scanner.timeout.period

The hbase.rpc.timeout property limits how long a single RPC call can run before timing out. To fine
tune read or write related RPC timeouts set hbase.rpc.read.timeout and hbase.rpc.write.timeout
configuration properties. In the absence of these properties hbase.rpc.timeout will be used.

A higher-level timeout is hbase.client.operation.timeout, which is valid for each client call. When
an RPC call fails, for instance with a timeout due to hbase.rpc.timeout, it will be retried until
hbase.client.operation.timeout is reached. The client operation timeout for system tables can be
fine tuned by setting the hbase.client.meta.operation.timeout configuration value. When this is
not set, its value falls back to hbase.client.operation.timeout.

Timeouts for scan operations are controlled differently. Use the
hbase.client.scanner.timeout.period property to set this timeout.
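
The properties above go in the client’s hbase-site.xml. As a sketch, a configuration that lengthens the single-RPC timeout while keeping an overall bound on each operation might look like this (the values are illustrative only, not recommendations):

```
<property>
  <name>hbase.rpc.timeout</name>
  <value>90000</value>
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>300000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>
</property>
```

With these values a single RPC may run up to 90 seconds, retries of a failing call stop after 5 minutes overall, and a scanner may sit idle for up to 2 minutes.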

Chapter 8. Example Configurations
8.1. Basic Distributed HBase Install
Here is a basic configuration example for a distributed ten node cluster:

• The nodes are named example0, example1, etc., through node example9 in this example.

• The HBase Master and the HDFS NameNode are running on the node example0.

• RegionServers run on nodes example1-example9.

• A 3-node ZooKeeper ensemble runs on example1, example2, and example3 on the default ports.

• ZooKeeper data is persisted to the directory /export/zookeeper.

Below we show what the main configuration files — hbase-site.xml, regionservers, and hbase-
env.sh — found in the HBase conf directory might look like.

8.1.1. hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>example1,example2,example3</value>
<description>Comma separated list of servers in the ZooKeeper ensemble.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/export/zookeeper</value>
<description>Property from ZooKeeper config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://example0:8020/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed ZooKeeper
true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
</description>
</property>
</configuration>

8.1.2. regionservers

In this file you list the nodes that will run RegionServers. In our case, these nodes are example1-
example9.

example1
example2
example3
example4
example5
example6
example7
example8
example9

8.1.3. hbase-env.sh

The following lines in the hbase-env.sh file show how to set the JAVA_HOME environment variable
(required for HBase) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy
and paste this example, be sure to adjust the JAVA_HOME to suit your environment.

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0/

# The maximum amount of heap to use. Default is left to JVM default.
export HBASE_HEAPSIZE=4G

Use rsync to copy the content of the conf directory to all nodes of the cluster.

Chapter 9. The Important Configurations
Below we list some important configurations. We’ve divided this section into required configuration
and worth-a-look recommended configs.

9.1. Required Configurations


Review the os and hadoop sections.

9.1.1. Big Cluster Configurations

If you have a cluster with a lot of regions, it is possible that a RegionServer checks in briefly after
the Master starts while all the remaining RegionServers lag behind. This first server to check in
will be assigned all regions, which is not optimal. To prevent this scenario from happening, raise
the hbase.master.wait.on.regionservers.mintostart property from its default value of 1. See HBASE-
6389 Modify the conditions to ensure that Master waits for sufficient number of Region Servers
before starting region assignments for more detail.

9.2. Recommended Configurations


9.2.1. ZooKeeper Configuration

zookeeper.session.timeout

The default timeout is 90 seconds (specified in milliseconds). This means that if a server crashes, it
will be 90 seconds before the Master notices the crash and starts recovery. You might need to tune
the timeout down to a minute or even less so the Master notices failures sooner. Before changing
this value, be sure you have your JVM garbage collection configuration under control, otherwise, a
long garbage collection that lasts beyond the ZooKeeper session timeout will take out your
RegionServer. (You might be fine with this — you probably want recovery to start on the server if a
RegionServer has been in GC for a long period of time).

To change this configuration, edit hbase-site.xml, copy the changed file across the cluster and
restart.
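
For example, to lower the session timeout to one minute, you would add the following to hbase-site.xml (60000 milliseconds; an illustrative value, subject to the GC caveat above):

```
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
```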

We set this value high to save our having to field questions up on the mailing lists asking why a
RegionServer went down during a massive import. The usual cause is that their JVM is untuned and
they are running into long GC pauses. Our thinking is that while users are getting familiar with
HBase, we’d save them having to know all of its intricacies. Later when they’ve built some
confidence, then they can play with configuration such as this.

Number of ZooKeeper Instances

See zookeeper.

9.2.2. HDFS Configurations

dfs.datanode.failed.volumes.tolerated

This is the "…number of volumes that are allowed to fail before a DataNode stops offering service.
By default, any volume failure will cause a datanode to shutdown" from the hdfs-default.xml
description. You might want to set this to about half the amount of your available disks.

hbase.regionserver.handler.count

This setting defines the number of threads that are kept open to answer incoming requests to user
tables. The rule of thumb is to keep this number low when the payload per request approaches the
MB (big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs,
deletes). The total size of the queries in progress is limited by the setting
hbase.ipc.server.max.callqueue.size.

It is safe to set that number to the maximum number of incoming clients if their payload is small,
the typical example being a cluster that serves a website since puts aren’t typically buffered and
most of the operations are gets.

The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts that
are currently happening in a region server may impose too much pressure on its memory, or even
trigger an OutOfMemoryError. A RegionServer running on low memory will trigger its JVM’s
garbage collector to run more frequently up to a point where GC pauses become noticeable (the
reason being that all the memory used to keep all the requests' payloads cannot be trashed, no
matter how hard the garbage collector tries). After some time, the overall cluster throughput is
affected since every request that hits that RegionServer will take longer, which exacerbates the
problem even more.

You can get a sense of whether you have too few or too many handlers by turning on rpc.logging
on an individual RegionServer and then tailing its logs (queued requests consume memory).
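
As a sketch, the handler count is set in hbase-site.xml; the value 60 below is purely illustrative, and the right number depends on your payload sizes as described above:

```
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>60</value>
</property>
```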

9.2.3. Configuration for large memory machines

HBase ships with a reasonable, conservative configuration that will work on nearly all machine
types that people might want to test with. If you have larger machines — HBase has 8G and larger
heap — you might find the following configuration options helpful. TODO.

9.2.4. Compression

You should consider enabling ColumnFamily compression. There are several options that are near-
frictionless and in most all cases boost performance by reducing the size of StoreFiles and thus
reducing I/O.

See compression for more information.

9.2.5. Configuring the size and number of WAL files

HBase uses the WAL to recover memstore data that has not been flushed to disk in case of an RS
failure. These WAL files should be configured to be slightly smaller than the HDFS block size (by
default an HDFS block is 64Mb and a WAL file is ~60Mb).

HBase also has a limit on the number of WAL files, designed to ensure there’s never too much data
that needs to be replayed during recovery. This limit needs to be set according to memstore
configuration, so that all the necessary data will fit. It is recommended to allocate enough WAL
files to store at least that much data (when all memstores are close to full). For example, with a
16Gb RS heap, default memstore settings (0.4), and default WAL file size (~60Mb), 16Gb*0.4/60 gives
a starting point for the WAL file count of ~109. However, as all memstores are not expected to be
full all the time, fewer WAL files can be allocated.
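
The arithmetic above can be checked with a small calculation; the constants mirror the example in the text (16 GB RegionServer heap, 0.4 memstore fraction, ~60 MB per WAL file):

```java
public class WalFileCountEstimate {
    public static void main(String[] args) {
        double heapMb = 16 * 1024;     // 16 GB RegionServer heap, in MB
        double memstoreFraction = 0.4; // default global memstore fraction
        double walFileMb = 60;         // ~60 MB per WAL file

        // WAL files needed to cover all memstores when they are near full
        long walCount = Math.round(heapMb * memstoreFraction / walFileMb);
        System.out.println(walCount);  // prints 109
    }
}
```

This is only a starting point; since memstores are rarely all full at once, a lower limit is usually fine.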

9.2.6. Managed Splitting

HBase generally handles splitting of your regions based upon the settings in your hbase-default.xml
and hbase-site.xml configuration files. Important settings include
hbase.regionserver.region.split.policy, hbase.hregion.max.filesize,
hbase.regionserver.regionSplitLimit. A simplistic view of splitting is that when a region grows to
hbase.hregion.max.filesize, it is split. For most usage patterns, you should use automatic splitting.
See manual region splitting decisions for more information about manual region splitting.

Instead of allowing HBase to split your regions automatically, you can choose to manage the
splitting yourself. Manually managing splits works if you know your keyspace well; otherwise, let
HBase figure out where to split for you. Manual splitting can mitigate region creation and
movement under load. It also makes region boundaries known and invariant (if you disable region
splitting). If you use manual splits, it is easier to do staggered, time-based major compactions to
spread out your network IO load.

Disable Automatic Splitting


To disable automatic splitting, you can set the region split policy in either the cluster configuration
or the table configuration to org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy.
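
In cluster configuration this can be sketched as the following hbase-site.xml fragment (the same policy can instead be set per table at creation time):

```
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy</value>
</property>
```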

Automatic Splitting Is Recommended


If you disable automatic splits to diagnose a problem or during a period of fast
 data growth, it is recommended to re-enable them when your situation becomes
more stable. The potential benefits of managing region splits yourself are not
undisputed.

Determine the Optimal Number of Pre-Split Regions


The optimal number of pre-split regions depends on your application and environment. A good rule
of thumb is to start with 10 pre-split regions per server and watch as data grows over time. It is
better to err on the side of too few regions and perform rolling splits later. The optimal number of
regions depends upon the largest StoreFile in your region. The size of the largest StoreFile will
increase with time if the amount of data grows. The goal is for the largest region to be just large
enough that the compaction selection algorithm only compacts it during a timed major compaction.
Otherwise, the cluster can be prone to compaction storms with a large number of regions under
compaction at the same time. It is important to understand that the data growth causes compaction
storms and not the manual split decision.

If the regions are split into too many large regions, you can increase the major compaction interval
by configuring HConstants.MAJOR_COMPACTION_PERIOD. The
org.apache.hadoop.hbase.util.RegionSplitter utility also provides a network-IO-safe rolling split of
all regions.

9.2.7. Managed Compactions

By default, major compactions are scheduled to run once in a 7-day period.

If you need to control exactly when and how often major compaction runs, you can disable
managed major compactions. See the entry for hbase.hregion.majorcompaction in the
compaction.parameters table for details.
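
As a sketch, setting hbase.hregion.majorcompaction to 0 in hbase-site.xml turns off time-based major compactions, leaving you responsible for triggering them yourself:

```
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```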

Do Not Disable Major Compactions


Major compactions are absolutely necessary for StoreFile clean-up. Do not disable
 them altogether. You can run major compactions manually via the HBase shell or
via the Admin API.

For more information about compactions and the compaction file selection process, see compaction

9.2.8. Speculative Execution

Speculative Execution of MapReduce tasks is on by default, and for HBase clusters it is generally
advised to turn off Speculative Execution at a system-level unless you need it for a specific case,
where it can be configured per-job. Set the properties mapreduce.map.speculative and
mapreduce.reduce.speculative to false.
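
These are Hadoop properties, typically set in mapred-site.xml on the nodes or clients that submit MapReduce jobs; a minimal sketch:

```
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```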

9.3. Other Configurations


9.3.1. Balancer

The balancer is a periodic operation which is run on the master to redistribute regions on the
cluster. It is configured via hbase.balancer.period and defaults to 300000 (5 minutes).

See master.processes.loadbalancer for more information on the LoadBalancer.

9.3.2. Disabling Blockcache

Do not turn off block cache (You’d do it by setting hfile.block.cache.size to zero). Currently, we do
not do well if you do this because the RegionServer will spend all its time loading HFile indices over
and over again. If your working set is such that block cache does you no good, at least size the block
cache such that HFile indices will stay up in the cache (you can get a rough idea on the size you
need by surveying RegionServer UIs; you’ll see index block size accounted near the top of the
webpage).

9.3.3. Nagle’s or the small package problem

If a big 40ms or so occasional delay is seen in operations against HBase, try the Nagle’s setting. For
example, see the user mailing list thread, Inconsistent scan performance with caching set to 1, and
the issue cited therein where setting notcpdelay improved scan speeds. You might also see the
graphs on the tail of HBASE-7008 Set scanner caching to a better default where our Lars Hofhansl
tries various data sizes w/ Nagle’s on and off, measuring the effect.

9.3.4. Better Mean Time to Recover (MTTR)

This section is about configurations that will make servers come back faster after a fail. See the
Deveraj Das and Nicolas Liochon blog post Introduction to HBase Mean Time to Recover (MTTR) for
a brief introduction.

The issue HBASE-8354 forces Namenode into loop with lease recovery requests is messy, but it has
a bunch of good discussion toward the end on low timeouts and how to cause faster recovery,
including citation of fixes added to HDFS. Read the Varun Sharma comments. The suggested
configurations below are Varun’s suggestions distilled and tested. Make sure you are running on a
late-version HDFS so you have the fixes he refers to (and that he himself added to HDFS) that help
HBase MTTR (e.g. HDFS-3703, HDFS-3712, and HDFS-4791 — Hadoop 2 for sure has them and late
Hadoop 1 has some). Set the following in the RegionServer.

<property>
<name>hbase.lease.recovery.dfs.timeout</name>
<value>23000</value>
<description>How much time we allow elapse between calls to recover lease.
Should be larger than the dfs timeout.</description>
</property>
<property>
<name>dfs.client.socket-timeout</name>
<value>10000</value>
<description>Down the DFS timeout from 60 to 10 seconds.</description>
</property>

And on the NameNode/DataNode side, set the following to enable 'staleness' introduced in HDFS-
3703, HDFS-3912.

<property>
<name>dfs.client.socket-timeout</name>
<value>10000</value>
<description>Down the DFS timeout from 60 to 10 seconds.</description>
</property>
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>10000</value>
<description>Down the DFS timeout from 8 * 60 to 10 seconds.</description>
</property>
<property>
<name>ipc.client.connect.timeout</name>
<value>3000</value>
<description>Down from 60 seconds to 3.</description>
</property>
<property>
<name>ipc.client.connect.max.retries.on.timeouts</name>
<value>2</value>
<description>Down from 45 seconds to 3 (2 == 3 retries).</description>
</property>
<property>
<name>dfs.namenode.avoid.read.stale.datanode</name>
<value>true</value>
<description>Enable stale state in hdfs</description>
</property>
<property>
<name>dfs.namenode.stale.datanode.interval</name>
<value>20000</value>
<description>Down from default 30 seconds</description>
</property>
<property>
<name>dfs.namenode.avoid.write.stale.datanode</name>
<value>true</value>
<description>Enable stale state in hdfs</description>
</property>

9.3.5. JMX

JMX (Java Management Extensions) provides built-in instrumentation that enables you to monitor
and manage the Java VM. To enable monitoring and management from remote systems, you need
to set system property com.sun.management.jmxremote.port (the port number through which you
want to enable JMX RMI connections) when you start the Java VM. See the official documentation
for more information. Historically, besides the port mentioned above, JMX opens two additional
random TCP listening ports, which could lead to port conflict problems. (See HBASE-10289 for
details)

As an alternative, you can use the coprocessor-based JMX implementation provided by HBase. To
enable it, add below property in hbase-site.xml:

<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.JMXListener</value>
</property>

 DO NOT set com.sun.management.jmxremote.port for Java VM at the same time.

Currently it supports the Master and RegionServer Java VMs. By default, JMX listens on TCP port
10102; you can further configure the port using the properties below:

<property>
<name>regionserver.rmi.registry.port</name>
<value>61130</value>
</property>
<property>
<name>regionserver.rmi.connector.port</name>
<value>61140</value>
</property>

The registry port can be shared with connector port in most cases, so you only need to configure
regionserver.rmi.registry.port. However, if you want to use SSL communication, the 2 ports must
be configured to different values.

By default, password authentication and SSL communication are disabled. To enable password
authentication, you need to update hbase-env.sh as below:

export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.authenticate=true
\
-Dcom.sun.management.jmxremote.password.file=your_password_file
\
-Dcom.sun.management.jmxremote.access.file=your_access_file"

export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "


export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "

See example password/access file under $JRE_HOME/lib/management.

To enable SSL communication with password authentication, follow below steps:

#1. generate a key pair, stored in myKeyStore
keytool -genkey -alias jconsole -keystore myKeyStore

#2. export it to file jconsole.cert
keytool -export -alias jconsole -keystore myKeyStore -file jconsole.cert

#3. copy jconsole.cert to jconsole client machine, import it to jconsoleKeyStore
keytool -import -alias jconsole -keystore jconsoleKeyStore -file jconsole.cert

And then update hbase-env.sh like below:

export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=true
\
-Djavax.net.ssl.keyStore=/home/tianq/myKeyStore
\
-Djavax.net.ssl.keyStorePassword=your_password_in_step_1
\
-Dcom.sun.management.jmxremote.authenticate=true
\
-Dcom.sun.management.jmxremote.password.file=your_password_file
\
-Dcom.sun.management.jmxremote.access.file=your_access_file"

export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "


export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "

Finally start jconsole on the client using the key store:

jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore

To enable the HBase JMX implementation on Master, you also need to add below
 property in hbase-site.xml:

<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.JMXListener</value>
</property>

The corresponding properties for port configuration are master.rmi.registry.port (by default
10101) and master.rmi.connector.port (by default the same as the registry port).

Chapter 10. Dynamic Configuration
It is possible to change a subset of the configuration without requiring a server restart. In the HBase
shell, the operations update_config and update_all_config will prompt a server or all servers to
reload configuration.
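
For example, from the HBase shell (the server name below is an illustrative placeholder in the hostname,port,startcode form the shell expects):

```
hbase> update_config 'example1.example.org,16020,1584000000000'
hbase> update_all_config
```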

Only a subset of all configurations can currently be changed in the running server. Here are those
configurations:

Table 3. Configurations that support dynamic change

Key

hbase.ipc.server.fallback-to-simple-auth-allowed

hbase.cleaner.scan.dir.concurrent.size

hbase.regionserver.thread.compaction.large

hbase.regionserver.thread.compaction.small

hbase.regionserver.thread.split

hbase.regionserver.throughput.controller

hbase.regionserver.thread.hfilecleaner.throttle

hbase.regionserver.hfilecleaner.large.queue.size

hbase.regionserver.hfilecleaner.small.queue.size

hbase.regionserver.hfilecleaner.large.thread.count

hbase.regionserver.hfilecleaner.small.thread.count

hbase.regionserver.hfilecleaner.thread.timeout.msec

hbase.regionserver.hfilecleaner.thread.check.interval.msec

hbase.regionserver.flush.throughput.controller

hbase.hstore.compaction.max.size

hbase.hstore.compaction.max.size.offpeak

hbase.hstore.compaction.min.size

hbase.hstore.compaction.min

hbase.hstore.compaction.max

hbase.hstore.compaction.ratio

hbase.hstore.compaction.ratio.offpeak

hbase.regionserver.thread.compaction.throttle

hbase.hregion.majorcompaction

hbase.hregion.majorcompaction.jitter

hbase.hstore.min.locality.to.skip.major.compact

hbase.hstore.compaction.date.tiered.max.storefile.age.millis


hbase.hstore.compaction.date.tiered.incoming.window.min

hbase.hstore.compaction.date.tiered.window.policy.class

hbase.hstore.compaction.date.tiered.single.output.for.minor.compaction

hbase.hstore.compaction.date.tiered.window.factory.class

hbase.offpeak.start.hour

hbase.offpeak.end.hour

hbase.oldwals.cleaner.thread.size

hbase.oldwals.cleaner.thread.timeout.msec

hbase.oldwals.cleaner.thread.check.interval.msec

hbase.procedure.worker.keep.alive.time.msec

hbase.procedure.worker.add.stuck.percentage

hbase.procedure.worker.monitor.interval.msec

hbase.procedure.worker.stuck.threshold.msec

hbase.regions.slop

hbase.regions.overallSlop

hbase.balancer.tablesOnMaster

hbase.balancer.tablesOnMaster.systemTablesOnly

hbase.util.ip.to.rack.determiner

hbase.ipc.server.max.callqueue.length

hbase.ipc.server.priority.max.callqueue.length

hbase.ipc.server.callqueue.type

hbase.ipc.server.callqueue.codel.target.delay

hbase.ipc.server.callqueue.codel.interval

hbase.ipc.server.callqueue.codel.lifo.threshold

hbase.master.balancer.stochastic.maxSteps

hbase.master.balancer.stochastic.stepsPerRegion

hbase.master.balancer.stochastic.maxRunningTime

hbase.master.balancer.stochastic.runMaxSteps

hbase.master.balancer.stochastic.numRegionLoadsToRemember

hbase.master.loadbalance.bytable

hbase.master.balancer.stochastic.minCostNeedBalance

hbase.master.balancer.stochastic.localityCost

hbase.master.balancer.stochastic.rackLocalityCost


hbase.master.balancer.stochastic.readRequestCost

hbase.master.balancer.stochastic.writeRequestCost

hbase.master.balancer.stochastic.memstoreSizeCost

hbase.master.balancer.stochastic.storefileSizeCost

hbase.master.balancer.stochastic.regionReplicaHostCostKey

hbase.master.balancer.stochastic.regionReplicaRackCostKey

hbase.master.balancer.stochastic.regionCountCost

hbase.master.balancer.stochastic.primaryRegionCountCost

hbase.master.balancer.stochastic.moveCost

hbase.master.balancer.stochastic.maxMovePercent

hbase.master.balancer.stochastic.tableSkewCost

hbase.master.regions.recovery.check.interval

hbase.regions.recovery.store.file.ref.count

Upgrading
You cannot skip major versions when upgrading. If you are upgrading from version 0.98.x to 2.x,
you must first go from 0.98.x to 1.2.x and then go from 1.2.x to 2.x.

Review Apache HBase Configuration, in particular Hadoop. Familiarize yourself with Support and
Testing Expectations.

Chapter 11. HBase version number and
compatibility
11.1. Aspirational Semantic Versioning
Starting with the 1.0.0 release, HBase is working towards Semantic Versioning for its release
versioning. In summary:

Given a version number MAJOR.MINOR.PATCH, increment the:


• MAJOR version when you make incompatible API changes,

• MINOR version when you add functionality in a backwards-compatible manner, and

• PATCH version when you make backwards-compatible bug fixes.

• Additional labels for pre-release and build metadata are available as extensions to the
MAJOR.MINOR.PATCH format.
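The MAJOR.MINOR.PATCH scheme above can be illustrated with a small shell sketch that splits a version string into its components (the version value here is illustrative):

```shell
# Split a MAJOR.MINOR.PATCH version string into its components
# using POSIX parameter expansion.
v="2.3.0"
major=${v%%.*}; rest=${v#*.}
minor=${rest%%.*}; patch=${rest#*.}
echo "MAJOR=$major MINOR=$minor PATCH=$patch"   # prints MAJOR=2 MINOR=3 PATCH=0
```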

Compatibility Dimensions
In addition to the usual API versioning considerations HBase has other compatibility dimensions
that we need to consider.

Client-Server wire protocol compatibility


• Allows updating client and server out of sync.

• We could only allow upgrading the server first. I.e. the server would be backward compatible to
an old client, that way new APIs are OK.

• Example: A user should be able to use an old client to connect to an upgraded cluster.

Server-Server protocol compatibility


• Servers of different versions can co-exist in the same cluster.

• The wire protocol between servers is compatible.

• Workers for distributed tasks, such as replication and log splitting, can co-exist in the same
cluster.

• Dependent protocols (such as using ZK for coordination) will also not be changed.

• Example: A user can perform a rolling upgrade.

File format compatibility


• Support file formats backward and forward compatible

• Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase
upgrade. User can downgrade to the older version and everything will continue to work.

Client API compatibility


• Allow changing or removing existing client APIs.

• An API needs to be deprecated for a whole major version before we will change/remove it.

◦ An example: An API was deprecated in 2.0.1 and will be marked for deletion in 4.0.0. On the
other hand, an API deprecated in 2.0.0 can be removed in 3.0.0.

◦ Occasionally mistakes are made and internal classes are marked with a higher access level
than they should be. In these rare circumstances, we will accelerate the deprecation schedule
to the next major version (i.e., deprecated in 2.2.x, marked IA.Private in 3.0.0). Such changes
are communicated and explained via release note in Jira.

• APIs available in a patch version will be available in all later patch versions. However, new APIs
may be added which will not be available in earlier patch versions.
• New APIs introduced in a patch version will only be added in a source compatible way [1]: i.e.
code that implements public APIs will continue to compile.

◦ Example: A user using a newly deprecated API does not need to modify application code
with HBase API calls until the next major version.

Client Binary compatibility


• Client code written to APIs available in a given patch release can run unchanged (no
recompilation needed) against the new jars of later patch versions.

• Client code written to APIs available in a given patch release might not run against the old jars
from an earlier patch version.

◦ Example: Old compiled client code will work unchanged with the new jars.

• If a Client implements an HBase Interface, a recompile MAY be required upgrading to a newer
minor version (See release notes for warning about incompatible changes). All effort will be
made to provide a default implementation so this case should not arise.

Server-Side Limited API compatibility (taken from Hadoop)


• Internal APIs are marked as Stable, Evolving, or Unstable

• This implies binary compatibility for coprocessors and plugins (pluggable classes, including
replication) as long as these are only using marked interfaces/classes.

• Example: Old compiled Coprocessor, Filter, or Plugin code will work unchanged with the new
jars.

Dependency Compatibility
• An upgrade of HBase will not require an incompatible upgrade of a dependent project, except
for Apache Hadoop.

• An upgrade of HBase will not require an incompatible upgrade of the Java runtime.

• Example: Upgrading HBase to a version that supports Dependency Compatibility won’t require
that you upgrade your Apache ZooKeeper service.

• Example: If your current version of HBase supported running on JDK 8, then an upgrade to a
version that supports Dependency Compatibility will also run on JDK 8.

Hadoop Versions
Previously, we tried to maintain dependency compatibility for the underlying Hadoop
service but over the last few years this has proven untenable. While the HBase
project attempts to maintain support for older versions of Hadoop, we drop the
"supported" designator for minor versions that fail to continue to see releases.
Additionally, the Hadoop project has its own set of compatibility guidelines, which
means in some cases having to update to a newer supported minor release might
break some of our compatibility promises.

Operational Compatibility
• Metric changes

• Behavioral changes of services

• JMX APIs exposed via the /jmx/ endpoint

Summary
• A patch upgrade is a drop-in replacement. Any change that is not Java binary and source
compatible [2] would not be allowed. Downgrading versions within patch releases may not be
compatible.

• A minor upgrade requires no application/client code modification. Ideally it would be a drop-in
replacement but client code, coprocessors, filters, etc might have to be recompiled if new jars
are used.

• A major upgrade allows the HBase community to make breaking changes.

Table 4. Compatibility Matrix [4]

                                        Major   Minor   Patch
Client-Server wire Compatibility          N       Y       Y
Server-Server Compatibility               N       Y       Y
File Format Compatibility [3]             N       Y       Y
Client API Compatibility                  N       Y       Y
Client Binary Compatibility               N       N       Y
Server-Side Limited API Compatibility
  Stable                                  N       Y       Y
  Evolving                                N       N       Y
  Unstable                                N       N       N
Dependency Compatibility                  N       Y       Y
Operational Compatibility                 N       N       Y

11.1.1. HBase API Surface

HBase has a lot of API points, but for the compatibility matrix above, we differentiate between
Client API, Limited Private API, and Private API. HBase uses Apache Yetus Audience Annotations to
guide downstream expectations for stability.

• InterfaceAudience (javadocs): captures the intended audience, possible values include:

◦ Public: safe for end users and external projects

◦ LimitedPrivate: used for internals we expect to be pluggable, such as coprocessors

◦ Private: strictly for use within HBase itself. Classes which are defined as IA.Private may be
used as parameters or return values for interfaces which are declared IA.LimitedPrivate.
Treat the IA.Private object as opaque; do not try to access its methods or fields directly.

• InterfaceStability (javadocs): describes what types of interface changes are permitted. Possible
values include:

◦ Stable: the interface is fixed and is not expected to change

◦ Evolving: the interface may change in future minor versions

◦ Unstable: the interface may change at any time

Please keep in mind the following interactions between the InterfaceAudience and
InterfaceStability annotations within the HBase project:

• IA.Public classes are inherently stable and adhere to our stability guarantees relating to the
type of upgrade (major, minor, or patch).

• IA.LimitedPrivate classes should always be annotated with one of the given InterfaceStability
values. If they are not, you should presume they are IS.Unstable.

• IA.Private classes should be considered implicitly unstable, with no guarantee of stability
between releases.

HBase Client API


HBase Client API consists of all the classes or methods that are marked with
InterfaceAudience.Public interface. All main classes in hbase-client and dependent modules
have either InterfaceAudience.Public, InterfaceAudience.LimitedPrivate, or
InterfaceAudience.Private marker. Not all classes in other modules (hbase-server, etc) have the
marker. If a class is not annotated with one of these, it is assumed to be a
InterfaceAudience.Private class.

HBase LimitedPrivate API


The LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those
consumers are coprocessors, phoenix, replication endpoint implementations, or similar. At this
point, HBase only guarantees source and binary compatibility for these interfaces between
patch versions.

HBase Private API
All classes annotated with InterfaceAudience.Private or all classes that do not have the
annotation are for HBase internal use only. The interfaces and method signatures can change at
any point in time. If you are relying on a particular interface that is marked Private, you should
open a jira to propose changing the interface to be Public or LimitedPrivate, or an interface
exposed for this purpose.

Binary Compatibility
When we say two HBase versions are compatible, we mean that the versions are wire and binary
compatible. Compatible HBase versions means that clients can talk to compatible but differently
versioned servers. It means too that you can just swap out the jars of one version and replace them
with the jars of another, compatible version and all will just work. Unless otherwise specified,
HBase point versions are (mostly) binary compatible. You can safely do rolling upgrades between
binary compatible versions; i.e. across maintenance releases: e.g. from 1.4.4 to 1.4.6. See Does
compatibility between versions also mean binary compatibility? discussion on the HBase dev
mailing list.

11.2. Rolling Upgrades


A rolling upgrade is the process by which you update the servers in your cluster a server at a time.
You can rolling upgrade across HBase versions if they are binary or wire compatible. See Rolling
Upgrade Between Versions that are Binary/Wire Compatible for more on what this means. Coarsely,
a rolling upgrade is a graceful stop each server, update the software, and then restart. You do this
for each server in the cluster. Usually you upgrade the Master first and then the RegionServers. See
Rolling Restart for tools that can help use the rolling upgrade process.

For example, in the below, HBase was symlinked to the actual HBase install. On upgrade, before
running a rolling restart over the cluster, we changed the symlink to point at the new HBase
software version and then ran

$ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config ~/conf_hbase

The rolling-restart script will first gracefully stop and restart the master, and then each of the
RegionServers in turn. Because the symlink was changed, on restart the server will come up using
the new HBase version. Check logs for errors as the rolling upgrade proceeds.
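The symlink swap described above can be sketched as follows. The paths and version numbers here are illustrative, and a scratch directory stands in for a real install:

```shell
# Two unpacked releases plus an 'hbase' symlink that scripts reference.
mkdir -p /tmp/hbase-symlink-demo/hbase-1.4.4 /tmp/hbase-symlink-demo/hbase-2.3.0
cd /tmp/hbase-symlink-demo
ln -sfn hbase-1.4.4 hbase   # the cluster currently runs the old version
ln -sfn hbase-2.3.0 hbase   # repoint at the new version before the rolling restart
readlink hbase              # prints hbase-2.3.0
```

After the swap, each server restarted by rolling-restart.sh picks up the new binaries from the same stable path.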

Rolling Upgrade Between Versions that are Binary/Wire Compatible


Unless otherwise specified, HBase minor versions are binary compatible. You can do a rolling
upgrade between HBase point versions. For example, you can go from 1.4.4 to 1.4.6 by doing a
rolling upgrade across the cluster, replacing the 1.4.4 binary with a 1.4.6 binary.

In the minor version-particular sections below, we call out where the versions are wire/protocol
compatible and in this case, it is also possible to do a rolling upgrade.

[1] See 'Source Compatibility' https://blogs.oracle.com/darcy/entry/kinds_of_compatibility


[2] See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.

[3] Running an offline upgrade tool without downgrade might be needed. We will typically
only support migrating data from major version X to major version X+1.
[4] Note that this indicates what could break, not that it will break. We will/should add specifics in our release notes.

Chapter 12. Rollback
Sometimes things don’t go as planned when attempting an upgrade. This section explains how to
perform a rollback to an earlier HBase release. Note that this should only be needed between Major
and some Minor releases. You should always be able to downgrade between HBase Patch releases
within the same Minor version. These instructions may require you to take steps before you start
the upgrade process, so be sure to read through this section beforehand.

12.1. Caveats
Rollback vs Downgrade
This section describes how to perform a rollback on an upgrade between HBase minor and major
versions. In this document, rollback refers to the process of taking an upgraded cluster and
restoring it to the old version while losing all changes that have occurred since upgrade. By contrast,
a cluster downgrade would restore an upgraded cluster to the old version while maintaining any
data written since the upgrade. We currently only offer instructions to rollback HBase clusters.
Further, rollback only works when these instructions are followed prior to performing the upgrade.

When these instructions talk about rollback vs downgrade of prerequisite cluster services (i.e.
HDFS), you should treat leaving the service version the same as a degenerate case of downgrade.

Replication
Unless you are doing an all-service rollback, the HBase cluster will lose any configured peers for
HBase replication. If your cluster is configured for HBase replication, then prior to following these
instructions you should document all replication peers. After performing the rollback you should
then add each documented peer back to the cluster. For more information on enabling HBase
replication, listing peers, and adding a peer see Managing and Configuring Cluster Replication. Note
also that data written to the cluster since the upgrade may or may not have already been replicated
to any peers. Determining which, if any, peers have seen replication data as well as rolling back the
data in those peers is out of the scope of this guide.

Data Locality
Unless you are doing an all-service rollback, going through a rollback procedure will likely destroy
all locality for Region Servers. You should expect degraded performance until after the cluster has
had time to go through compactions to restore data locality. Optionally, you can force a compaction
to speed this process up at the cost of generating cluster load.

Configurable Locations
The instructions below assume default locations for the HBase data directory and the HBase znode.
Both of these locations are configurable and you should verify the value used in your cluster before
proceeding. In the event that you have a different value, just replace the default with the one found
in your configuration.

• The HBase data directory is configured via the key 'hbase.rootdir' and has a default value of '/hbase'.

• The HBase znode is configured via the key 'zookeeper.znode.parent' and has a default value of '/hbase'.
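Expressed as hbase-site.xml entries, the defaults assumed by these instructions look like the following; check these keys in your own configuration before proceeding:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>/hbase</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
```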

12.2. All service rollback
If you will be performing a rollback of both the HDFS and ZooKeeper services, then HBase’s data
will be rolled back in the process.

Requirements
• Ability to rollback HDFS and ZooKeeper

Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to
use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in
the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance
instead of within the same instance.

Performing a rollback
1. Stop HBase

2. Perform a rollback for HDFS and ZooKeeper (HBase should remain stopped)

3. Change the installed version of HBase to the previous version

4. Start HBase

5. Verify HBase contents—use the HBase shell to list tables and scan some known values.

12.3. Rollback after HDFS rollback and ZooKeeper downgrade
If you will be rolling back HDFS but going through a ZooKeeper downgrade, then HBase will be in
an inconsistent state. You must ensure the cluster is not started until you complete this process.

Requirements
• Ability to rollback HDFS

• Ability to downgrade ZooKeeper

Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to
use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in
the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance
instead of within the same instance.

Performing a rollback
1. Stop HBase

2. Perform a rollback for HDFS and a downgrade for ZooKeeper (HBase should remain stopped)

3. Change the installed version of HBase to the previous version

4. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently
destroy all replication peers. Please see the section on HBase Replication under Caveats for
more information.

Clean HBase information out of ZooKeeper

[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...

5. Start HBase

6. Verify HBase contents—use the HBase shell to list tables and scan some known values.

12.4. Rollback after HDFS downgrade


If you will be performing an HDFS downgrade, then you’ll need to follow these instructions
regardless of whether ZooKeeper goes through rollback, downgrade, or reinstallation.

Requirements
• Ability to downgrade HDFS

• Pre-upgrade cluster must be able to run MapReduce jobs

• HDFS super user access

• Sufficient space in HDFS for at least two copies of the HBase data directory

Before upgrade
Before beginning the upgrade process, you must take a complete backup of HBase’s backing data.
The following instructions cover backing up the data within the current HDFS instance.
Alternatively, you can use the distcp command to copy the data to another HDFS cluster.

1. Stop the HBase cluster

2. Copy the HBase data directory to a backup location using the distcp command as the HDFS
super user (shown below on a security enabled cluster)

Using distcp to backup the HBase data directory

[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab [email protected]
[hpnewton@gateway_node.example.com ~]$ hadoop distcp /hbase /hbase-pre-upgrade-backup

3. Distcp will launch a mapreduce job to handle copying the files in a distributed fashion. Check
the output of the distcp command to ensure this job completed successfully.

Performing a rollback
1. Stop HBase

2. Perform a downgrade for HDFS and a downgrade/rollback for ZooKeeper (HBase should remain
stopped)

3. Change the installed version of HBase to the previous version

4. Restore the HBase data directory from prior to the upgrade as the HDFS super user (shown
below on a security enabled cluster). If you backed up your data on another HDFS cluster
instead of locally, you will need to use the distcp command to copy it back to the current HDFS
cluster.

Restore the HBase data directory

[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab [email protected]
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase /hbase-upgrade-rollback
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase-pre-upgrade-backup /hbase

5. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently
destroy all replication peers. Please see the section on HBase Replication under Caveats for
more information.

Clean HBase information out of ZooKeeper

[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...

6. Start HBase

7. Verify HBase contents–use the HBase shell to list tables and scan some known values.

Chapter 13. Upgrade Paths
13.1. Upgrade from 2.0.x-2.2.x to 2.3+
There is no special consideration upgrading to hbase-2.3.x from earlier versions. From 2.2.x, it
should be rolling upgradeable. From 2.1.x or 2.0.x, you will need to clear the Upgrade from 2.0 or
2.1 to 2.2+ hurdle first.

13.1.1. Upgraded ZooKeeper Dependency Version

Our dependency on Apache ZooKeeper has been upgraded to 3.5.7 (HBASE-24132), as 3.4.x is EOL.
The newer 3.5.x client is compatible with the older 3.4.x server. However, if you’re using HBase in
stand-alone mode and perform an in-place upgrade, there are some upgrade steps documented by
the ZooKeeper community. This doesn’t impact a production deployment, but would impact a
developer’s local environment.

13.1.2. New In-Master Procedure Store

Of note, HBase 2.3.0 changes the in-Master Procedure Store implementation from a dedicated
custom store (see MasterProcWAL) to a standard HBase Region (HBASE-23326). The
migration from the old to the new format is run automatically by the new 2.3.0 Master on startup. The old
MasterProcWALs dir, which hosted the old custom implementation files in ${hbase.rootdir}, is
deleted on successful migration. A new MasterProc sub-directory replaces it to host the Store files
and WALs for the new Procedure Store in-Master Region. The in-Master Region is unusual in that it
writes to an alternate location at ${hbase.rootdir}/MasterProc rather than under
${hbase.rootdir}/data in the filesystem and the special Procedure Store in-Master Region is hidden
from all clients other than the active Master itself. Otherwise, it is like any other with the Master
process running flushes and compactions, archiving WALs when over-flushed, and so on. Its files
are readable by standard Region and Store file tooling for triage and analysis as long as they are
pointed to the appropriate location in the filesystem.

13.2. Upgrade from 2.0 or 2.1 to 2.2+


HBase 2.2+ uses a new Procedure form of assigning/unassigning/moving Regions. It does not process
HBase 2.1 and 2.0’s Unassign/Assign Procedure types. Upgrade requires that we first drain the
Master Procedure Store of old style Procedures before starting the new 2.2 Master. So you need to
make sure that before you kill the old version (2.0 or 2.1) Master, there is no region in transition.
Once the new version (2.2+) Master is up, you can rolling upgrade RegionServers one by one.

There is a safer way if you are running a 2.1.1+ or 2.0.3+ cluster. It needs four steps to
upgrade the Master.

1. Shutdown both active and standby Masters (your cluster will continue to serve reads and
writes without interruption).

2. Set the property hbase.procedure.upgrade-to-2-2 to true in hbase-site.xml for the Master, and
start only one Master, still using the 2.1.1+ (or 2.0.3+) version.

3. Wait until the Master quits. Confirm that there is a 'READY TO ROLLING UPGRADE' message in
the Master log as the cause of the shutdown. The Procedure Store is now empty.

4. Start new Masters with the new 2.2+ version.

Then you can rolling upgrade RegionServers one by one. See HBASE-21075 for more details.
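The property from step 2 would be set in the Master's hbase-site.xml like so:

```xml
<property>
  <name>hbase.procedure.upgrade-to-2-2</name>
  <value>true</value>
</property>
```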

13.3. Upgrading from 1.x to 2.x


In this section we will first call out significant changes compared to the prior stable HBase release
and then go over the upgrade process. Be sure to read the former with care so you avoid surprises.

13.3.1. Changes of Note!

First we’ll cover deployment / operational changes that you might hit when upgrading to HBase
2.0+. After that we’ll call out changes for downstream applications. Please note that Coprocessors
are covered in the operational section. Also note that this section is not meant to convey
information about new features that may be of interest to you. For a complete summary of changes,
please see the CHANGES.txt file in the source release artifact for the version you are planning to
upgrade to.

Update to basic prerequisite minimums in HBase 2.0+


As noted in the section Basic Prerequisites, HBase 2.0+ requires a minimum of Java 8 and Hadoop
2.6. The HBase community recommends ensuring you have already completed any needed
upgrades in prerequisites prior to upgrading your HBase version.

HBCK must match HBase server version


You must not use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied
to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+
cluster will destructively alter said cluster in unrecoverable ways.

As of HBase 2.0, HBCK (A.K.A HBCK1 or hbck1) is a read-only tool that can report the status of some
non-public system internals but will often misread state because it does not understand the
workings of hbase2.

To read about HBCK’s replacement, see HBase HBCK2 in Apache HBase Operational Management.

Related, before you upgrade, ensure that hbck1 reports no INCONSISTENCIES. Fixing
hbase1-type inconsistencies post-upgrade is an involved process.

Configuration settings no longer in HBase 2.0+


The following configuration settings are no longer applicable or available. For details, please see
the detailed release notes.

• hbase.config.read.zookeeper.config (see ZooKeeper configs no longer read from zoo.cfg for
migration details)

• hbase.zookeeper.useMulti (HBase now always uses ZK’s multi functionality)

• hbase.rpc.client.threads.max

• hbase.rpc.client.nativetransport

• hbase.fs.tmp.dir

• hbase.bucketcache.combinedcache.enabled

• hbase.bucketcache.ioengine no longer supports the 'heap' value.

• hbase.bulkload.staging.dir

• hbase.balancer.tablesOnMaster wasn’t removed, strictly speaking, but its meaning has
fundamentally changed and users should not set it. See the section "Master hosting regions"
feature broken and unsupported for details.

• hbase.master.distributed.log.replay See the section "Distributed Log Replay" feature broken and
removed for details

• hbase.regionserver.disallow.writes.when.recovering See the section "Distributed Log Replay"
feature broken and removed for details

• hbase.regionserver.wal.logreplay.batch.size See the section "Distributed Log Replay" feature
broken and removed for details

• hbase.master.catalog.timeout

• hbase.regionserver.catalog.timeout

• hbase.metrics.exposeOperationTimes

• hbase.metrics.showTableName

• hbase.online.schema.update.enable (HBase now always supports this)

• hbase.thrift.htablepool.size.max

Configuration properties that were renamed in HBase 2.0+


The following properties have been renamed. Attempts to set the old property will be ignored at
run time.

Table 5. Renamed properties

Old name New name

hbase.rpc.server.nativetransport hbase.netty.nativetransport

hbase.netty.rpc.server.worker.count hbase.netty.worker.count

hbase.hfile.compactions.discharger.interval hbase.hfile.compaction.discharger.interval

hbase.hregion.percolumnfamilyflush.size.lower.bound    hbase.hregion.percolumnfamilyflush.size.lower.bound.min

Configuration settings with different defaults in HBase 2.0+


The following configuration settings changed their default value. Where applicable, the value to set
to restore the behavior of HBase 1.2 is given.

• hbase.security.authorization now defaults to false. Set to true to restore the same behavior as the
previous default.

• hbase.client.retries.number is now set to 10. Previously it was 35. Downstream users are
advised to use client timeouts as described in section Timeout settings instead.

• hbase.client.serverside.retries.multiplier is now set to 3. Previously it was 10. Downstream users
are advised to use client timeouts as described in section Timeout settings instead.

• hbase.master.fileSplitTimeout is now set to 10 minutes. Previously it was 30 seconds.

• hbase.regionserver.logroll.multiplier is now set to 0.5. Previously it was 0.95. This change is tied
with the following doubling of block size. Combined, these two configuration changes should
make for WALs of about the same size as those in hbase-1.x but there should be less incidence of
small blocks because we fail to roll the WAL before we hit the blocksize threshold. See HBASE-
19148 for discussion.

• hbase.regionserver.hlog.blocksize defaults to 2x the HDFS default block size for the WAL dir.
Previously it was equal to the HDFS default block size for the WAL dir.

• hbase.client.start.log.errors.counter changed to 5. Previously it was 9.

• hbase.ipc.server.callqueue.type changed to 'fifo'. In HBase versions 1.0 - 1.2 it was 'deadline'. In
prior and later 1.x versions it already defaults to 'fifo'.

• hbase.hregion.memstore.chunkpool.maxsize is 1.0 by default. Previously it was 0.0. Effectively,
this means previously we would not use a chunk pool when our memstore is onheap and now
we will. See the section Long GC pauses for more information about the MSLAB chunk pool.

• hbase.master.cleaner.interval is now set to 10 minutes. Previously it was 1 minute.

• hbase.master.procedure.threads will now default to 1/4 of the number of available CPUs, but
not less than 16 threads. Previously it would be number of threads equal to number of CPUs.

• hbase.hstore.blockingStoreFiles is now 16. Previously it was 10.

• hbase.http.max.threads is now 16. Previously it was 10.

• hbase.client.max.perserver.tasks is now 2. Previously it was 5.

• hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.

• hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was
IncreasingToUpperBoundRegionSplitPolicy.

• replication.source.ratio is now 0.5. Previously it was 0.1.
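Where you need a pre-2.0 default back, set the property explicitly in hbase-site.xml. For example, to restore the previous authorization behavior called out above:

```xml
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
```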

"Master hosting regions" feature broken and unsupported


The feature "Master acts as region server" and associated follow-on work available in HBase 1.y is
non-functional in HBase 2.y and should not be used in a production setting due to deadlock on
Master initialization. Downstream users are advised to treat related configuration settings as
experimental and the feature as inappropriate for production settings.

A brief summary of related changes:

• Master no longer carries regions by default

• hbase.balancer.tablesOnMaster is a boolean, default false (if it holds an HBase 1.x list of tables,
will default to false)

• hbase.balancer.tablesOnMaster.systemTablesOnly is a boolean to keep user tables off the master;
default false

• those wishing to replicate old list-of-servers config should deploy a stand-alone RegionServer
process and then rely on Region Server Groups

"Distributed Log Replay" feature broken and removed


The Distributed Log Replay feature was broken and has been removed from HBase 2.y+. As a
consequence all related configs, metrics, RPC fields, and logging have also been removed. Note that
this feature was found to be unreliable in the run up to HBase 1.0, defaulted to being unused, and
was effectively removed in HBase 1.2.0 when we started ignoring the config that turns it on
(HBASE-14465). If you are currently using the feature, be sure to perform a clean shutdown, ensure
all DLR work is complete, and disable the feature prior to upgrading.

prefix-tree encoding removed


The prefix-tree encoding was removed from HBase 2.0.0 (HBASE-19179). It was (late!) deprecated in
hbase-1.2.7, hbase-1.4.0, and hbase-1.3.2.

This feature was removed because it was not being actively maintained. If you are interested in
reviving this facility, which improved random read latencies at the expense of slower writes, write
the HBase developers list at dev at hbase dot apache dot org.

The prefix-tree encoding needs to be removed from all tables before upgrading to HBase 2.0+. To do
that first you need to change the encoding from PREFIX_TREE to something else that is supported in
HBase 2.0. After that you have to major compact the tables that were using PREFIX_TREE encoding
before. To check which column families are using incompatible data block encoding you can use
Pre-Upgrade Validator.
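A sketch of those two steps in the HBase shell follows; the table and family names are hypothetical, and any encoding supported in HBase 2.0 (such as FAST_DIFF or NONE) can replace PREFIX_TREE:

```
hbase> alter 'my_table', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF'}
hbase> major_compact 'my_table'
```

Wait for the major compaction to finish rewriting all store files before starting the upgrade.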

Changed metrics
The following metrics have changed names:

• Metrics previously published under the name "AssignmentManger" [sic] are now published
under the name "AssignmentManager"

The following metrics have changed their meaning:

• The metric 'blockCacheEvictionCount' published on a per-region server basis no longer includes
  blocks removed from the cache due to the invalidation of the hfiles they are from (e.g. via
  compaction).

• The metric 'totalRequestCount' increments once per request; previously it incremented by the
number of Actions carried in the request; e.g. if a request was a multi made of four Gets and two
Puts, we’d increment 'totalRequestCount' by six; now we increment by one regardless. Expect to
see lower values for this metric in hbase-2.0.0.

• The 'readRequestCount' now counts only reads that return a non-empty row, where older versions
  incremented 'readRequestCount' whether or not a Result was returned. This change will flatten the
  profile of the read-requests graphs if many requests target non-existent rows. A YCSB read-heavy
  workload can do this depending on how the database was loaded.

The following metrics have been removed:

• Metrics related to the Distributed Log Replay feature are no longer present. They were
  previously found in the region server context under the name 'replay'. See the section
  "Distributed Log Replay" feature broken and removed for details.

The following metrics have been added:

• 'totalRowActionRequestCount' is a count of region row actions summing reads and writes.

Changed logging
HBase-2.0.0 now uses slf4j as its logging frontend. Previously, we used log4j (1.2). For most users
the transition should be seamless; slf4j does a good job interpreting log4j.properties logging
configuration files, such that you should not notice any difference in your log system emissions.

That said, your log4j.properties may need freshening. See HBASE-20351 for an example, where a stale
log configuration file manifested as netty configuration being dumped at DEBUG level as preamble on
every shell command invocation.

ZooKeeper configs no longer read from zoo.cfg


HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related configuration settings. If
you previously relied on the 'hbase.config.read.zookeeper.config' config for this functionality, you
should migrate any needed settings to the hbase-site.xml file while adding the prefix
'hbase.zookeeper.property.' to each property name.
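For example, a ZooKeeper client port that was previously read from zoo.cfg would be carried in
hbase-site.xml as follows (port value illustrative):

```xml
<!-- formerly "clientPort=2181" in zoo.cfg -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```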

Changes in permissions
The following permission related changes either altered semantics or defaults:

• Permissions granted to a user now merge with existing permissions for that user, rather than
over-writing them. (see the release note on HBASE-17472 for details)

• Region Server Group commands (added in 1.4.0) now require admin privileges.

Most Admin APIs don’t work against an HBase 2.0+ cluster from pre-HBase 2.0 clients
A number of admin commands are known to not work when used from a pre-HBase 2.0 client. This
includes an HBase Shell that has the library jars from pre-HBase 2.0. You will need to plan for an
outage of use of admin APIs and commands until you can also update to the needed client version.

The following client operations do not work against HBase 2.0+ cluster when executed from a pre-
HBase 2.0 client:

• list_procedures

• split

• merge_region

• list_quotas

• enable_table_replication

• disable_table_replication

• Snapshot related commands

Deprecated in 1.0 admin commands have been removed.


The following commands that were deprecated in 1.0 have been removed. Where applicable the
replacement command is listed.

• The 'hlog' command has been removed. Downstream users should rely on the 'wal' command
instead.

Region Server memory consumption changes.


Users upgrading from versions prior to HBase 1.4 should read the instructions in the section Region
Server memory consumption changes.

Additionally, HBase 2.0 has changed how memstore memory is tracked for flushing decisions.
Previously, both the data size and overhead for storage were used to calculate utilization against
the flush threshold. Now, only data size is used to make these per-region decisions. Globally, the
addition of the storage overhead is used to make decisions about forced flushes.
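The difference can be sketched with hypothetical sizes; this is not HBase code, just an
illustration of the comparison described above:

```ruby
# Hypothetical memstore sizes for one region (all values illustrative).
data_size       = 100 * 1024 * 1024  # bytes of cell data in the memstore
heap_overhead   =  34 * 1024 * 1024  # object/array header accounting
flush_threshold = 128 * 1024 * 1024  # e.g. hbase.hregion.memstore.flush.size

# Pre-2.0: data plus overhead counted toward the per-region flush threshold.
flushes_pre_2_0 = (data_size + heap_overhead) >= flush_threshold

# 2.0+: only the data size drives the per-region decision; the overhead still
# counts toward the global forced-flush accounting.
flushes_2_0 = data_size >= flush_threshold
```

With these numbers, the same region would have flushed under 1.x accounting but not under the 2.0
per-region rule.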

Web UI for splitting and merging operate on row prefixes


Previously, the Web UI included functionality on table status pages to merge or split based on an
encoded region name. In HBase 2.0, this functionality instead works by taking a row prefix.

Special upgrading for Replication users from pre-HBase 1.4


Users running versions of HBase prior to the 1.4.0 release that make use of replication should be
sure to read the instructions in the section Replication peer’s TableCFs config.

HBase shell changes


The HBase shell command relies on a bundled JRuby instance. This bundled JRuby has been updated
from version 1.6.8 to version 9.1.10.0. This represents a change from Ruby 1.8 to Ruby 2.3.3, which
introduces non-compatible language changes for user scripts.

The HBase shell command now ignores the '--return-values' flag that was present in early HBase 1.4
releases. Instead the shell always behaves as though that flag were passed. If you wish to avoid
having expression results printed in the console you should alter your IRB configuration as noted in
the section irbrc.

Coprocessor APIs have changed in HBase 2.0+


All Coprocessor APIs have been refactored to improve supportability around binary API
compatibility for future versions of HBase. If you or applications you rely on have custom HBase
coprocessors, you should read the release notes for HBASE-18169 for details of changes you will
need to make prior to upgrading to HBase 2.0+.

For example, if you had a BaseRegionObserver in HBase 1.2 then at a minimum you will need to
update it to implement both RegionObserver and RegionCoprocessor and add the method

...
@Override
public Optional<RegionObserver> getRegionObserver() {
return Optional.of(this);
}
...
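Putting it together, a minimal port of a 1.x observer to the 2.0 API might look like the following
sketch (the class name is hypothetical; the interfaces and method are from the HBase 2.0
coprocessor API):

```java
import java.util.Optional;

import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

// Hypothetical observer: in 1.x this would have extended BaseRegionObserver.
public class ExampleRegionObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    // Expose this instance's observer hooks to the coprocessor framework.
    return Optional.of(this);
  }
  // 1.x preXXX/postXXX overrides move here, now overriding RegionObserver's
  // default methods instead of Base*Observer methods.
}
```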

HBase 2.0+ can no longer write HFile v2 files.


HBase has simplified our internal HFile handling. As a result, we can no longer write HFile versions
earlier than the default of version 3. Upgrading users should ensure that hfile.format.version is not
set to 2 in hbase-site.xml before upgrading. Failing to do so will cause Region Server failure. HBase
can still read HFiles written in the older version 2 format.

HBase 2.0+ can no longer read Sequence File based WAL files.
HBase can no longer read the deprecated WAL files written in the Apache Hadoop Sequence File
format. The hbase.regionserver.hlog.reader.impl and hbase.regionserver.hlog.writer.impl
configuration entries should be set to use the Protobuf based WAL reader / writer classes. This
implementation has been the default since HBase 0.96, so legacy WAL files should not be a concern
for most downstream users.

A clean cluster shutdown should ensure there are no WAL files. If you are unsure of a given WAL
file’s format you can use the hbase wal command to parse files while the HBase cluster is offline. In
HBase 2.0+, this command will not be able to read a Sequence File based WAL. For more
information on the tool see the section WALPrettyPrinter.
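For example (path hypothetical), printing WAL entries with their cell values while the cluster is
down might look like:

```shell
$ ./bin/hbase wal -p /hbase/WALs/example-host/example-wal-file
```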

Change in behavior for filters


The Filter ReturnCode NEXT_ROW has been redefined as skipping to the next row in the current
family, not to the next row across all families. This is more reasonable, because ReturnCode is a
concept at the store level, not the region level.

Downstream HBase 2.0+ users should use the shaded client


Downstream users are strongly urged to rely on the Maven coordinates org.apache.hbase:hbase-
shaded-client for their runtime use. This artifact contains all the needed implementation details for
talking to an HBase cluster while minimizing the number of third party dependencies exposed.

Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g.
o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public
API. Those classes are included so that they can be altered to use the same relocated third party
dependencies as the rest of the HBase client code. In the event that you need to also use Hadoop in
your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
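For Maven builds, depending on the shaded client looks like the following (the version shown is
illustrative; use the release you are deploying):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>2.3.0</version>
</dependency>
```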

Downstream HBase 2.0+ users of MapReduce must switch to new artifact


Downstream users of HBase’s integration for Apache Hadoop MapReduce must switch to relying on
the org.apache.hbase:hbase-shaded-mapreduce module for their runtime use. Historically,
downstream users relied on either the org.apache.hbase:hbase-server or org.apache.hbase:hbase-
shaded-server artifacts for these classes. Both uses are no longer supported and in the vast majority
of cases will fail at runtime.

Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g.
o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public
API. Those classes are included so that they can be altered to use the same relocated third party
dependencies as the rest of the HBase client code. In the event that you need to also use Hadoop in
your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.

Significant changes to runtime classpath


A number of internal dependencies for HBase were updated or removed from the runtime
classpath. Downstream client users who do not follow the guidance in Downstream HBase 2.0+
users should use the shaded client will have to examine the set of dependencies Maven pulls in for
impact. Downstream users of LimitedPrivate Coprocessor APIs will need to examine the runtime
environment for impact. For details on our new handling of third party libraries that have
historically been a problem with respect to harmonizing compatible runtime versions, see the
reference guide section The hbase-thirdparty dependency and shading/relocation.

Multiple breaking changes to source and binary compatibility for client API
The Java client API for HBase has a number of changes that break both source and binary
compatibility; for details see the Compatibility Check Report for the release you’ll be upgrading to.

Tracing implementation changes


The backing implementation of HBase’s tracing features was updated from Apache HTrace 3 to
HTrace 4, which includes several breaking changes. While HTrace 3 and 4 can coexist in the same
runtime, they will not integrate with each other, leading to disjoint trace information.

The internal changes to HBase during this upgrade were sufficient for compilation, but it has not
been confirmed that there are no regressions in tracing functionality. Please consider this feature
experimental for the immediate future.

If you previously relied on client side tracing integrated with HBase operations, it is recommended
that you upgrade your usage to HTrace 4 as well.

After the Apache HTrace project was retired to the Attic, the tracing support in HBase has been left
broken and unmaintained since HBase 2.0. A new project, HBASE-22120, will replace HTrace with
OpenTracing.

HFiles lose forward compatibility


HFiles generated by 2.0.0, 2.0.1, and 2.1.0 are not forward compatible to 1.4.6-, 1.3.2.1-, 1.2.6.1-, and
other inactive releases. HFiles lose compatibility because the new versions (2.0.0, 2.0.1, 2.1.0) use
protobuf to serialize/deserialize TimeRangeTracker (TRT) while old versions use
DataInput/DataOutput. To solve this, we have to put HBASE-21012 in 2.x and HBASE-21013 in 1.x.
For more information, please check HBASE-21008.

Performance
You will likely see a change in the performance profile on upgrade to hbase-2.0.0 given read and
write paths have undergone significant change. On release, writes may be slower with reads about
the same or much better, dependent on context. Be prepared to spend time re-tuning (See Apache
HBase Performance Tuning). Performance is also an area that is now under active review so look
forward to improvement in coming releases (See HBASE-20188 TESTING Performance).

Integration Tests and Kerberos


Integration Tests (IntegrationTests*) used to rely on the Kerberos credential cache for
authentication against secured clusters. This used to lead to tests failing due to authentication
failures when the tickets in the credential cache expired. As of hbase-2.0.0 (and hbase-1.3.0+), the
integration test clients will make use of the configuration properties hbase.client.keytab.file and
hbase.client.kerberos.principal. They are required. The clients will perform a login from the
configured keytab file and automatically refresh the credentials in the background for the process
lifetime (See HBASE-16231).

Default Compaction Throughput


HBase 2.x comes with default limits to the speed at which compactions can execute. This limit is
defined per RegionServer. In versions of HBase earlier than 1.5, there was no limit to the speed at
which a compaction could run by default. Applying a limit to the throughput of a compaction
should ensure more stable operations from RegionServers.

Take care to notice that this limit is per RegionServer, not per compaction.

The throughput limit is defined as a range of bytes written per second, and is allowed to vary
within the given lower and upper bound. RegionServers observe the current throughput of a
compaction and apply a linear formula to adjust the allowed throughput, within the lower and
upper bound, with respect to external pressure. For compactions, external pressure is defined as
the number of store files with respect to the maximum number of allowed store files. The more
store files, the higher the compaction pressure.

Configuration of this throughput is governed by the following properties.

• The lower bound is defined by hbase.hstore.compaction.throughput.lower.bound and defaults to
  50 MB/s (52428800).

• The upper bound is defined by hbase.hstore.compaction.throughput.higher.bound and defaults to
  100 MB/s (104857600).
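The linear adjustment can be sketched as follows. This is only an illustration of the formula
described above, not the HBase implementation; the bounds match the documented defaults and the
pressure term is store files relative to the allowed maximum:

```ruby
# Documented default bounds, in bytes per second.
LOWER = 52_428_800    # 50 MB/s
UPPER = 104_857_600   # 100 MB/s

# Sketch: allowed throughput scales linearly between the bounds as
# compaction pressure (store files / max store files, capped at 1.0) rises.
def allowed_compaction_throughput(store_files, max_store_files)
  pressure = [store_files.to_f / max_store_files, 1.0].min
  LOWER + (UPPER - LOWER) * pressure
end
```

With no pressure the limit sits at the lower bound; at or beyond the maximum store file count it
reaches the upper bound.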

To revert this behavior to the unlimited compaction throughput of earlier versions of HBase, please
set the following property to the implementation that applies no limits to compactions.

hbase.regionserver.throughput.controller=org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController

13.3.2. Upgrading Coprocessors to 2.0

Coprocessors have changed substantially in 2.0 ranging from top level design changes in class
hierarchies to changed/removed methods, interfaces, etc. (Parent jira: HBASE-18169 Coprocessor fix
and cleanup before 2.0.0 release). Some of the reasons for such widespread changes:

1. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor
   and Region instead of HRegion (HBASE-18241 Change client.Table and client.Admin to not use
   HTableDescriptor).

2. Design refactor so implementers need to fill out less boilerplate and so we can do more compile-
time checking (HBASE-17732)

3. Purge Protocol Buffers from Coprocessor API (HBASE-18859, HBASE-16769, etc)

4. Cut back on what we expose to Coprocessors removing hooks on internals that were too private
to expose (for eg. HBASE-18453 CompactionRequest should not be exposed to user directly;
HBASE-18298 RegionServerServices Interface cleanup for CP expose; etc)

To use coprocessors in 2.0, they should be rebuilt against new API otherwise they will fail to load
and HBase processes will die.

Suggested order of changes to upgrade the coprocessors:

1. Directly implement observer interfaces instead of extending Base*Observer classes. Change Foo
extends BaseXXXObserver to Foo implements XXXObserver. (HBASE-17312).

2. Adapt to the design change from Inheritance to Composition (HBASE-17732) by following this
   example.

3. getTable() has been removed from the CoprocessorEnvironment; coprocessors should self-
   manage Table instances.
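As a hedged sketch of self-managing a Table (the table name and helper method are hypothetical;
depending on your environment a coprocessor may also be able to obtain a connection from its
RegionCoprocessorEnvironment):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public final class TableAccessExample {
  // Hypothetical helper a coprocessor might use instead of env.getTable().
  static void withTable(Configuration conf) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("example_table"))) {
      // Issue Gets/Puts here; both resources close automatically.
    }
  }
}
```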

Some examples of writing coprocessors with the new API can be found in the hbase-examples module here.

Lastly, if an API has been changed/removed in a way that breaks you irreparably, and if there’s a
good justification to add it back, bring it to our notice ([email protected]).

13.3.3. Rolling Upgrade from 1.x to 2.x

Rolling upgrades are currently an experimental feature. They have had limited testing. There are
likely corner cases as yet uncovered in our limited experience so you should be careful if you go
this route. The stop/upgrade/start described in the next section, Upgrade process from 1.x to 2.x, is
the safest route.

That said, the below is a prescription for a rolling upgrade of a 1.4 cluster.

Pre-Requirements
• Upgrade to the latest 1.4.x release. Pre 1.4 releases may also work but are not tested, so please
upgrade to 1.4.3+ before upgrading to 2.x, unless you are an expert and familiar with the region
assignment and crash processing. See the section Upgrading from pre-1.4 to 1.4+ on how to
upgrade to 1.4.x.

• Make sure that the zk-less assignment is enabled, i.e, set hbase.assignment.usezk to false. This is
the most important thing. It allows the 1.x master to assign/unassign regions to/from 2.x region
servers. See the release note section of HBASE-11059 on how to migrate from zk based
assignment to zk less assignment.

• Before you upgrade, ensure that hbck1 reports no INCONSISTENCIES. Fixing hbase1-type
inconsistencies post-upgrade is an involved process.

• We have tested rolling upgrading from 1.4.3 to 2.1.0, but it should also work if you want to
upgrade to 2.0.x.

Instructions
1. Unload a region server and upgrade it to 2.1.0. With HBASE-17931 in place, the meta region and
regions for other system tables will be moved to this region server immediately. If not, please
move them manually to the new region server. This is very important because

◦ The schema of the meta region is hard coded. If meta is on an old region server, the new
  region servers cannot access it, as it lacks some families (for example, table state).

◦ A client with a lower version can communicate with a server with a higher version, but not
  vice versa. If the meta region is on an old region server, a new region server would use a
  higher-version client to communicate with a lower-version server, which may introduce
  strange problems.

2. Rolling upgrade all other region servers.

3. Upgrading masters.

It is OK that during the rolling upgrade there are region server crashes. The 1.x master can assign
regions to both 1.x and 2.x region servers, and HBASE-19166 fixed a problem so that a 1.x region
server can also read the WALs written by a 2.x region server and split them.

Please read the Changes of Note! section carefully before rolling upgrading. Make
sure that you do not use the features removed in 2.0, for example, the prefix-tree
 encoding, the old hfile format, etc. They could fail the upgrade and leave
the cluster in an intermediate state that is hard to recover.

If you have success running this prescription, please notify the dev list with a note
 on your experience and/or update the above with any deviations you may have
taken so others going this route can benefit from your efforts.

13.3.4. Upgrade process from 1.x to 2.x

To upgrade an existing HBase 1.x cluster, you should:

• Ensure that hbck1 reports no INCONSISTENCIES. Fixing hbase1-type inconsistencies post-upgrade


is an involved process. Fix all hbck1 complaints before proceeding.

• Clean shutdown of existing 1.x cluster

• Update coprocessors

• Upgrade Master roles first

• Upgrade RegionServers

• (Eventually) Upgrade Clients

13.4. Upgrading from pre-1.4 to 1.4+


13.4.1. Region Server memory consumption changes.

Users upgrading from versions prior to HBase 1.4 should be aware that the estimates of heap usage
by the memstore objects (KeyValue, object and array header sizes, etc) have been made more
accurate for heap sizes up to 32G (using CompressedOops), resulting in them dropping by 10-50% in
practice. This also results in fewer flushes and compactions due to "fatter" flushes. YMMV.
As a result, the actual heap usage of the memstore before being flushed may increase by up to
100%. If configured memory limits for the region server had been tuned based on observed usage,
this change could result in worse GC behavior or even OutOfMemory errors. Set the environment
property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable.

13.4.2. Replication peer’s TableCFs config

Before 1.4, the table name could not include the namespace in a replication peer’s TableCFs config.
This was fixed by adding TableCFs to the ReplicationPeerConfig stored on ZooKeeper. So when
upgrading to 1.4, you must first update the original ReplicationPeerConfig data on ZooKeeper.
There are four steps to upgrade when your cluster has a replication peer with a TableCFs config.

• Disable the replication peer.

• If the master has permission to write the replication peer znode, then rolling update the master
  directly. If not, use the TableCFsUpdater tool to update the replication peer’s config.

$ bin/hbase org.apache.hadoop.hbase.replication.master.TableCFsUpdater update

• Rolling update regionservers.

• Enable the replication peer.

Notes:

• You can’t use an old client (before 1.4) to change the replication peer’s config, because the client
  writes the config to ZooKeeper directly and the old client will miss the TableCFs config.
  Moreover, the old client writes its TableCFs config to the old tablecfs znode, which will not work
  for new-version regionservers.

13.4.3. Raw scan now ignores TTL

Doing a raw scan will now return results that have expired according to TTL settings.
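Raw scans are requested via the RAW attribute in the shell; after this change, cells past their TTL
can show up in such output (table name hypothetical):

```
hbase> scan 't1', {RAW => true, VERSIONS => 5}
```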

13.5. Upgrading from pre-1.3 to 1.3+


If running Integration Tests under Kerberos, see Integration Tests and Kerberos.

13.6. Upgrading to 1.x


Please consult the documentation published specifically for the version of HBase that you are
upgrading to for details on the upgrade process.

The Apache HBase Shell
The Apache HBase Shell is (J)Ruby's IRB with some HBase-particular commands added. Anything
you can do in IRB, you should be able to do in the HBase Shell.

To run the HBase shell, do as follows:

$ ./bin/hbase shell

Type help and then <RETURN> to see a listing of shell commands and options. Browse at least the
paragraphs at the end of the help output for the gist of how variables and command arguments are
entered into the HBase shell; in particular note how table names, rows, and columns, etc., must be
quoted.

See shell exercises for example basic shell operation.

Here is a nicely formatted listing of all shell commands by Rajeshbabu Chintaguntla.

Chapter 14. Scripting with Ruby
For examples scripting Apache HBase, look in the HBase bin directory. Look at the files that end in
*.rb. To run one of these files, do as follows:

$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT

Chapter 15. Running the Shell in Non-
Interactive Mode
A new non-interactive mode has been added to the HBase Shell (HBASE-11658). Non-interactive
mode captures the exit status (success or failure) of HBase Shell commands and passes that status
back to the command interpreter. If you use the normal interactive mode, the HBase Shell will only
ever return its own exit status, which will nearly always be 0 for success.

To invoke non-interactive mode, pass the -n or --non-interactive option to HBase Shell.

Chapter 16. HBase Shell in OS Scripts
You can use the HBase shell from within operating system script interpreters like the Bash shell
which is the default command interpreter for most Linux and UNIX distributions. The following
guidelines use Bash syntax, but could be adjusted to work with C-style shells such as csh or tcsh,
and could probably be modified to work with the Microsoft Windows script interpreter as well.
Submissions are welcome.

Spawning HBase Shell commands in this way is slow, so keep that in mind when
 you are deciding whether combining HBase operations with the operating system
command line is appropriate.

Example 3. Passing Commands to the HBase Shell

You can pass commands to the HBase Shell in non-interactive mode (see
hbase.shell.noninteractive) using the echo command and the | (pipe) operator. Be sure to
escape characters in the HBase commands which would otherwise be interpreted by the shell.
Some debug-level output has been truncated from the example below.

$ echo "describe 'test1'" | ./hbase shell -n

Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31


19:56:09 PDT 2014

describe 'test1'

DESCRIPTION ENABLED
'test1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NON true
E', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIO
NS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =>
'false', BLOCKSIZE => '65536', IN_MEMORY => 'false'
, BLOCKCACHE => 'true'}
1 row(s) in 3.2410 seconds

To suppress all output, echo it to /dev/null:

$ echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1

Example 4. Checking the Result of a Scripted Command

Since scripts are not designed to be run interactively, you need a way to check whether your
command failed or succeeded. The HBase shell uses the standard convention of returning a
value of 0 for successful commands, and some non-zero value for failed commands. Bash
stores a command’s return value in a special environment variable called $?. Because that
variable is overwritten each time the shell runs any command, you should store the result in a
different, script-defined variable.

This is a naive script that shows one way to store the return value and make a decision based
upon it.

#!/bin/bash

echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1

status=$?
echo "The status was " $status
if [ $status -eq 0 ]; then
  echo "The command succeeded"
else
  echo "The command may have failed."
fi
exit $status

16.1. Checking for Success or Failure In Scripts


Getting an exit code of 0 means that the command you scripted definitely succeeded. However,
getting a non-zero exit code does not necessarily mean the command failed. The command could
have succeeded, but the client lost connectivity, or some other event obscured its success. This is
because RPC commands are stateless. The only way to be sure of the status of an operation is to
check. For instance, if your script creates a table, but returns a non-zero exit value, you should
check whether the table was actually created before trying again to create it.
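As a sketch, a script creating a table might confirm the table's existence rather than trusting the
exit code alone (table name hypothetical; the grep pattern assumes the shell's "does exist"
wording):

```shell
#!/bin/bash
echo "create 'test', 'cf'" | ./hbase shell -n > /dev/null 2>&1
if ! echo "exists 'test'" | ./hbase shell -n 2>/dev/null | grep -q 'does exist'; then
  echo "Table 'test' was not created; investigate before retrying." >&2
  exit 1
fi
```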

Chapter 17. Read HBase Shell Commands
from a Command File
You can enter HBase Shell commands into a text file, one command per line, and pass that file to
the HBase Shell.

Example Command File

create 'test', 'cf'


list 'test'
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
put 'test', 'row4', 'cf:d', 'value4'
scan 'test'
get 'test', 'row1'
disable 'test'
enable 'test'

Example 5. Directing HBase Shell to Execute the Commands

Pass the path to the command file as the only argument to the hbase shell command. Each
command is executed and its output is shown. If you do not include the exit command in your
script, you are returned to the HBase shell prompt. There is no way to programmatically check
each individual command for success or failure. Also, though you see the output for each
command, the commands themselves are not echoed to the screen so it can be difficult to line
up the command with its output.

$ ./hbase shell ./sample_commands.txt


0 row(s) in 3.4170 seconds

TABLE
test
1 row(s) in 0.0590 seconds

0 row(s) in 0.1540 seconds

0 row(s) in 0.0080 seconds

0 row(s) in 0.0060 seconds

0 row(s) in 0.0060 seconds

ROW COLUMN+CELL
row1 column=cf:a, timestamp=1407130286968, value=value1
row2 column=cf:b, timestamp=1407130286997, value=value2
row3 column=cf:c, timestamp=1407130287007, value=value3
row4 column=cf:d, timestamp=1407130287015, value=value4
4 row(s) in 0.0420 seconds

COLUMN CELL
cf:a timestamp=1407130286968, value=value1
1 row(s) in 0.0110 seconds

0 row(s) in 1.5630 seconds

0 row(s) in 0.4360 seconds

Chapter 18. Passing VM Options to the Shell
You can pass VM options to the HBase Shell using the HBASE_SHELL_OPTS environment variable. You
can set this in your environment, for instance by editing ~/.bashrc, or set it as part of the command
to launch HBase Shell. The following example sets several garbage-collection-related variables, just
for the lifetime of the VM running the HBase Shell. The command should be run all on a single line,
but is broken by the \ character, for readability.

$ HBASE_SHELL_OPTS="-verbose:gc -XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps \
-XX:+PrintGCDetails -Xloggc:$HBASE_HOME/logs/gc-hbase.log" ./bin/hbase shell

Chapter 19. Overriding configuration
starting the HBase Shell
As of hbase-2.0.5/hbase-2.1.3/hbase-2.2.0/hbase-1.4.10/hbase-1.5.0, you can pass or override hbase
configuration as specified in hbase-*.xml by passing your key/values prefixed with -D on the
command-line as follows:

$ ./bin/hbase shell
-Dhbase.zookeeper.quorum=ZK0.remote.cluster.example.org,ZK1.remote.cluster.example.org
,ZK2.remote.cluster.example.org -Draining=false
...
hbase(main):001:0> @shell.hbase.configuration.get("hbase.zookeeper.quorum")
=>
"ZK0.remote.cluster.example.org,ZK1.remote.cluster.example.org,ZK2.remote.cluster.exam
ple.org"
hbase(main):002:0> @shell.hbase.configuration.get("raining")
=> "false"

Chapter 20. Shell Tricks
20.1. Table variables
HBase 0.95 adds shell commands that provide jruby-style object-oriented references for tables.
Previously, all of the shell commands that act upon a table had a procedural style that always took
the name of the table as an argument. HBase 0.95 introduces the ability to assign a table to a jruby
variable. The table reference can be used to perform data read/write operations such as puts, scans,
and gets, as well as admin functionality such as disabling, dropping, and describing tables.

For example, previously you would always specify a table name:

hbase(main):000:0> create 't', 'f'


0 row(s) in 1.0970 seconds
hbase(main):001:0> put 't', 'rold', 'f', 'v'
0 row(s) in 0.0080 seconds

hbase(main):002:0> scan 't'


ROW COLUMN+CELL
rold column=f:, timestamp=1378473207660, value=v
1 row(s) in 0.0130 seconds

hbase(main):003:0> describe 't'


DESCRIPTION
ENABLED
't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_
true
SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
', BLOCKCACHE => 'true'}
1 row(s) in 1.4430 seconds

hbase(main):004:0> disable 't'


0 row(s) in 14.8700 seconds

hbase(main):005:0> drop 't'


0 row(s) in 23.1670 seconds

hbase(main):006:0>

Now you can assign the table to a variable and use the results in jruby shell code.

hbase(main):007 > t = create 't', 'f'
0 row(s) in 1.0970 seconds

=> Hbase::Table - t
hbase(main):008 > t.put 'r', 'f', 'v'
0 row(s) in 0.0640 seconds
hbase(main):009 > t.scan
ROW COLUMN+CELL
r column=f:, timestamp=1331865816290, value=v
1 row(s) in 0.0110 seconds
hbase(main):010:0> t.describe
DESCRIPTION
ENABLED
't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_
true
SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
', BLOCKCACHE => 'true'}
1 row(s) in 0.0210 seconds
hbase(main):038:0> t.disable
0 row(s) in 6.2350 seconds
hbase(main):039:0> t.drop
0 row(s) in 0.2340 seconds

If the table has already been created, you can assign a Table to a variable by using the get_table
method:

hbase(main):011 > create 't','f'


0 row(s) in 1.2500 seconds

=> Hbase::Table - t
hbase(main):012:0> tab = get_table 't'
0 row(s) in 0.0010 seconds

=> Hbase::Table - t
hbase(main):013:0> tab.put 'r1' ,'f', 'v'
0 row(s) in 0.0100 seconds
hbase(main):014:0> tab.scan
ROW COLUMN+CELL
r1 column=f:, timestamp=1378473876949, value=v
1 row(s) in 0.0240 seconds
hbase(main):015:0>

The list functionality has also been extended so that it returns a list of table names as strings. You
can then use jruby to script table operations based on these names. The list_snapshots command
also acts similarly.

hbase(main):016 > tables = list('t.*')
TABLE
t
1 row(s) in 0.1040 seconds

=> #<#<Class:0x7677ce29>:0x21d377a4>
hbase(main):017:0> tables.map { |t| disable t ; drop t}
0 row(s) in 2.2510 seconds

=> [nil]
hbase(main):018:0>

20.2. irbrc
Create an .irbrc file for yourself in your home directory. Add customizations. A useful one is
command history, so commands are saved across Shell invocations:

$ more .irbrc
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 100
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"

If you’d like to avoid printing the result of evaluating each expression to stderr, for example the
array of tables returned from the "list" command:

$ echo "IRB.conf[:ECHO] = false" >>~/.irbrc

See the ruby documentation of .irbrc to learn about other possible configurations.

20.3. LOG data to timestamp


To convert the date '08/08/16 20:56:29' from an hbase log into a timestamp, do:

hbase(main):021:0> import java.text.SimpleDateFormat
hbase(main):022:0> import java.text.ParsePosition
hbase(main):023:0> SimpleDateFormat.new("yy/MM/dd HH:mm:ss").parse("08/08/16
20:56:29", ParsePosition.new(0)).getTime()
=> 1218920189000

To go the other direction:

hbase(main):021:0> import java.util.Date
hbase(main):022:0> Date.new(1218920189000).toString()
=> "Sat Aug 16 20:56:29 UTC 2008"

To output in a format that is exactly like that of the HBase log format will take a little messing with
SimpleDateFormat.
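Alternatively, since the HBase Shell is effectively a Ruby environment, plain Ruby's Time can reproduce the log layout without SimpleDateFormat. This is a sketch; the log_format helper name is ours, not a shell command, and UTC is assumed:

```ruby
# Format an epoch-milliseconds timestamp in the "yy/MM/dd HH:mm:ss"
# layout used by the HBase logs (UTC assumed).
def log_format(millis)
  Time.at(millis / 1000).utc.strftime("%y/%m/%d %H:%M:%S")
end

puts log_format(1218920189000)  # => 08/08/16 20:56:29
```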

20.4. Query Shell Configuration

hbase(main):001:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "60000"

To set a config in the shell:

hbase(main):005:0> @shell.hbase.configuration.setInt("hbase.rpc.timeout", 61010)
hbase(main):006:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "61010"

20.5. Pre-splitting tables with the HBase Shell


You can use a variety of options to pre-split tables when creating them via the HBase Shell create
command.

The simplest approach is to specify an array of split points when creating the table. Note that when
specifying string literals as split points, these will create split points based on the underlying byte
representation of the string. So when specifying a split point of '10', we are actually specifying the
byte split point '\x31\x30'.

The split points will define n+1 regions where n is the number of split points. The lowest region will
contain all keys from the lowest possible key up to but not including the first split point key. The
next region will contain keys from the first split point up to, but not including the next split point
key. This will continue for all split points up to the last. The last region will be defined from the last
split point up to the maximum possible key.

hbase>create 't1','f',SPLITS => ['10','20','30']

In the above example, the table 't1' will be created with column family 'f', pre-split to four regions.
Note the first region will contain all keys from the lowest possible key up to, but not
including, '10' (bytes '\x31\x30', as '\x31' is the ASCII code for '1').

You can pass the split points in a file using following variation. In this example, the splits are read
from a file corresponding to the local path on the local filesystem. Each line in the file specifies a
split point key.

hbase>create 't14','f',SPLITS_FILE=>'splits.txt'

The other options are to automatically compute splits based on a desired number of regions and a
splitting algorithm. HBase supplies algorithms for splitting the key range based on uniform splits or
based on hexadecimal keys, but you can provide your own splitting algorithm to subdivide the key
range.

# create table with four regions based on random bytes keys
hbase>create 't2','f1', { NUMREGIONS => 4 , SPLITALGO => 'UniformSplit' }

# create table with five regions based on hex keys
hbase>create 't3','f1', { NUMREGIONS => 5, SPLITALGO => 'HexStringSplit' }

As the HBase Shell is effectively a Ruby environment, you can use simple Ruby scripts to compute
splits algorithmically.

# generate splits for long (Ruby fixnum) key range from start to end key
hbase(main):070:0> def gen_splits(start_key,end_key,num_regions)
hbase(main):071:1> results=[]
hbase(main):072:1> range=end_key-start_key
hbase(main):073:1> incr=(range/num_regions).floor
hbase(main):074:1> for i in 1 .. num_regions-1
hbase(main):075:2> results.push([i*incr+start_key].pack("N"))
hbase(main):076:2> end
hbase(main):077:1> return results
hbase(main):078:1> end
hbase(main):079:0>
hbase(main):080:0> splits=gen_splits(1,2000000,10)
=> ["\000\003\r@", "\000\006\032\177", "\000\t'\276", "\000\f4\375", "\000\017B<",
"\000\022O{", "\000\025\\\272", "\000\030i\371", "\000\ew8"]
hbase(main):081:0> create 'test_splits','f',SPLITS=>splits
0 row(s) in 0.2670 seconds

=> Hbase::Table - test_splits

Note that the HBase Shell command truncate effectively drops and recreates the table with default
options which will discard any pre-splitting. If you need to truncate a pre-split table, you must drop
and recreate the table explicitly to re-specify custom split options.

20.6. Debug
20.6.1. Shell debug switch

You can set a debug switch in the shell to see more output — e.g. more of the stack trace on
exception — when you run a command:

hbase> debug <RETURN>

20.6.2. DEBUG log level

To enable DEBUG level logging in the shell, launch it with the -d option.

$ ./bin/hbase shell -d

20.7. Commands
20.7.1. count

The count command returns the number of rows in a table. It’s quite fast when configured with the
right CACHE value:

hbase> count '<tablename>', CACHE => 1000

The above count fetches 1000 rows at a time. Set CACHE lower if your rows are big. Default is to
fetch one row at a time.

Data Model
In HBase, data is stored in tables, which have rows and columns. This is a terminology overlap with
relational databases (RDBMSs), but this is not a helpful analogy. Instead, it can be helpful to think of
an HBase table as a multi-dimensional map.

HBase Data Model Terminology


Table
An HBase table consists of multiple rows.

Row
A row in HBase consists of a row key and one or more columns with values associated with
them. Rows are sorted alphabetically by the row key as they are stored. For this reason, the
design of the row key is very important. The goal is to store data in such a way that related rows
are near each other. A common row key pattern is a website domain. If your row keys are
domains, you should probably store them in reverse (org.apache.www, org.apache.mail,
org.apache.jira). This way, all of the Apache domains are near each other in the table, rather
than being spread out based on the first letter of the subdomain.
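A reversed-domain key of this kind is straightforward to compute. A minimal Ruby sketch follows; reverse_domain is an illustrative helper of our own, not an HBase API:

```ruby
# Reverse the dot-separated components of a domain so that related
# subdomains sort next to each other as row keys.
def reverse_domain(domain)
  domain.split('.').reverse.join('.')
end

puts reverse_domain('www.apache.org')   # => org.apache.www
puts reverse_domain('mail.apache.org')  # => org.apache.mail
```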

Column
A column in HBase consists of a column family and a column qualifier, which are delimited by a
: (colon) character.

Column Family
Column families physically colocate a set of columns and their values, often for performance
reasons. Each column family has a set of storage properties, such as whether its values should be
cached in memory, how its data is compressed or its row keys are encoded, and others. Each row
in a table has the same column families, though a given row might not store anything in a given
column family.

Column Qualifier
A column qualifier is added to a column family to provide the index for a given piece of data.
Given a column family content, a column qualifier might be content:html, and another might be
content:pdf. Though column families are fixed at table creation, column qualifiers are mutable
and may differ greatly between rows.

Cell
A cell is a combination of row, column family, and column qualifier, and contains a value and a
timestamp, which represents the value’s version.

Timestamp
A timestamp is written alongside each value, and is the identifier for a given version of a value.
By default, the timestamp represents the time on the RegionServer when the data was written,
but you can specify a different timestamp value when you put data into the cell.

Chapter 21. Conceptual View
You can read a very understandable explanation of the HBase data model in the blog post
Understanding HBase and BigTable by Jim R. Wilson. Another good explanation is available in the
PDF Introduction to Basic Schema Design by Amandeep Khurana.

It may help to read different perspectives to get a solid understanding of HBase schema design. The
linked articles cover the same ground as the information in this section.

The following example is a slightly modified form of the one on page 2 of the BigTable paper. There
is a table called webtable that contains two rows (com.cnn.www and com.example.www) and three
column families named contents, anchor, and people. In this example, for the first row (com.cnn.www),
anchor contains two columns (anchor:cnnsi.com, anchor:my.look.ca) and contents contains one
column (contents:html). This example contains 5 versions of the row with the row key com.cnn.www,
and one version of the row with the row key com.example.www. The contents:html column qualifier
contains the entire HTML of a given website. Qualifiers of the anchor column family each contain
the external site which links to the site represented by the row, along with the text it used in the
anchor of its link. The people column family represents people associated with the site.

Column Names
By convention, a column name is made of its column family prefix and a qualifier.
 For example, the column contents:html is made up of the column family contents
and the html qualifier. The colon character (:) delimits the column family from the
column family qualifier.

Table 6. Table webtable

Row Key            Time Stamp  ColumnFamily contents      ColumnFamily anchor            ColumnFamily people
"com.cnn.www"      t9                                     anchor:cnnsi.com = "CNN"
"com.cnn.www"      t8                                     anchor:my.look.ca = "CNN.com"
"com.cnn.www"      t6          contents:html = "<html>…"
"com.cnn.www"      t5          contents:html = "<html>…"
"com.cnn.www"      t3          contents:html = "<html>…"
"com.example.www"  t5          contents:html = "<html>…"                                 people:author = "John Doe"

Cells in this table that appear to be empty do not take space, or in fact exist, in HBase. This is what
makes HBase "sparse." A tabular view is not the only possible way to look at data in HBase, or even
the most accurate. The following represents the same information as a multi-dimensional map. This
is only a mock-up for illustrative purposes and may not be strictly accurate.

{
  "com.cnn.www": {
    contents: {
      t6: contents:html: "<html>..."
      t5: contents:html: "<html>..."
      t3: contents:html: "<html>..."
    }
    anchor: {
      t9: anchor:cnnsi.com = "CNN"
      t8: anchor:my.look.ca = "CNN.com"
    }
    people: {}
  }
  "com.example.www": {
    contents: {
      t5: contents:html: "<html>..."
    }
    anchor: {}
    people: {
      t5: people:author: "John Doe"
    }
  }
}

Chapter 22. Physical View
Although at a conceptual level tables may be viewed as a sparse set of rows, they are physically
stored by column family. A new column qualifier (column_family:column_qualifier) can be added
to an existing column family at any time.

Table 7. ColumnFamily anchor

Row Key Time Stamp Column Family anchor

"com.cnn.www" t9 anchor:cnnsi.com = "CNN"

"com.cnn.www" t8 anchor:my.look.ca = "CNN.com"

Table 8. ColumnFamily contents

Row Key Time Stamp ColumnFamily contents:

"com.cnn.www" t6 contents:html = "<html>…"

"com.cnn.www" t5 contents:html = "<html>…"

"com.cnn.www" t3 contents:html = "<html>…"

The empty cells shown in the conceptual view are not stored at all. Thus a request for the value of
the contents:html column at time stamp t8 would return no value. Similarly, a request for an
anchor:my.look.ca value at time stamp t9 would return no value. However, if no timestamp is
supplied, the most recent value for a particular column would be returned. Given multiple
versions, the most recent is also the first one found, since timestamps are stored in descending
order. Thus a request for the values of all columns in the row com.cnn.www if no timestamp is
specified would be: the value of contents:html from timestamp t6, the value of anchor:cnnsi.com
from timestamp t9, the value of anchor:my.look.ca from timestamp t8.
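That read behavior can be modeled in a few lines of Ruby: group the row's cells by column and, absent an explicit timestamp, take the highest-timestamped cell in each group. The triples below restate the webtable example; this is a mock-up of the semantics, not client API:

```ruby
# [column, timestamp, value] triples for row "com.cnn.www"
cells = [
  ['contents:html',     6, '<html> at t6'],
  ['contents:html',     5, '<html> at t5'],
  ['contents:html',     3, '<html> at t3'],
  ['anchor:cnnsi.com',  9, 'CNN'],
  ['anchor:my.look.ca', 8, 'CNN.com']
]

# With no timestamp supplied, each column yields its most recent cell.
latest = cells.group_by { |col, _, _| col }
              .map { |_, versions| versions.max_by { |_, ts, _| ts } }

latest.each { |col, ts, val| puts "#{col} @ t#{ts} = #{val}" }
```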

For more information about the internals of how Apache HBase stores data, see regions.arch.

Chapter 23. Namespace
A namespace is a logical grouping of tables analogous to a database in relational database systems.
This abstraction lays the groundwork for upcoming multi-tenancy related features:

• Quota Management (HBASE-8410) - Restrict the amount of resources (i.e. regions, tables) a
namespace can consume.

• Namespace Security Administration (HBASE-9206) - Provide another level of security
administration for tenants.

• Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset of
RegionServers thus guaranteeing a coarse level of isolation.

23.1. Namespace management


A namespace can be created, removed or altered. Namespace membership is determined during
table creation by specifying a fully-qualified table name of the form:

<table namespace>:<table qualifier>

Example 6. Examples

#Create a namespace
create_namespace 'my_ns'

#create my_table in my_ns namespace
create 'my_ns:my_table', 'fam'

#drop namespace
drop_namespace 'my_ns'

#alter namespace
alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}

23.2. Predefined namespaces


There are two predefined special namespaces:

• hbase - system namespace, used to contain HBase internal tables

• default - tables with no explicit specified namespace will automatically fall into this namespace

Example 7. Examples

#namespace=foo and table qualifier=bar
create 'foo:bar', 'fam'

#namespace=default and table qualifier=bar
create 'bar', 'fam'

Chapter 24. Table
Tables are declared up front at schema definition time.

Chapter 25. Row
Row keys are uninterpreted bytes. Rows are lexicographically sorted with the lowest order
appearing first in a table. The empty byte array is used to denote both the start and end of a
table's namespace.
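Because keys are compared as raw bytes, numeric-looking string keys do not sort numerically, which is a common surprise; for example:

```ruby
# Row keys compare byte-wise: '1' (0x31) sorts before '2' (0x32),
# so 'row-10' lands between 'row-1' and 'row-2'.
keys = ['row-2', 'row-10', 'row-1'].sort
puts keys.inspect  # => ["row-1", "row-10", "row-2"]
```

Fixed-width or zero-padded keys (e.g. 'row-02') are the usual way to make such keys sort in numeric order.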

Chapter 26. Column Family
Columns in Apache HBase are grouped into column families. All column members of a column
family have the same prefix. For example, the columns courses:history and courses:math are both
members of the courses column family. The colon character (:) delimits the column family from the
column family qualifier. The column family prefix must be composed of printable characters. The
qualifying tail, the column family qualifier, can be made of any arbitrary bytes. Column families
must be declared up front at schema definition time whereas columns do not need to be defined at
schema time but can be conjured on the fly while the table is up and running.

Physically, all column family members are stored together on the filesystem. Because tunings and
storage specifications are done at the column family level, it is advised that all column family
members have the same general access pattern and size characteristics.
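Since the family name must be printable (and cannot contain a colon) while the qualifier is arbitrary bytes, a column name splits at the first colon only. A Ruby sketch; split_column is our own illustrative helper, not an HBase API:

```ruby
# Split a column name into family and qualifier at the FIRST colon;
# the qualifier is arbitrary bytes and may itself contain colons.
def split_column(name)
  name.split(':', 2)
end

puts split_column('courses:history').inspect  # => ["courses", "history"]
puts split_column('f:a:b').inspect            # => ["f", "a:b"]
```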

Chapter 27. Cells
A {row, column, version} tuple exactly specifies a cell in HBase. Cell content is uninterpreted bytes.

Chapter 28. Data Model Operations
The four primary data model operations are Get, Put, Scan, and Delete. Operations are applied via
Table instances.

28.1. Get
Get returns attributes for a specified row. Gets are executed via Table.get.

28.2. Put
Put either adds new rows to a table (if the key is new) or can update existing rows (if the key
already exists). Puts are executed via Table.put (non-writeBuffer) or Table.batch (non-writeBuffer).

28.3. Scans
Scan allows iteration over multiple rows for specified attributes.

The following is an example of a Scan on a Table instance. Assume that a table is populated with
rows with keys "row1", "row2", "row3", and then another set of rows with the keys "abc1", "abc2",
and "abc3". The following example shows how to set a Scan instance to return the rows beginning
with "row".

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...

Table table = ...  // instantiate a Table instance

Scan scan = new Scan();
scan.addColumn(CF, ATTR);
scan.setRowPrefixFilter(Bytes.toBytes("row"));
ResultScanner rs = table.getScanner(scan);
try {
  for (Result r = rs.next(); r != null; r = rs.next()) {
    // process result...
  }
} finally {
  rs.close();  // always close the ResultScanner!
}

Note that generally the easiest way to specify a specific stop point for a scan is by using the
InclusiveStopFilter class.

28.4. Delete
Delete removes a row from a table. Deletes are executed via Table.delete.

HBase does not modify data in place, and so deletes are handled by creating new markers called
tombstones. These tombstones, along with the dead values, are cleaned up on major compactions.

See version.delete for more information on deleting versions of columns, and see compaction for
more information on compactions.

Chapter 29. Versions
A {row, column, version} tuple exactly specifies a cell in HBase. It’s possible to have an unbounded
number of cells where the row and column are the same but the cell address differs only in its
version dimension.

While rows and column keys are expressed as bytes, the version is specified using a long integer.
Typically this long contains time instances such as those returned by java.util.Date.getTime() or
System.currentTimeMillis(), that is: the difference, measured in milliseconds, between the current
time and midnight, January 1, 1970 UTC.

The HBase version dimension is stored in decreasing order, so that when reading from a store file,
the most recent values are found first.

There is a lot of confusion over the semantics of cell versions, in HBase. In particular:

• If multiple writes to a cell have the same version, only the last written is fetchable.

• It is OK to write cells in a non-increasing version order.

Below we describe how the version dimension in HBase currently works. See HBASE-2406 for
discussion of HBase versions. Bending time in HBase makes for a good read on the version, or time,
dimension in HBase. It has more detail on versioning than is provided here.

As of this writing, the limitation Overwriting values at existing timestamps mentioned in the article
no longer holds in HBase. This section is basically a synopsis of this article by Bruno Dumon.

29.1. Specifying the Number of Versions to Store


The maximum number of versions to store for a given column is part of the column schema and is
specified at table creation, or via an alter command, via HColumnDescriptor.DEFAULT_VERSIONS. Prior
to HBase 0.96, the default number of versions kept was 3, but in 0.96 and newer has been changed
to 1.

Example 8. Modify the Maximum Number of Versions for a Column Family

This example uses HBase Shell to keep a maximum of 5 versions of all columns in column
family f1. You could also use HColumnDescriptor.

hbase> alter 't1', NAME => 'f1', VERSIONS => 5

Example 9. Modify the Minimum Number of Versions for a Column Family

You can also specify the minimum number of versions to store per column family. By default,
this is set to 0, which means the feature is disabled. The following example sets the minimum
number of versions on all columns in column family f1 to 2, via HBase Shell. You could also use
HColumnDescriptor.

hbase> alter 't1', NAME => 'f1', MIN_VERSIONS => 2

Starting with HBase 0.98.2, you can specify a global default for the maximum number of versions
kept for all newly-created columns, by setting hbase.column.max.version in hbase-site.xml. See
hbase.column.max.version.

29.2. Versions and HBase Operations


In this section we look at the behavior of the version dimension for each of the core HBase
operations.

29.2.1. Get/Scan

Gets are implemented on top of Scans. The below discussion of Get applies equally to Scans.

By default, i.e. if you specify no explicit version, when doing a get, the cell whose version has the
largest value is returned (which may or may not be the latest one written, see later). The default
behavior can be modified in the following ways:

• to return more than one version, see Get.setMaxVersions()

• to return versions other than the latest, see Get.setTimeRange()

To retrieve the latest version that is less than or equal to a given value, thus giving the 'latest'
state of the record at a certain point in time, just use a range from 0 to the desired version and
set the max versions to 1.
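The selection rule (the highest version less than or equal to T) can be sketched over a plain Ruby map of timestamp to value. This models the semantics only; it is not client API, and value_at is our own name:

```ruby
# Versions of a single cell, keyed by timestamp.
versions = { 1000 => 'v1', 2000 => 'v2', 3000 => 'v3' }

# "State of the record at time t": the highest timestamp <= t.
def value_at(versions, t)
  ts = versions.keys.select { |k| k <= t }.max
  ts && versions[ts]
end

puts value_at(versions, 2500)  # => v2
puts value_at(versions, 3000)  # => v3 (the range is inclusive of t)
value_at(versions, 500)        # returns nil: no version that old
```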

29.2.2. Default Get Example

The following Get will only retrieve the current version of the row

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR);  // returns current version of value

29.2.3. Versioned Get Example

The following Get will return the last 3 versions of the row.

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
get.setMaxVersions(3);  // will return last 3 versions of row
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR);  // returns current version of value
List<Cell> cells = r.getColumnCells(CF, ATTR);  // returns all versions of this column

29.2.4. Put

Doing a put always creates a new version of a cell, at a certain timestamp. By default the system
uses the server’s currentTimeMillis, but you can specify the version (= the long integer) yourself, on
a per-column level. This means you could assign a time in the past or the future, or use the long
value for non-time purposes.

To overwrite an existing value, do a put at exactly the same row, column, and version as that of the
cell you want to overwrite.
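Conceptually the store behaves like a map keyed by {row, column, version}: writing the same coordinates twice overwrites, while a different version adds a new cell. A toy Ruby model of that semantics (not an API):

```ruby
cells = {}

# A put is a write at the coordinates {row, column, version}.
cells[['r1', 'cf:attr', 1000]] = 'first'
cells[['r1', 'cf:attr', 2000]] = 'second'   # new version: both cells kept
cells[['r1', 'cf:attr', 1000]] = 'patched'  # same coordinates: overwrite

puts cells[['r1', 'cf:attr', 1000]]  # => patched
puts cells.size                      # => 2
```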

Implicit Version Example

The following Put will be implicitly versioned by HBase with the current time.

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
put.addColumn(CF, ATTR, Bytes.toBytes(data));
table.put(put);

Explicit Version Example

The following Put has the version timestamp explicitly set.

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
long explicitTimeInMs = 555;  // just an example
put.addColumn(CF, ATTR, explicitTimeInMs, Bytes.toBytes(data));
table.put(put);

Caution: the version timestamp is used internally by HBase for things like time-to-live calculations.
It’s usually best to avoid setting this timestamp yourself. Prefer using a separate timestamp
attribute of the row, or have the timestamp as a part of the row key, or both.

Cell Version Example

The following Put uses a method getCellBuilder() to get a CellBuilder instance that already has
relevant Type and Row set.

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...

Put put = new Put(Bytes.toBytes(row));
put.add(put.getCellBuilder().setQualifier(ATTR)
   .setFamily(CF)
   .setValue(Bytes.toBytes(data))
   .build());
table.put(put);

29.2.5. Delete

There are three different types of internal delete markers. See Lars Hofhansl’s blog for discussion
of his attempt adding another, Scanning in HBase: Prefix Delete Marker.

• Delete: for a specific version of a column.

• Delete column: for all versions of a column.

• Delete family: for all columns of a particular ColumnFamily

When deleting an entire row, HBase will internally create a tombstone for each ColumnFamily (i.e.,
not each individual column).

Deletes work by creating tombstone markers. For example, let’s suppose we want to delete a row.
For this you can specify a version, or else by default the currentTimeMillis is used. What this means
is delete all cells where the version is less than or equal to this version. HBase never modifies data in
place, so for example a delete will not immediately delete (or mark as deleted) the entries in the
storage file that correspond to the delete condition. Rather, a so-called tombstone is written, which
will mask the deleted values. When HBase does a major compaction, the tombstones are processed
to actually remove the dead values, together with the tombstones themselves. If the version you
specified when deleting a row is larger than the version of any value in the row, then you can
consider the complete row to be deleted.
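The masking rule can be modeled in a few lines of Ruby: a tombstone hides every cell at or below its timestamp, and a major compaction then drops both the dead values and the marker. This is a mock-up of the semantics, not an API:

```ruby
# timestamp => value for one cell
cells = { 1000 => 'v1', 2000 => 'v2', 3000 => 'v3' }
tombstone_ts = 2000  # delete issued at version 2000

# Reads see only cells newer than the tombstone.
visible = cells.reject { |ts, _| ts <= tombstone_ts }
puts visible.inspect  # => {3000=>"v3"}

# A major compaction removes the dead values and the tombstone itself.
cells = visible
```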

For an informative discussion on how deletes and versioning interact, see the thread Put
w/timestamp → Deleteall → Put w/ timestamp fails up on the user mailing list.

Also see keyvalue for more information on the internal KeyValue format.

Delete markers are purged during the next major compaction of the store, unless the
KEEP_DELETED_CELLS option is set in the column family (See Keeping Deleted Cells). To keep the
deletes for a configurable amount of time, you can set the delete TTL via the
hbase.hstore.time.to.purge.deletes property in hbase-site.xml. If hbase.hstore.time.to.purge.deletes
is not set, or set to 0, all delete markers, including those with timestamps in the future, are purged
during the next major compaction. Otherwise, a delete marker with a timestamp in the future is
kept until the major compaction which occurs after the time represented by the marker’s
timestamp plus the value of hbase.hstore.time.to.purge.deletes, in milliseconds.

This behavior represents a fix for an unexpected change that was introduced in
 HBase 0.94, and was fixed in HBASE-10118. The change has been backported to
HBase 0.94 and newer branches.

29.3. Optional New Version and Delete behavior in


HBase-2.0.0
In hbase-2.0.0, the operator can specify an alternate version and delete treatment by setting the
column descriptor property NEW_VERSION_BEHAVIOR to true (To set a property on a column family
descriptor, you must first disable the table and then alter the column family descriptor; see Keeping
Deleted Cells for an example of editing an attribute on a column family descriptor).

The 'new version behavior' undoes the limitations listed below whereby a Delete ALWAYS
overshadows a Put if at the same location — i.e. same row, column family, qualifier and
timestamp — regardless of which arrived first. Version accounting is also changed as deleted
versions are considered toward total version count. This is done to ensure results are not changed
should a major compaction intercede. See HBASE-15968 and linked issues for discussion.

Running with this new configuration currently carries a cost: we factor the Cell MVCC on every
compare, so we burn more CPU. The slowdown will vary; in testing we’ve seen between 0% and 25%
degradation.

If replicating, it is advised that you run with the new serial replication feature (See HBASE-9465; the
serial replication feature did NOT make it into hbase-2.0.0 but should arrive in a subsequent hbase-
2.x release) as now the order in which Mutations arrive is a factor.

29.4. Current Limitations


The below limitations are addressed in hbase-2.0.0. See the section above, Optional New Version
and Delete behavior in HBase-2.0.0.

29.4.1. Deletes mask Puts

Deletes mask puts, even puts that happened after the delete was entered. See HBASE-2256.
Remember that a delete writes a tombstone, which only disappears after the next major
compaction has run. Suppose you do a delete of everything ≤ T. After this you do a new put with a
timestamp ≤ T. This put, even if it happened after the delete, will be masked by the delete
tombstone. Performing the put will not fail, but when you do a get you will notice the put had
no effect. It will start working again after the major compaction has run. These issues should not be
a problem if you use always-increasing versions for new puts to a row. But they can occur even if
you do not care about time: just do delete and put immediately after each other, and there is some
chance they happen within the same millisecond.

29.4.2. Major compactions change query results

…create three cell versions at t1, t2 and t3, with a maximum-versions setting of 2. So when getting all
versions, only the values at t2 and t3 will be returned. But if you delete the version at t2 or t3, the one
at t1 will appear again. Obviously, once a major compaction has run, such behavior will not be the
case anymore… (See Garbage Collection in Bending time in HBase.)

Chapter 30. Sort Order
All data model operations in HBase return data in sorted order. First by row, then by ColumnFamily,
followed by column qualifier, and finally timestamp (sorted in reverse, so newest records are
returned first).
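This ordering can be reproduced with a composite sort key: ascending on row, family and qualifier, descending on timestamp. Negating the timestamp below is just a sketch device; HBase actually compares the serialized key bytes:

```ruby
# [row, family, qualifier, timestamp, value]
cells = [
  ['r1', 'cf', 'a', 100, 'old'],
  ['r1', 'cf', 'a', 200, 'new'],
  ['r1', 'cf', 'b', 150, 'x'],
  ['r0', 'cf', 'z', 100, 'y']
]

# Row, family, qualifier ascending; timestamp descending (newest first).
sorted = cells.sort_by { |row, fam, qual, ts, _| [row, fam, qual, -ts] }
sorted.each { |c| puts c.inspect }
# r0 sorts first; within r1/cf/a the t=200 cell precedes the t=100 cell
```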

Chapter 31. Column Metadata
There is no store of column metadata outside of the internal KeyValue instances for a
ColumnFamily. Thus, while HBase can support not only a large number of columns per row, but a
heterogeneous set of columns between rows as well, it is your responsibility to keep track of the
column names.

The only way to get a complete set of columns that exist for a ColumnFamily is to process all the
rows. For more information about how HBase stores data internally, see keyvalue.

Chapter 32. Joins
Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it
doesn’t, at least not in the way that RDBMS' support them (e.g., with equi-joins or outer-joins in
SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and
Scan.

However, that doesn’t mean that equivalent join functionality can’t be supported in your
application, but you have to do it yourself. The two primary strategies are either denormalizing the
data upon writing to HBase, or to have lookup tables and do the join between HBase tables in your
application or MapReduce code (and as RDBMS' demonstrate, there are several strategies for this
depending on the size of the tables, e.g., nested loops vs. hash-joins). So which is the best approach?
It depends on what you are trying to do, and as such there isn’t a single answer that works for
every use case.

Chapter 33. ACID
See ACID Semantics. Lars Hofhansl has also written a note on ACID in HBase.

HBase and Schema Design
A good introduction to the strengths and weaknesses of modeling on the various non-RDBMS
datastores is to be found in Ian Varley’s Master thesis, No Relation: The Mixed Blessings of Non-
Relational Databases. It is a little dated now but a good background read if you have a moment on
how HBase schema modeling differs from how it is done in an RDBMS. Also, read keyvalue for how
HBase stores data internally, and the section on schema.casestudies.

The documentation on the Cloud Bigtable website, Designing Your Schema, is pertinent and nicely
done and lessons learned there equally apply here in HBase land; just divide any quoted values by
~10 to get what works for HBase: e.g. where it says individual values can be ~10MBs in size, HBase
can do similar — perhaps best to go smaller if you can — and where it says a maximum of 100
column families in Cloud Bigtable, think ~10 when modeling on HBase.

See also Robert Yokota’s HBase Application Archetypes (an update on work done by other HBasers),
for a helpful categorization of use cases that do well on top of the HBase model.

Chapter 34. Schema Creation
HBase schemas can be created or updated using the The Apache HBase Shell or by using Admin in
the Java API.

Tables must be disabled when making ColumnFamily modifications, for example:

Configuration config = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin();
TableName table = TableName.valueOf("myTable");

admin.disableTable(table);

HColumnDescriptor cf1 = ...;
admin.addColumn(table, cf1);     // adding new ColumnFamily
HColumnDescriptor cf2 = ...;
admin.modifyColumn(table, cf2);  // modifying existing ColumnFamily

admin.enableTable(table);

See client dependencies for more information about configuring client connections.

online schema changes are supported in the 0.92.x codebase, but the 0.90.x
 codebase requires the table to be disabled.

34.1. Schema Updates


When changes are made to either Tables or ColumnFamilies (e.g. region size, block size), these
changes take effect the next time there is a major compaction and the StoreFiles get re-written.

See store for more information on StoreFiles.

Chapter 35. Table Schema Rules Of Thumb
There are many different data sets, with different access patterns and service-level expectations.
Therefore, these rules of thumb are only an overview. Read the rest of this chapter to get more
details after you have gone through this list.

• Aim to have regions sized between 10 and 50 GB.

• Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, consider storing
your cell data in HDFS and store a pointer to the data in HBase.

• A typical schema has between 1 and 3 column families per table. HBase tables should not be
designed to mimic RDBMS tables.

• Around 50-100 regions is a good number for a table with 1 or 2 column families. Remember that
a region is a contiguous segment of a column family.

• Keep your column family names as short as possible. The column family names are stored for
every value (ignoring prefix encoding). Unlike in a typical RDBMS, they do not need to be self-
documenting and descriptive.

• If you are storing time-based machine data or logging information, and the row key is based on
device ID or service ID plus time, you can end up with a pattern where older data regions never
have additional writes beyond a certain age. In this type of situation, you end up with a small
number of active regions and a large number of older regions which have no new writes. For
these situations, you can tolerate a larger number of regions because your resource
consumption is driven by the active regions only.

• If only one column family is busy with writes, only that column family accumulates memory. Be
aware of write patterns when allocating resources.

RegionServer Sizing Rules of Thumb
Lars Hofhansl wrote a great blog post about RegionServer memory sizing. The upshot is that you
probably need more memory than you think you need. He goes into the impact of region size,
memstore size, HDFS replication factor, and other things to check.

Personally I would place the maximum disk space per machine that can be
served exclusively with HBase around 6T, unless you have a very read-
heavy workload. In that case the Java heap should be 32GB (20G regions,
128M memstores, the rest defaults).

— Lars Hofhansl, http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html

Chapter 36. On the number of column
families
HBase currently does not do well with anything above two or three column families so keep the
number of column families in your schema low. Currently, flushing is done on a per Region basis so
if one column family is carrying the bulk of the data bringing on flushes, the adjacent families will
also be flushed even though the amount of data they carry is small. When many column families
exist the flushing interaction can make for a bunch of needless i/o (To be addressed by changing
flushing to work on a per column family basis). In addition, compactions triggered at table/region
level will happen per store too.

Try to make do with one column family if you can in your schemas. Only introduce a second and
third column family in the case where data access is usually column scoped; i.e. you query one
column family or the other but usually not both at the one time.

36.1. Cardinality of ColumnFamilies


Where multiple ColumnFamilies exist in a single table, be aware of the cardinality (i.e., number of
rows). If ColumnFamilyA has 1 million rows and ColumnFamilyB has 1 billion rows,
ColumnFamilyA’s data will likely be spread across many, many regions (and RegionServers). This
makes mass scans for ColumnFamilyA less efficient.

Chapter 37. Rowkey Design
37.1. Hotspotting
Rows in HBase are sorted lexicographically by row key. This design optimizes for scans, allowing
you to store related rows, or rows that will be read together, near each other. However, poorly
designed row keys are a common source of hotspotting. Hotspotting occurs when a large amount of
client traffic is directed at one node, or only a few nodes, of a cluster. This traffic may represent
reads, writes, or other operations. The traffic overwhelms the single machine responsible for
hosting that region, causing performance degradation and potentially leading to region
unavailability. This can also have adverse effects on other regions hosted by the same region server
as that host is unable to service the requested load. It is important to design data access patterns
such that the cluster is fully and evenly utilized.

To prevent hotspotting on writes, design your row keys such that rows that truly do need to be in
the same region are, but in the bigger picture, data is being written to multiple regions across the
cluster, rather than one at a time. Some common techniques for avoiding hotspotting are described
below, along with some of their advantages and drawbacks.

Salting
Salting in this sense has nothing to do with cryptography, but refers to adding random data to the
start of a row key. In this case, salting refers to adding a randomly-assigned prefix to the row key to
cause it to sort differently than it otherwise would. The number of possible prefixes corresponds to
the number of regions you want to spread the data across. Salting can be helpful if you have a few
"hot" row key patterns which come up over and over amongst other more evenly-distributed rows.
Consider the following example, which shows that salting can spread write load across multiple
RegionServers, and illustrates some of the negative implications for reads.

Example 10. Salting Example

Suppose you have the following list of row keys, and your table is split such that there is one
region for each letter of the alphabet. Prefix 'a' is one region, prefix 'b' is another. In this table,
all rows starting with 'f' are in the same region. This example focuses on rows with keys like
the following:

foo0001
foo0002
foo0003
foo0004

Now, imagine that you would like to spread these across four different regions. You decide to
use four different salts: a, b, c, and d. In this scenario, each of these letter prefixes will be on a
different region. After applying the salts, you have the following rowkeys instead. Since you
can now write to four separate regions, you theoretically have four times the throughput when
writing that you would have if all the writes were going to the same region.

a-foo0003
b-foo0001
c-foo0004
d-foo0002

Then, if you add another row, it will randomly be assigned one of the four possible salt values
and end up near one of the existing rows.

a-foo0003
b-foo0001
c-foo0003
c-foo0004
d-foo0002

Since this assignment will be random, you will need to do more work if you want to retrieve
the rows in lexicographic order. In this way, salting attempts to increase throughput on writes,
but has a cost during reads.
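
The salting step itself is small. As a sketch (the salt alphabet, key values, and class name here are illustrative, not part of any HBase API), a writer picks a random prefix from a fixed set, one prefix per target region:

```java
import java.util.Random;

public class SaltedKeyExample {
    // Hypothetical fixed salt alphabet; one prefix per region you want writes spread over.
    static final String[] SALTS = {"a", "b", "c", "d"};
    static final Random RNG = new Random();

    // Prepend a randomly chosen salt so consecutive writes land in different regions.
    static String salt(String rowKey) {
        return SALTS[RNG.nextInt(SALTS.length)] + "-" + rowKey;
    }

    public static void main(String[] args) {
        // Writes spread across 4 regions; a reader must issue one scan per
        // salt prefix to reassemble a logical range.
        System.out.println(salt("foo0003"));
    }
}
```

Because the salt is random, a reader cannot reconstruct the stored key from the logical key; range reads must fan out one scan per salt value.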

Hashing
Instead of a random assignment, you could use a one-way hash that would cause a given row to
always be "salted" with the same prefix, in a way that would spread the load across the
RegionServers, but allow for predictability during reads. Using a deterministic hash allows the
client to reconstruct the complete rowkey and use a Get operation to retrieve that row as normal.

Example 11. Hashing Example

Given the same situation in the salting example above, you could instead apply a one-way hash
that would cause the row with key foo0003 to always, and predictably, receive the a prefix.
Then, to retrieve that row, you would already know the key. You could also optimize things so
that certain pairs of keys were always in the same region, for instance.
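
A minimal sketch of a deterministic prefix (the bucket count and class name are illustrative assumptions, not HBase API): hash the key, map the hash onto a small set of buckets, and use the bucket as the prefix. Unlike a random salt, the client can recompute the prefix at read time.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashedPrefixExample {
    // Derive a stable one-character prefix from an MD5 hash of the key,
    // mapped onto a hypothetical set of region buckets.
    static String prefix(String rowKey, int numBuckets) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(rowKey.getBytes(StandardCharsets.UTF_8));
            int bucket = (digest[0] & 0xFF) % numBuckets;
            return String.valueOf((char) ('a' + bucket));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        // The same key always maps to the same prefix, so the client can
        // reconstruct the full rowkey and issue a normal Get.
        System.out.println(prefix("foo0003", 4) + "-foo0003");
    }
}
```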

Reversing the Key


A third common trick for preventing hotspotting is to reverse a fixed-width or numeric row key so
that the part that changes the most often (the least significant digit) is first. This effectively
randomizes row keys, but sacrifices row ordering properties.
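
For a fixed-width string key, the reversal itself is a one-liner (the class name is illustrative):

```java
public class ReversedKeyExample {
    // Reverse a fixed-width key so the fastest-changing character leads,
    // spreading otherwise-sequential keys across the keyspace.
    static String reverse(String fixedWidthKey) {
        return new StringBuilder(fixedWidthKey).reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("foo0001")); // "1000oof"
    }
}
```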

See https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/11/10/discussion-on-designing-hbase-tables,
the article on Salted Tables from the Phoenix project, and the discussion
in the comments of HBASE-11682 for more information about avoiding hotspotting.

37.2. Monotonically Increasing Row Keys/Timeseries Data

In the HBase chapter of Tom White’s book Hadoop: The Definitive Guide (O’Reilly) there is an
optimization note on watching out for a phenomenon where an import process walks in lock-step
with all clients in concert pounding one of the table’s regions (and thus, a single node), then moving
onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will
happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in
BigTable-like datastores: monotonically increasing values are bad. The pile-up on a single region
brought on by monotonically increasing keys can be mitigated by randomizing the input records to
not be in sorted order, but in general it’s best to avoid using a timestamp or a sequence (e.g. 1, 2, 3)
as the row-key.

If you do need to upload time series data into HBase, you should study OpenTSDB as a successful
example. It has a page describing the schema it uses in HBase. The key format in OpenTSDB is
effectively [metric_type][event_timestamp], which would appear at first glance to contradict the
previous advice about not using a timestamp as the key. However, the difference is that the
timestamp is not in the lead position of the key, and the design assumption is that there are dozens
or hundreds (or more) of different metric types. Thus, even with a continual stream of input data
with a mix of metric types, the Puts are distributed across various points of regions in the table.
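
The OpenTSDB-style layout can be sketched in plain Java (the 2-byte metric id, the field widths, and the class name are illustrative assumptions, not the exact OpenTSDB format):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class MetricKeyExample {
    // Sketch of a [metric_type][event_timestamp] key: a short metric id leads
    // and the timestamp follows, so writes spread across metric types.
    static byte[] key(short metricId, long eventTimestamp) {
        return ByteBuffer.allocate(10).putShort(metricId).putLong(eventTimestamp).array();
    }

    public static void main(String[] args) {
        byte[] a = key((short) 1, 1000L);
        byte[] b = key((short) 1, 2000L);
        byte[] c = key((short) 2, 500L);
        // Within one metric, earlier events sort first; a different metric
        // occupies a disjoint part of the keyspace regardless of timestamp.
        System.out.println(Arrays.compareUnsigned(a, b) < 0); // true
        System.out.println(Arrays.compareUnsigned(b, c) < 0); // true
    }
}
```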

See schema.casestudies for some rowkey design examples.

37.3. Try to minimize row and column sizes


In HBase, values are always freighted with their coordinates; as a cell value passes through the
system, it’ll be accompanied by its row, column name, and timestamp - always. If your rows and
column names are large, especially compared to the size of the cell value, then you may run up
against some interesting scenarios. One such is the case described by Marc Limotte at the tail of
HBASE-3551 (recommended!). Therein, the indices that are kept on HBase storefiles (StoreFile
(HFile)) to facilitate random access may end up occupying large chunks of the HBase allotted RAM
because the cell value coordinates are large. Marc, in the above-cited comment, suggests upping the
block size so entries in the store file index happen at a larger interval, or modifying the table schema
so it makes for smaller rows and column names. Compression will also make for larger indices. See
the thread a question storefileIndexSize up on the user mailing list.

Most of the time small inefficiencies don’t matter all that much. Unfortunately, this is a case where
they do. Whatever patterns are selected for ColumnFamilies, attributes, and rowkeys they could be
repeated several billion times in your data.

See keyvalue for more information on how HBase stores data internally to see why this is important.

37.3.1. Column Families

Try to keep the ColumnFamily names as small as possible, preferably one character (e.g. "d" for
data/default).

See KeyValue for more information on how HBase stores data internally to see why this is important.

37.3.2. Attributes

Although verbose attribute names (e.g., "myVeryImportantAttribute") are easier to read, prefer
shorter attribute names (e.g., "via") to store in HBase.

See keyvalue for more information on how HBase stores data internally to see why this is important.

37.3.3. Rowkey Length

Keep them as short as is reasonable such that they can still be useful for required data access (e.g.
Get vs. Scan). A short key that is useless for data access is not better than a longer key with better
get/scan properties. Expect tradeoffs when designing rowkeys.

37.3.4. Byte Patterns

A long is 8 bytes. You can store an unsigned number up to 18,446,744,073,709,551,615 in those eight
bytes. If you stored this number as a String — presuming a byte per character — you need nearly 3x
the bytes.

Not convinced? Below is some sample code that you can run on your own.

// long
//
long l = 1234567890L;
byte[] lb = Bytes.toBytes(l);
System.out.println("long bytes length: " + lb.length);              // returns 8

String s = String.valueOf(l);
byte[] sb = Bytes.toBytes(s);
System.out.println("long as string length: " + sb.length);         // returns 10

// hash
//
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] digest = md.digest(Bytes.toBytes(s));
System.out.println("md5 digest bytes length: " + digest.length);    // returns 16

String sDigest = new String(digest);
byte[] sbDigest = Bytes.toBytes(sDigest);
System.out.println("md5 digest as string length: " + sbDigest.length); // returns 26

Unfortunately, using a binary representation of a type will make your data harder to read outside
of your code. For example, this is what you will see in the shell when you increment a value:

hbase(main):001:0> incr 't', 'r', 'f:q', 1
COUNTER VALUE = 1

hbase(main):002:0> get 't', 'r'
COLUMN                CELL
 f:q                  timestamp=1369163040570, value=\x00\x00\x00\x00\x00\x00\x00\x01
1 row(s) in 0.0310 seconds

The shell makes a best effort to print a string, and in this case it decided to just print the hex. The
same will happen to your row keys inside the region names. It can be okay if you know what’s
being stored, but it might also be unreadable if arbitrary data can be put in the same cells. This is
the main trade-off.

37.4. Reverse Timestamps


Reverse Scan API

HBASE-4811 implements an API to scan a table or a range within a table in reverse,
reducing the need to optimize your schema for forward or reverse scanning. This
feature is available in HBase 0.98 and later. See Scan.setReversed() for more
information.

A common problem in database processing is quickly finding the most recent version of a value. A
technique using reverse timestamps as a part of the key can help greatly with a special case of this
problem. Also found in the HBase chapter of Tom White’s book Hadoop: The Definitive Guide
(O’Reilly), the technique involves appending (Long.MAX_VALUE - timestamp) to the end of any key, e.g.
[key][reverse_timestamp].

The most recent value for [key] in a table can be found by performing a Scan for [key] and
obtaining the first record. Since HBase keys are in sorted order, this key sorts before any older row-
keys for [key] and thus is first.

This technique would be used instead of using Number of Versions where the intent is to hold onto
all versions "forever" (or a very long time) and at the same time quickly obtain access to any other
version by using the same Scan technique.
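
A minimal sketch of the technique in plain Java (the key name, the zero-padded string rendering, and the class name are illustrative assumptions; in practice the reversed timestamp is usually appended as raw bytes):

```java
public class ReverseTimestampExample {
    // Append (Long.MAX_VALUE - timestamp) so the newest version of a key
    // sorts first and a Scan's first result is the most recent one.
    static String rowKey(String key, long timestamp) {
        // %019d keeps the reversed timestamp fixed-width so keys sort as strings.
        return key + String.format("%019d", Long.MAX_VALUE - timestamp);
    }

    public static void main(String[] args) {
        String older = rowKey("sensor42", 1_000L);
        String newer = rowKey("sensor42", 2_000L);
        // The more recent event produces the lexicographically smaller key.
        System.out.println(newer.compareTo(older) < 0); // true
    }
}
```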

37.5. Rowkeys and ColumnFamilies


Rowkeys are scoped to ColumnFamilies. Thus, the same rowkey could exist in each ColumnFamily
that exists in a table without collision.

37.6. Immutability of Rowkeys


Rowkeys cannot be changed. The only way they can be "changed" in a table is if the row is deleted
and then re-inserted. This is a fairly common question on the HBase dist-list so it pays to get the
rowkeys right the first time (and/or before you’ve inserted a lot of data).

37.7. Relationship Between RowKeys and Region Splits


If you pre-split your table, it is critical to understand how your rowkey will be distributed across
the region boundaries. As an example of why this is important, consider the example of using
displayable hex characters as the lead position of the key (e.g., "0000000000000000" to
"ffffffffffffffff"). Running those key ranges through Bytes.split (which is the split strategy used
when creating regions in Admin.createTable(byte[] startKey, byte[] endKey, numRegions)) for 10
regions will generate the following splits…

48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 // 0
54 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 // 6
61 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -68 // =
68 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -126 // D
75 75 75 75 75 75 75 75 75 75 75 75 75 75 75 72 // K
82 18 18 18 18 18 18 18 18 18 18 18 18 18 18 14 // R
88 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -44 // X
95 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -102 // _
102 102 102 102 102 102 102 102 102 102 102 102 102 102 102 102 // f

(note: the lead byte is listed to the right as a comment.) Given that the first split is a '0' and the last
split is an 'f', everything is great, right? Not so fast.

The problem is that all the data is going to pile up in the first 2 regions and the last region thus
creating a "lumpy" (and possibly "hot") region problem. To understand why, refer to an ASCII Table.
'0' is byte 48, and 'f' is byte 102, but there is a huge gap in byte values (bytes 58 to 96) that will never
appear in this keyspace because the only values are [0-9] and [a-f]. Thus, the middle regions will
never be used. To make pre-splitting work with this example keyspace, a custom definition of splits
(i.e., and not relying on the built-in split method) is required.

Lesson #1: Pre-splitting tables is generally a best practice, but you need to pre-split them in such a
way that all the regions are accessible in the keyspace. While this example demonstrated the
problem with a hex-key keyspace, the same problem can happen with any keyspace. Know your
data.

Lesson #2: While generally not advisable, using hex-keys (and more generally, displayable data) can
still work with pre-split tables as long as all the created regions are accessible in the keyspace.

To conclude this example, the following is an example of how appropriate splits can be pre-created
for hex-keys:

public static boolean createTable(Admin admin, HTableDescriptor table, byte[][] splits)
    throws IOException {
  try {
    admin.createTable(table, splits);
    return true;
  } catch (TableExistsException e) {
    logger.info("table " + table.getNameAsString() + " already exists");
    // the table already exists...
    return false;
  }
}

public static byte[][] getHexSplits(String startKey, String endKey, int numRegions) {
  byte[][] splits = new byte[numRegions-1][];
  BigInteger lowestKey = new BigInteger(startKey, 16);
  BigInteger highestKey = new BigInteger(endKey, 16);
  BigInteger range = highestKey.subtract(lowestKey);
  BigInteger regionIncrement = range.divide(BigInteger.valueOf(numRegions));
  lowestKey = lowestKey.add(regionIncrement);
  for (int i = 0; i < numRegions-1; i++) {
    BigInteger key = lowestKey.add(regionIncrement.multiply(BigInteger.valueOf(i)));
    byte[] b = String.format("%016x", key).getBytes();
    splits[i] = b;
  }
  return splits;
}

Chapter 38. Number of Versions
38.1. Maximum Number of Versions
The maximum number of row versions to store is configured per column family via
HColumnDescriptor. The default for max versions is 1. This is an important parameter because as
described in Data Model section HBase does not overwrite row values, but rather stores different
values per row by time (and qualifier). Excess versions are removed during major compactions.
The number of max versions may need to be increased or decreased depending on application
needs.

It is not recommended to set the number of max versions to an exceedingly high level (e.g.,
hundreds or more) unless those old values are very dear to you, because this will greatly increase
StoreFile size.

38.2. Minimum Number of Versions


Like maximum number of row versions, the minimum number of row versions to keep is
configured per column family via HColumnDescriptor. The default for min versions is 0, which
means the feature is disabled. The minimum number of row versions parameter is used together
with the time-to-live parameter and can be combined with the number of row versions parameter
to allow configurations such as "keep the last T minutes worth of data, at most N versions, but keep
at least M versions around" (where M is the value for minimum number of row versions, M<N). This
parameter should only be set when time-to-live is enabled for a column family and must be less
than the number of row versions.
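
As a hedged shell sketch (the table name 't1', family 'f1', and the specific numbers are placeholders), such a combined policy could be expressed like this:

```
hbase> alter 't1', NAME => 'f1', VERSIONS => 5, MIN_VERSIONS => 2, TTL => 600
```

This would keep at most 5 versions, expire cells older than 600 seconds, but always retain at least 2 versions of each cell even past the TTL.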

Chapter 39. Supported Datatypes
HBase supports a "bytes-in/bytes-out" interface via Put and Result, so anything that can be
converted to an array of bytes can be stored as a value. Input could be strings, numbers, complex
objects, or even images, as long as they can be rendered as bytes.

There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would
probably be too much to ask); search the mailing list for conversations on this topic. All rows in
HBase conform to the Data Model, and that includes versioning. Take that into consideration when
making your design, as well as block size for the ColumnFamily.

39.1. Counters
One supported datatype that deserves special mention is "counters" (i.e., the ability to do atomic
increments of numbers). See Increment in Table.

Synchronization on counters is done on the RegionServer, not in the client.

Chapter 40. Joins
If you have multiple tables, don’t forget to factor in the potential for Joins into the schema design.

Chapter 41. Time To Live (TTL)
ColumnFamilies can set a TTL length in seconds, and HBase will automatically delete rows once the
expiration time is reached. This applies to all versions of a row - even the current one. The TTL time
encoded in the HBase for the row is specified in UTC.

Store files which contain only expired rows are deleted on minor compaction. Setting
hbase.store.delete.expired.storefile to false disables this feature. Setting the minimum number of
versions to other than 0 also disables this.

See HColumnDescriptor for more information.
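
As a hedged shell sketch (the table name 't1' and family 'f1' are placeholders), a family-level TTL can be set at creation time:

```
hbase> create 't1', {NAME => 'f1', TTL => 86400}
```

Here 86400 seconds means cells in 'f1' are eligible for deletion one day after their timestamp.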

Recent versions of HBase also support setting time to live on a per cell basis. See HBASE-10560 for
more information. Cell TTLs are submitted as an attribute on mutation requests (Appends,
Increments, Puts, etc.) using Mutation#setTTL. If the TTL attribute is set, it will be applied to all cells
updated on the server by the operation. There are two notable differences between cell TTL
handling and ColumnFamily TTLs:

• Cell TTLs are expressed in units of milliseconds instead of seconds.

• A cell TTL cannot extend the effective lifetime of a cell beyond a ColumnFamily level TTL
setting.

Chapter 42. Keeping Deleted Cells
By default, delete markers extend back to the beginning of time. Therefore, Get or Scan operations
will not see a deleted cell (row or column), even when the Get or Scan operation indicates a time
range before the delete marker was placed.

ColumnFamilies can optionally keep deleted cells. In this case, deleted cells can still be retrieved, as
long as these operations specify a time range that ends before the timestamp of any delete that
would affect the cells. This allows for point-in-time queries even in the presence of deletes.

Deleted cells are still subject to TTL and there will never be more than "maximum number of
versions" deleted cells. A new "raw" scan option returns all deleted rows and the delete markers.

Change the Value of KEEP_DELETED_CELLS Using HBase Shell

hbase> alter 't1', NAME => 'f1', KEEP_DELETED_CELLS => true

Example 12. Change the Value of KEEP_DELETED_CELLS Using the API

...
HColumnDescriptor.setKeepDeletedCells(true);
...

Let us illustrate the basic effect of setting the KEEP_DELETED_CELLS attribute on a table.

First, without:

create 'test', {NAME=>'e', VERSIONS=>2147483647}
put 'test', 'r1', 'e:c1', 'value', 10
put 'test', 'r1', 'e:c1', 'value', 12
put 'test', 'r1', 'e:c1', 'value', 14
delete 'test', 'r1', 'e:c1', 11

hbase(main):017:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
 r1                column=e:c1, timestamp=10, value=value
1 row(s) in 0.0120 seconds

hbase(main):018:0> flush 'test'
0 row(s) in 0.0350 seconds

hbase(main):019:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
1 row(s) in 0.0120 seconds

hbase(main):020:0> major_compact 'test'
0 row(s) in 0.0260 seconds

hbase(main):021:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
1 row(s) in 0.0120 seconds

Notice how delete cells are let go.

Now let’s run the same test only with KEEP_DELETED_CELLS set on the table (you can do table or per-
column-family):

hbase(main):005:0> create 'test', {NAME=>'e', VERSIONS=>2147483647, KEEP_DELETED_CELLS => true}
0 row(s) in 0.2160 seconds

=> Hbase::Table - test

hbase(main):006:0> put 'test', 'r1', 'e:c1', 'value', 10
0 row(s) in 0.1070 seconds

hbase(main):007:0> put 'test', 'r1', 'e:c1', 'value', 12
0 row(s) in 0.0140 seconds

hbase(main):008:0> put 'test', 'r1', 'e:c1', 'value', 14
0 row(s) in 0.0160 seconds

hbase(main):009:0> delete 'test', 'r1', 'e:c1', 11
0 row(s) in 0.0290 seconds

hbase(main):010:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
 r1                column=e:c1, timestamp=10, value=value
1 row(s) in 0.0550 seconds

hbase(main):011:0> flush 'test'
0 row(s) in 0.2780 seconds

hbase(main):012:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
 r1                column=e:c1, timestamp=10, value=value
1 row(s) in 0.0620 seconds

hbase(main):013:0> major_compact 'test'
0 row(s) in 0.0530 seconds

hbase(main):014:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
 r1                column=e:c1, timestamp=10, value=value
1 row(s) in 0.0650 seconds

KEEP_DELETED_CELLS is to avoid removing Cells from HBase when the only reason to remove
them is the delete marker. So even with KEEP_DELETED_CELLS enabled, deleted cells will still get
removed if you write more versions than the configured max, or if you have a TTL and Cells are in
excess of the configured timeout, etc.

Chapter 43. Secondary Indexes and
Alternate Query Paths
This section could also be titled "what if my table rowkey looks like this but I also want to query my
table like that." A common example on the dist-list is where a row-key is of the format "user-
timestamp" but there are reporting requirements on activity across users for certain time ranges.
Thus, selecting by user is easy because it is in the lead position of the key, but time is not.

There is no single answer on the best way to handle this because it depends on…

• Number of users

• Data size and data arrival rate

• Flexibility of reporting requirements (e.g., completely ad-hoc date selection vs. pre-configured
ranges)

• Desired execution speed of query (e.g., 90 seconds may be reasonable to some for an ad-hoc
report, whereas it may be too long for others)

and solutions are also influenced by the size of the cluster and how much processing power you
have to throw at the solution. Common techniques are in sub-sections below. This is a
comprehensive, but not exhaustive, list of approaches.

It should not be a surprise that secondary indexes require additional cluster space and processing.
This is precisely what happens in an RDBMS because the act of creating an alternate index requires
both space and processing cycles to update. RDBMS products are more advanced in this regard to
handle alternative index management out of the box. However, HBase scales better at larger data
volumes, so this is a feature trade-off.

Pay attention to Apache HBase Performance Tuning when implementing any of these approaches.

Additionally, see the David Butler response in this dist-list thread HBase, mail # user -
Stargate+hbase

43.1. Filter Query


Depending on the case, it may be appropriate to use Client Request Filters. In this case, no
secondary index is created. However, don’t try a full-scan on a large table like this from an
application (i.e., single-threaded client).

43.2. Periodic-Update Secondary Index


A secondary index could be created in another table which is periodically updated via a
MapReduce job. The job could be executed intra-day, but depending on load-strategy it could still
potentially be out of sync with the main data table.

See mapreduce.example.readwrite for more information.

43.3. Dual-Write Secondary Index
Another strategy is to build the secondary index while publishing data to the cluster (e.g., write to
data table, write to index table). If this approach is taken after a data table already exists, then
bootstrapping will be needed for the secondary index with a MapReduce job (see
secondary.indexes.periodic).

43.4. Summary Tables


Where time-ranges are very wide (e.g., year-long report) and where the data is voluminous,
summary tables are a common approach. These would be generated with MapReduce jobs into
another table.

See mapreduce.example.summary for more information.

43.5. Coprocessor Secondary Index


Coprocessors act like RDBMS triggers. These were added in 0.92. For more information, see
coprocessors.

Chapter 44. Constraints
HBase currently supports 'constraints' in traditional (SQL) database parlance. The advised usage for
Constraints is in enforcing business rules for attributes in the table (e.g. make sure values are in the
range 1-10). Constraints could also be used to enforce referential integrity, but this is strongly
discouraged as it will dramatically decrease the write throughput of the tables where integrity
checking is enabled. Extensive documentation on using Constraints can be found at Constraint
since version 0.94.

Chapter 45. Schema Design Case Studies
The following will describe some typical data ingestion use-cases with HBase, and how the rowkey
design and construction can be approached. Note: this is just an illustration of potential
approaches, not an exhaustive list. Know your data, and know your processing requirements.

It is highly recommended that you read the rest of the HBase and Schema Design first, before
reading these case studies.

The following case studies are described:

• Log Data / Timeseries Data

• Log Data / Timeseries on Steroids

• Customer/Order

• Tall/Wide/Middle Schema Design

• List Data

45.1. Case Study - Log Data and Timeseries Data


Assume that the following data elements are being collected.

• Hostname

• Timestamp

• Log event

• Value/message

We can store them in an HBase table called LOG_DATA, but what will the rowkey be? From these
attributes the rowkey will be some combination of hostname, timestamp, and log-event - but what
specifically?

45.1.1. Timestamp In The Rowkey Lead Position

The rowkey [timestamp][hostname][log-event] suffers from the monotonically increasing rowkey
problem described in Monotonically Increasing Row Keys/Timeseries Data.

There is another pattern frequently mentioned in the dist-lists about "bucketing" timestamps, by
performing a mod operation on the timestamp. If time-oriented scans are important, this could be a
useful approach. Attention must be paid to the number of buckets, because this will require the
same number of scans to return results.

long bucket = timestamp % numBuckets;

to construct:

[bucket][timestamp][hostname][log-event]

As stated above, to select data for a particular timerange, a Scan will need to be performed for each
bucket. 100 buckets, for example, will provide a wide distribution in the keyspace but it will require
100 Scans to obtain data for a single timestamp, so there are trade-offs.
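The bucketed key construction can be sketched in plain Java, without the HBase client API. The class name, bucket count, and sample values below are illustrative only; in practice the hostname and log-event components would also be fixed-length (see the discussion of variable vs. fixed length rowkeys below).

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BucketedKey {
    // Hypothetical bucket count chosen for illustration; a time-range query
    // must issue one Scan per bucket, so 100 buckets means 100 Scans.
    static final int NUM_BUCKETS = 100;

    // Build [bucket][timestamp][hostname][log-event] as a byte array.
    static byte[] rowkey(long timestamp, String hostname, String logEvent) {
        byte bucket = (byte) (timestamp % NUM_BUCKETS);
        byte[] host = hostname.getBytes(StandardCharsets.UTF_8);
        byte[] event = logEvent.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(1 + 8 + host.length + event.length);
        buf.put(bucket).putLong(timestamp).put(host).put(event);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] key = rowkey(1600000000000L, "myserver1", "login");
        // 1 (bucket) + 8 (timestamp) + 9 ("myserver1") + 5 ("login") = 23
        System.out.println(key.length);
    }
}
```

Because the bucket byte leads the key, writes for the same instant spread across NUM_BUCKETS regions instead of hotspotting one.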

45.1.2. Host In The Rowkey Lead Position

The rowkey [hostname][log-event][timestamp] is a candidate if there is a large-ish number of hosts
to spread the writes and reads across the keyspace. This approach would be useful if scanning by
hostname was a priority.

45.1.3. Timestamp, or Reverse Timestamp?

If the most important access path is to pull the most recent events, then storing the timestamps as
reverse-timestamps (e.g., timestamp = Long.MAX_VALUE - timestamp) will create the property of being
able to do a Scan on [hostname][log-event] to obtain the most recently captured events.

Neither approach is wrong, it just depends on what is most appropriate for the situation.
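The reverse-timestamp trick can be sketched in plain Java (class name and sample values are illustrative): newer events produce smaller values, so a forward Scan returns them first under HBase's lexicographic key ordering.

```java
public class ReverseTimestamp {
    // Newer events produce smaller values, so they sort first in HBase's
    // lexicographic rowkey order.
    static long reverse(long timestamp) {
        return Long.MAX_VALUE - timestamp;
    }

    public static void main(String[] args) {
        long older = 1600000000000L;
        long newer = 1600000005000L;
        // The newer event's reversed value is smaller, so a forward Scan
        // starting at [hostname][log-event] returns it first.
        System.out.println(reverse(newer) < reverse(older));
    }
}
```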

Reverse Scan API

HBASE-4811 implements an API to scan a table, or a range within a table, in reverse, reducing the
need to optimize your schema for forward or reverse scanning. This feature is available in HBase
0.98 and later. See Scan.setReversed() for more information.

45.1.4. Variable Length or Fixed Length Rowkeys?

It is critical to remember that rowkeys are stamped on every column in HBase. If the hostname is a
and the event type is e1 then the resulting rowkey would be quite small. However, what if the
ingested hostname is myserver1.mycompany.com and the event type is
com.package1.subpackage2.subsubpackage3.ImportantService?

It might make sense to use some substitution in the rowkey. There are at least two approaches:
hashed and numeric. In the Host In The Rowkey Lead Position example, it might look like this:

Composite Rowkey With Hashes:

• [MD5 hash of hostname] = 16 bytes

• [MD5 hash of event-type] = 16 bytes

• [timestamp] = 8 bytes
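The hashed composite key can be sketched in plain Java using the JDK's MessageDigest (the class name and sample inputs are illustrative; HBase's own Bytes utility could be used instead of ByteBuffer):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashedRowkey {
    // [MD5(hostname)] + [MD5(event-type)] + [timestamp] = 16 + 16 + 8 = 40 bytes,
    // fixed-length no matter how long the raw hostname/event-type are.
    static byte[] rowkey(String hostname, String eventType, long timestamp) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] hostHash = md5.digest(hostname.getBytes(StandardCharsets.UTF_8));
            byte[] eventHash = md5.digest(eventType.getBytes(StandardCharsets.UTF_8));
            return ByteBuffer.allocate(40)
                    .put(hostHash).put(eventHash).putLong(timestamp).array();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        byte[] key = rowkey("myserver1.mycompany.com",
                "com.package1.subpackage2.subsubpackage3.ImportantService",
                1600000000000L);
        System.out.println(key.length); // always 40
    }
}
```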

Composite Rowkey With Numeric Substitution:

For this approach another lookup table would be needed in addition to LOG_DATA, called
LOG_TYPES. The rowkey of LOG_TYPES would be:

• [type] (e.g., byte indicating hostname vs. event-type)

• [bytes] variable length bytes for raw hostname or event-type.

A column for this rowkey could be a long with an assigned number, which could be obtained by
using an HBase counter.

So the resulting composite rowkey would be:

• [substituted long for hostname] = 8 bytes

• [substituted long for event type] = 8 bytes

• [timestamp] = 8 bytes

In either the Hash or Numeric substitution approach, the raw values for hostname and event-type
can be stored as columns.
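Assuming the substituted ids have already been obtained from the LOG_TYPES lookup table, the 24-byte numeric composite key can be sketched as follows (class name and sample ids are illustrative):

```java
import java.nio.ByteBuffer;

public class NumericRowkey {
    // [hostname id][event-type id][timestamp] = 8 + 8 + 8 = 24 bytes.
    // The ids would come from the LOG_TYPES lookup table, maintained with an
    // HBase counter; here they are just illustrative longs.
    static byte[] rowkey(long hostnameId, long eventTypeId, long timestamp) {
        return ByteBuffer.allocate(24)
                .putLong(hostnameId)
                .putLong(eventTypeId)
                .putLong(timestamp)
                .array();
    }

    public static void main(String[] args) {
        System.out.println(rowkey(42L, 7L, 1600000000000L).length); // 24
    }
}
```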

45.2. Case Study - Log Data and Timeseries Data on Steroids

This effectively is the OpenTSDB approach. What OpenTSDB does is re-write data and pack rows
into columns for certain time-periods. For a detailed explanation, see
http://opentsdb.net/schema.html, and Lessons Learned from OpenTSDB from HBaseCon2012.

But this is how the general concept works: data is ingested, for example, in this manner…

[hostname][log-event][timestamp1]
[hostname][log-event][timestamp2]
[hostname][log-event][timestamp3]

with separate rowkeys for each detailed event, but is re-written like this…

[hostname][log-event][timerange]

and each of the above events are converted into columns stored with a time-offset relative to the
beginning timerange (e.g., every 5 minutes). This is obviously a very advanced processing
technique, but HBase makes this possible.
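The row/column split described above can be sketched in plain Java (class name and the 5-minute range width are illustrative; OpenTSDB's actual encoding is more compact):

```java
public class TimeRangeColumns {
    // Width of each row's time range: 5 minutes, as in the example above.
    static final long RANGE_MS = 5 * 60 * 1000L;

    // The row covers [rangeStart, rangeStart + RANGE_MS); each event becomes
    // a column whose qualifier is its offset into that range.
    static long rangeStart(long timestamp) {
        return timestamp - (timestamp % RANGE_MS);
    }

    static long columnOffset(long timestamp) {
        return timestamp % RANGE_MS;
    }

    public static void main(String[] args) {
        long t = 1600000123456L;
        // rangeStart + offset reconstructs the original event timestamp.
        System.out.println(rangeStart(t) + columnOffset(t) == t);
    }
}
```

All events for one [hostname][log-event][timerange] land in a single row, so a read for that range becomes one Get instead of many.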

45.3. Case Study - Customer/Order


Assume that HBase is used to store customer and order information. There are two core record-
types being ingested: a Customer record type, and Order record type.

The Customer record type would include all the things that you’d typically expect:

• Customer number

• Customer name

• Address (e.g., city, state, zip)

• Phone numbers, etc.

The Order record type would include things like:

• Customer number

• Order number

• Sales date

• A series of nested objects for shipping locations and line-items (see Order Object Design for
details)

Assuming that the combination of customer number and sales order uniquely identify an order,
these two attributes will compose the rowkey, and specifically a composite key such as:

[customer number][order number]

for an ORDER table. However, there are more design decisions to make: are the raw values the best
choices for rowkeys?

The same design questions in the Log Data use-case confront us here. What is the keyspace of the
customer number, and what is the format (e.g., numeric? alphanumeric?)? As it is advantageous to
use fixed-length keys in HBase, as well as keys that can support a reasonable spread in the
keyspace, similar options appear:

Composite Rowkey With Hashes:

• [MD5 of customer number] = 16 bytes

• [MD5 of order number] = 16 bytes

Composite Numeric/Hash Combo Rowkey:

• [substituted long for customer number] = 8 bytes

• [MD5 of order number] = 16 bytes

45.3.1. Single Table? Multiple Tables?

A traditional design approach would have separate tables for CUSTOMER and SALES. Another
option is to pack multiple record types into a single table (e.g., CUSTOMER++).

Customer Record Type Rowkey:

• [customer-id]

• [type] = type indicating '1' for customer record type

Order Record Type Rowkey:

• [customer-id]

• [type] = type indicating '2' for order record type

• [order]

The advantage of this particular CUSTOMER++ approach is that it organizes many different record-
types by customer-id (e.g., a single scan could get you everything about that customer). The
disadvantage is that it’s not as easy to scan for a particular record-type.

45.3.2. Order Object Design

Now we need to address how to model the Order object. Assume that the class structure is as
follows:

Order
(an Order can have multiple ShippingLocations)

LineItem
(a ShippingLocation can have multiple LineItems)

There are multiple options for storing this data.

Completely Normalized

With this approach, there would be separate tables for ORDER, SHIPPING_LOCATION, and
LINE_ITEM.

The ORDER table’s rowkey was described above: schema.casestudies.custorder

The SHIPPING_LOCATION’s composite rowkey would be something like this:

• [order-rowkey]

• [shipping location number] (e.g., 1st location, 2nd, etc.)

The LINE_ITEM table’s composite rowkey would be something like this:

• [order-rowkey]

• [shipping location number] (e.g., 1st location, 2nd, etc.)

• [line item number] (e.g., 1st lineitem, 2nd, etc.)

Such a normalized model is likely to be the approach with an RDBMS, but that’s not your only
option with HBase. The downside of such an approach is that to retrieve information about any
Order, you will need:

• Get on the ORDER table for the Order

• Scan on the SHIPPING_LOCATION table for that order to get the ShippingLocation instances

• Scan on the LINE_ITEM for each ShippingLocation

Granted, this is what an RDBMS would do under the covers anyway, but since there are no joins in
HBase you’re just more aware of this fact.

Single Table With Record Types

With this approach, there would exist a single table ORDER that would contain all record types.

The Order rowkey was described above: schema.casestudies.custorder

• [order-rowkey]

• [ORDER record type]

The ShippingLocation composite rowkey would be something like this:

• [order-rowkey]

• [SHIPPING record type]

• [shipping location number] (e.g., 1st location, 2nd, etc.)

The LineItem composite rowkey would be something like this:

• [order-rowkey]

• [LINE record type]

• [shipping location number] (e.g., 1st location, 2nd, etc.)

• [line item number] (e.g., 1st lineitem, 2nd, etc.)

Denormalized

A variant of the Single Table With Record Types approach is to denormalize and flatten some of the
object hierarchy, such as collapsing the ShippingLocation attributes onto each LineItem instance.

The LineItem composite rowkey would be something like this:

• [order-rowkey]

• [LINE record type]

• [line item number] (e.g., 1st lineitem, 2nd, etc.; care must be taken that these are unique across
the entire order)

and the LineItem columns would be something like this:

• itemNumber

• quantity

• price

• shipToLine1 (denormalized from ShippingLocation)

• shipToLine2 (denormalized from ShippingLocation)

• shipToCity (denormalized from ShippingLocation)

• shipToState (denormalized from ShippingLocation)

• shipToZip (denormalized from ShippingLocation)

The pros of this approach include a less complex object hierarchy, but one of the cons is that
updating gets more complicated in case any of this information changes.

Object BLOB

With this approach, the entire Order object graph is treated, in one way or another, as a BLOB. For
example, the ORDER table’s rowkey was described above: schema.casestudies.custorder, and a
single column called "order" would contain an object that could be deserialized into a container
holding the Order, ShippingLocations, and LineItems.

There are many options here: JSON, XML, Java Serialization, Avro, Hadoop Writables, etc. All of
them are variants of the same approach: encode the object graph to a byte-array. Care should be
taken with this approach to ensure backward compatibility in case the object model changes such
that older persisted structures can still be read back out of HBase.

Pros are being able to manage complex object graphs with minimal I/O (e.g., a single HBase Get per
Order in this example), but the cons include the aforementioned warning about backward
compatibility of serialization, language dependencies of serialization (e.g., Java Serialization only
works with Java clients), the fact that you have to deserialize the entire object to get any piece of
information inside the BLOB, and the difficulty in getting frameworks like Hive to work with
custom objects like this.

45.4. Case Study - "Tall/Wide/Middle" Schema Design Smackdown

This section will describe additional schema design questions that appear on the dist-list,
specifically about tall and wide tables. These are general guidelines and not laws - each application
must consider its own needs.

45.4.1. Rows vs. Versions

A common question is whether one should prefer rows or HBase’s built-in-versioning. The context
is typically where there are "a lot" of versions of a row to be retained (e.g., where it is significantly
above the HBase default of 1 max versions). The rows-approach would require storing a timestamp
in some portion of the rowkey so that they would not overwrite with each successive update.

Preference: Rows (generally speaking).

45.4.2. Rows vs. Columns

Another common question is whether one should prefer rows or columns. The context is typically
in extreme cases of wide tables, such as having 1 row with 1 million attributes, or 1 million rows
with 1 column apiece.

Preference: Rows (generally speaking). To be clear, this guideline applies to extremely wide cases,
not to the standard use-case where one needs to store a few dozen or a few hundred columns.
But there is also a middle path between these two options, and that is "Rows as Columns."

45.4.3. Rows as Columns

The middle path between Rows vs. Columns is packing data that would be a separate row into
columns, for certain rows. OpenTSDB is the best example of this case where a single row represents
a defined time-range, and then discrete events are treated as columns. This approach is often more
complex, and may require the additional complexity of re-writing your data, but has the advantage
of being I/O efficient. For an overview of this approach, see schema.casestudies.log-steroids.

45.5. Case Study - List Data


The following is an exchange from the user dist-list regarding a fairly common question: how to
handle per-user list data in Apache HBase.

QUESTION:

We’re looking at how to store a large amount of (per-user) list data in HBase, and we were trying to
figure out what kind of access pattern made the most sense. One option is to store the majority of
the data in a key, so we could have something like:

<FixedWidthUserName><FixedWidthValueId1>:"" (no value)
<FixedWidthUserName><FixedWidthValueId2>:"" (no value)
<FixedWidthUserName><FixedWidthValueId3>:"" (no value)

The other option we had was to do this entirely using:

<FixedWidthUserName><FixedWidthPageNum0>:<FixedWidthLength><FixedIdNextPageNum><ValueId1><ValueId2><ValueId3>...
<FixedWidthUserName><FixedWidthPageNum1>:<FixedWidthLength><FixedIdNextPageNum><ValueId1><ValueId2><ValueId3>...

where each row would contain multiple values. So in one case reading the first thirty values would
be:

scan { STARTROW => 'FixedWidthUsername' LIMIT => 30}

And in the second case it would be

get 'FixedWidthUserName\x00\x00\x00\x00'

The general usage pattern would be to read only the first 30 values of these lists, with infrequent
access reading deeper into the lists. Some users would have <= 30 total values in these lists, and
some users would have millions (i.e., a power-law distribution).

The single-value format seems like it would take up more space on HBase, but would offer some
improved retrieval / pagination flexibility. Would there be any significant performance advantages

to be able to paginate via gets vs paginating with scans?

My initial understanding was that doing a scan should be faster if our paging size is unknown (and
caching is set appropriately), but that gets should be faster if we’ll always need the same page size.
I’ve ended up hearing different people tell me opposite things about performance. I assume the
page sizes would be relatively consistent, so for most use cases we could guarantee that we only
wanted one page of data in the fixed-page-length case. I would also assume that we would have
infrequent updates, but may have inserts into the middle of these lists (meaning we’d need to
update all subsequent rows).

Thanks for help / suggestions / follow-up questions.

ANSWER:

If I understand you correctly, you’re ultimately trying to store triples in the form "user, valueid,
value", right? E.g., something like:

"user123, firstname, Paul",
"user234, lastname, Smith"

(But the usernames are fixed width, and the valueids are fixed width).

And, your access pattern is along the lines of: "for user X, list the next 30 values, starting with
valueid Y". Is that right? And these values should be returned sorted by valueid?

The tl;dr version is that you should probably go with one row per user+value, and not build a
complicated intra-row pagination scheme on your own unless you’re really sure it is needed.

Your two options mirror a common question people have when designing HBase schemas: should I
go "tall" or "wide"? Your first schema is "tall": each row represents one value for one user, and so
there are many rows in the table for each user; the row key is user + valueid, and there would be
(presumably) a single column qualifier that means "the value". This is great if you want to scan over
rows in sorted order by row key (thus my question above, about whether these ids are sorted
correctly). You can start a scan at any user+valueid, read the next 30, and be done. What you’re
giving up is the ability to have transactional guarantees around all the rows for one user, but it
doesn’t sound like you need that. Doing it this way is generally recommended (see
https://hbase.apache.org/book.html#schema.smackdown).

Your second option is "wide": you store a bunch of values in one row, using different qualifiers
(where the qualifier is the valueid). The simple way to do that would be to just store ALL values for
one user in a single row. I’m guessing you jumped to the "paginated" version because you’re
assuming that storing millions of columns in a single row would be bad for performance, which
may or may not be true; as long as you’re not trying to do too much in a single request, or do things
like scanning over and returning all of the cells in the row, it shouldn’t be fundamentally worse.
The client has methods that allow you to get specific slices of columns.

Note that neither case fundamentally uses more disk space than the other; you’re just "shifting"
part of the identifying information for a value either to the left (into the row key, in option one) or
to the right (into the column qualifiers in option 2). Under the covers, every key/value still stores

the whole row key, and column family name. (If this is a bit confusing, take an hour and watch Lars
George’s excellent video about understanding HBase schema design:
https://www.youtube.com/watch?v=_HLoH_PgrLk).

A manually paginated version has lots more complexities, as you note, like having to keep track of
how many things are in each page, re-shuffling if new values are inserted, etc. That seems
significantly more complex. It might have some slight speed advantages (or disadvantages!) at
extremely high throughput, and the only way to really know that would be to try it out. If you don’t
have time to build it both ways and compare, my advice would be to start with the simplest option
(one row per user+value). Start simple and iterate! :)

Chapter 46. Operational and Performance Configuration Options
46.1. Tune HBase Server RPC Handling
• Set hbase.regionserver.handler.count (in hbase-site.xml) to cores x spindles for concurrency.

• Optionally, split the call queues into separate read and write queues for differentiated service.
The parameter hbase.ipc.server.callqueue.handler.factor specifies the number of call queues:

◦ 0 means a single shared queue

◦ 1 means one queue for each handler.

◦ A value between 0 and 1 allocates the number of queues proportionally to the number of
handlers. For instance, a value of .5 shares one queue between each two handlers.

• Use hbase.ipc.server.callqueue.read.ratio (hbase.ipc.server.callqueue.read.share in 0.98) to
split the call queues into read and write queues:

◦ 0.5 means there will be the same number of read and write queues

◦ < 0.5 for more read than write

◦ > 0.5 for more write than read

• Set hbase.ipc.server.callqueue.scan.ratio (HBase 1.0+) to split read call queues into small-read
and long-read queues:

◦ 0.5 means that there will be the same number of short-read and long-read queues

◦ < 0.5 for more short-read

◦ > 0.5 for more long-read
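The hbase.ipc.server.callqueue.handler.factor arithmetic above can be sketched in plain Java (the exact rounding HBase uses is an implementation detail, so treat this as an illustration, not the authoritative formula):

```java
public class CallQueueSizing {
    // Illustrative sizing: the number of call queues is roughly
    // handlers * factor, with at least one queue.
    static int numQueues(int handlerCount, float factor) {
        return Math.max(1, Math.round(handlerCount * factor));
    }

    public static void main(String[] args) {
        System.out.println(numQueues(30, 0f));   // 1  (single shared queue)
        System.out.println(numQueues(30, 1f));   // 30 (one queue per handler)
        System.out.println(numQueues(30, 0.5f)); // 15 (two handlers per queue)
    }
}
```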

46.2. Disable Nagle for RPC


Disable Nagle’s algorithm. Delayed ACKs can add up to ~200ms to RPC round trip time. Set the
following parameters:

• In Hadoop’s core-site.xml:

◦ ipc.server.tcpnodelay = true

◦ ipc.client.tcpnodelay = true

• In HBase’s hbase-site.xml:

◦ hbase.ipc.client.tcpnodelay = true

◦ hbase.ipc.server.tcpnodelay = true
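As a sketch, the HBase side of this change in hbase-site.xml would look like the following (property names as listed above; the Hadoop core-site.xml entries take the same `<property>` form):

```xml
<!-- hbase-site.xml: disable Nagle's algorithm for HBase RPC -->
<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>
<property>
  <name>hbase.ipc.server.tcpnodelay</name>
  <value>true</value>
</property>
```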

46.3. Limit Server Failure Impact


Detect regionserver failure as fast as reasonable. Set the following parameters:

• In hbase-site.xml, set zookeeper.session.timeout to 30 seconds or less to bound failure detection
(20-30 seconds is a good start).

◦ Note: Zookeeper clients negotiate a session timeout with the server during client init. Server
enforces this timeout to be in the range [minSessionTimeout, maxSessionTimeout] and both
these timeouts (measured in milliseconds) are configurable in Zookeeper service
configuration. If not configured, these default to 2 * tickTime and 20 * tickTime respectively
(tickTime is the basic time unit used by ZooKeeper, as measured in milliseconds. It is used to
regulate heartbeats, timeouts etc.). Refer to Zookeeper documentation for additional details.

• Detect and avoid unhealthy or failed HDFS DataNodes: in hdfs-site.xml and hbase-site.xml, set
the following parameters:

◦ dfs.namenode.avoid.read.stale.datanode = true

◦ dfs.namenode.avoid.write.stale.datanode = true

46.4. Optimize on the Server Side for Low Latency


Skip the network for local blocks when the RegionServer goes to read from HDFS by exploiting
HDFS’s Short-Circuit Local Reads facility. Note that setup must be done on both the datanode and
dfsclient ends of the connection (i.e., at the RegionServer), and that both ends need to have loaded
the hadoop native .so library. After configuring your hadoop setting dfs.client.read.shortcircuit
to true, configuring the dfs.domain.socket.path path for the datanode and dfsclient to share, and
restarting, next configure the regionserver/dfsclient side.

• In hbase-site.xml, set the following parameters:

◦ dfs.client.read.shortcircuit = true

◦ dfs.client.read.shortcircuit.skip.checksum = true so we don’t double checksum (HBase
does its own checksumming to save on i/o; see hbase.regionserver.checksum.verify for
more on this).

◦ dfs.domain.socket.path to match what was set for the datanodes.

◦ dfs.client.read.shortcircuit.buffer.size = 131072. Important to avoid OOME; HBase has a
default it uses if unset (see hbase.dfs.client.read.shortcircuit.buffer.size), and that
default is 131072.

• Ensure data locality. In hbase-site.xml, set hbase.hstore.min.locality.to.skip.major.compact =
0.7 (Meaning that 0.7 <= n <= 1)

• Make sure DataNodes have enough handlers for block transfers. In hdfs-site.xml, set the
following parameters:

◦ dfs.datanode.max.xcievers >= 8192

◦ dfs.datanode.handler.count = number of spindles

Check the RegionServer logs after restart. You should only see complaints if there is a
misconfiguration. Otherwise, short-circuit reads operate quietly in the background, and read
latencies should show a marked improvement, especially if there is good data locality, lots of
random reads, and the dataset is larger than the available cache.

Other advanced configurations that you might play with, especially if shortcircuit functionality is
complaining in the logs, include dfs.client.read.shortcircuit.streams.cache.size and
dfs.client.socketcache.capacity. Documentation is sparse on these options. You’ll have to read
source code.

RegionServer metric system exposes HDFS short circuit read metrics shortCircuitBytesRead. Other
HDFS read metrics, including totalBytesRead (The total number of bytes read from HDFS),
localBytesRead (The number of bytes read from the local HDFS DataNode), zeroCopyBytesRead (The
number of bytes read through HDFS zero copy) are available and can be used to troubleshoot short-
circuit read issues.

For more on short-circuit reads, see Colin’s old blog on rollout, How Improved Short-Circuit Local
Reads Bring Better Performance and Security to Hadoop. The HDFS-347 issue also makes for an
interesting read showing the HDFS community at its best (caveat a few comments).

46.5. JVM Tuning


46.5.1. Tune JVM GC for low collection latencies

• Use the CMS collector: -XX:+UseConcMarkSweepGC

• Keep eden space as small as possible to minimize average collection time, optimizing for low
collection latency rather than throughput. Example: -Xmn512m

• Collect eden in parallel: -XX:+UseParNewGC

• Avoid collection under pressure by starting CMS early: -XX:CMSInitiatingOccupancyFraction=70
together with -XX:+UseCMSInitiatingOccupancyOnly

• Limit per-request scanner result sizing so everything fits into survivor space but doesn’t tenure.
In hbase-site.xml, set hbase.client.scanner.max.result.size to 1/8th of eden space (with -Xmn512m
this is ~51MB)

• Set max.result.size x handler.count less than survivor space
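The arithmetic behind the ~51MB figure can be sketched as follows, assuming the HotSpot default -XX:SurvivorRatio=8 (under which eden is 8/10 of the young generation); the class name is illustrative:

```java
public class ScannerResultSizing {
    public static void main(String[] args) {
        long youngGenBytes = 512L * 1024 * 1024;  // -Xmn512m
        int survivorRatio = 8;                    // HotSpot default: eden = 8 * survivor
        // Young gen = eden + 2 survivor spaces, so eden = young * ratio / (ratio + 2).
        long edenBytes = youngGenBytes * survivorRatio / (survivorRatio + 2);
        long maxResultSize = edenBytes / 8;       // 1/8th of eden
        System.out.println(maxResultSize / (1024 * 1024) + "MB"); // ~51MB
    }
}
```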

46.5.2. OS-Level Tuning

• Turn transparent huge pages (THP) off:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

• Set vm.swappiness = 0

• Set vm.min_free_kbytes to at least 1GB (8GB on larger memory systems)

• Disable NUMA zone reclaim with vm.zone_reclaim_mode = 0

Chapter 47. Special Cases
47.1. For applications where failing quickly is better than waiting
• In hbase-site.xml on the client side, set the following parameters:

◦ Set hbase.client.pause = 1000

◦ Set hbase.client.retries.number = 3

◦ If you want to ride over splits and region moves, increase hbase.client.retries.number
substantially (>= 20)

◦ Set the RecoverableZookeeper retry count: zookeeper.recovery.retry = 1 (no retry)

• In hbase-site.xml on the server side, set the Zookeeper session timeout for detecting server
failures: zookeeper.session.timeout <= 30 seconds (20-30 is good).

47.2. For applications that can tolerate slightly out-of-date information
HBase timeline consistency (HBASE-10070): with read replicas enabled, read-only copies of
regions (replicas) are distributed over the cluster. One RegionServer services the default or primary
replica, which is the only replica that can service writes. Other RegionServers serve the secondary
replicas, follow the primary RegionServer, and only see committed updates. The secondary replicas
are read-only, but can serve reads immediately while the primary is failing over, cutting read
availability blips from seconds to milliseconds. Phoenix supports timeline consistency as of 4.4.0.

Tips:

• Deploy HBase 1.0.0 or later.

• Enable timeline consistent replicas on the server side.

• Use one of the following methods to set timeline consistency:

◦ Use ALTER SESSION SET CONSISTENCY = 'TIMELINE'

◦ Set the connection property Consistency to timeline in the JDBC connect string

47.3. More Information


See the Performance section perf.schema for more information about operational and performance
schema design options, such as Bloom Filters, Table-configured regionsizes, compression, and
blocksizes.

HBase and MapReduce
Apache MapReduce is a software framework used to analyze large amounts of data. It is provided
by Apache Hadoop. MapReduce itself is out of the scope of this document. A good place to get
started with MapReduce is https://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html.
MapReduce version 2 (MR2) is now part of YARN.

This chapter discusses specific configuration steps you need to take to use MapReduce on data
within HBase. In addition, it discusses other interactions and issues between HBase and MapReduce
jobs. Finally, it discusses Cascading, an alternative API for MapReduce.

mapred and mapreduce


There are two mapreduce packages in HBase as in MapReduce itself:
org.apache.hadoop.hbase.mapred and org.apache.hadoop.hbase.mapreduce. The
former does old-style API and the latter the new mode. The latter has more facility,
though you can usually find an equivalent in the older package. Pick the package
that goes with your MapReduce deploy. When in doubt or starting over, pick
org.apache.hadoop.hbase.mapreduce. In the notes below, we refer to
o.a.h.h.mapreduce, but replace with o.a.h.h.mapred if that is what you are using.

Chapter 48. HBase, MapReduce, and the CLASSPATH
By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either the HBase
configuration under $HBASE_CONF_DIR or the HBase classes.

To give the MapReduce jobs the access they need, you could add hbase-site.xml to
$HADOOP_HOME/conf and add HBase jars to the $HADOOP_HOME/lib directory. You would then
need to copy these changes across your cluster. Or you could edit $HADOOP_HOME/conf/hadoop-
env.sh and add HBase dependencies to the HADOOP_CLASSPATH variable. Neither of these approaches is
recommended because it will pollute your Hadoop install with HBase references. It also requires
you to restart the Hadoop cluster before Hadoop can use the HBase data.

The recommended approach is to let HBase add its dependency jars and use HADOOP_CLASSPATH or
-libjars.

Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration itself. The
dependencies only need to be available on the local CLASSPATH, and from there they’ll be picked up
and bundled into the fat job jar deployed to the MapReduce cluster. A basic trick just passes the full
HBase classpath (all HBase and dependent jars as well as configurations) to the MapReduce job
runner, letting the HBase utility pick out from the full classpath what it needs and add those to the
MapReduce job configuration (see the source at
TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job) for how this is done).

The following example runs the bundled HBase RowCounter MapReduce job against a table named
usertable. It sets into HADOOP_CLASSPATH the jars HBase needs to run in a MapReduce context
(including configuration files such as hbase-site.xml). Be sure to use the correct version of the
HBase JAR for your system; replace the VERSION string in the below command line with the version
of your local HBase install. The backticks (` symbols) cause the shell to execute the sub-commands,
setting the output of hbase classpath into HADOOP_CLASSPATH. This example assumes you use a BASH-
compatible shell.

$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-VERSION.jar \
org.apache.hadoop.hbase.mapreduce.RowCounter usertable

The above command will launch a row counting MapReduce job against the HBase cluster pointed
to by your local configuration, on the MapReduce cluster that the Hadoop configs point to.

The main for the hbase-mapreduce.jar is a Driver that lists a few basic mapreduce tasks that ship
with hbase. For example, presuming your install is hbase 2.0.0-SNAPSHOT:

$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar
An example program must be given as the first argument.
Valid program names are:
CellCounter: Count cells in HBase table.
WALPlayer: Replay WAL files.
completebulkload: Complete a bulk data load.
copytable: Export a table from local cluster to peer cluster.
export: Write table data to HDFS.
exportsnapshot: Export the specific snapshot to a given FileSystem.
import: Import data written by Export.
importtsv: Import data in TSV format.
rowcounter: Count rows in HBase table.
verifyrep: Compare the data from tables in two different clusters. WARNING: It
doesn't work for incrementColumnValues'd cells since the timestamp is changed after
being appended to the log.

You can use the above listed shortnames for mapreduce jobs as in the below re-run of the row
counter job (again, presuming your install is hbase 2.0.0-SNAPSHOT):

$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar \
rowcounter usertable

You might find the more selective hbase mapredcp tool output of interest; it lists the minimum set of
jars needed to run a basic mapreduce job against an HBase install. It does not include configuration.
You’ll probably need to add the configuration if you want your MapReduce job to find the target
cluster. You’ll probably have to also add pointers to extra jars once you start to do anything of
substance. Just specify the extras by passing the system property -Dtmpjars when you run hbase
mapredcp.

For jobs that do not package their dependencies or call TableMapReduceUtil#addDependencyJars, the
following command structure is necessary:

$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`:${HBASE_HOME}/conf hadoop jar \
MyApp.jar MyJobMainClass -libjars $(${HBASE_HOME}/bin/hbase mapredcp | tr ':' ',') ...

The example may not work if you are running HBase from its build directory
rather than an installed location. You may see an error like the following:

java.lang.RuntimeException: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper

If this occurs, try modifying the command as follows, so that it uses the HBase JARs
from the target/ directory within the build environment.

$ HADOOP_CLASSPATH=${HBASE_BUILD_HOME}/hbase-mapreduce/target/hbase-mapreduce-VERSION-SNAPSHOT.jar:`${HBASE_BUILD_HOME}/bin/hbase classpath` \
  ${HADOOP_HOME}/bin/hadoop jar ${HBASE_BUILD_HOME}/hbase-mapreduce/target/hbase-mapreduce-VERSION-SNAPSHOT.jar \
  rowcounter usertable

Notice to MapReduce users of HBase between 0.96.1 and 0.98.4

Some MapReduce jobs that use HBase fail to launch. The symptom is an exception
similar to the following:

Exception in thread "main" java.lang.IllegalAccessError: class
com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass
com.google.protobuf.LiteralByteString
  at java.lang.ClassLoader.defineClass1(Native Method)
  at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
  at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
  at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
  at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
  at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
  at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
  at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
  at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
  ...

This is caused by an optimization introduced in HBASE-9867 that inadvertently
introduced a classloader dependency.

This affects both jobs using the -libjars option and "fat jar" jobs, those which
package their runtime dependencies in a nested lib folder.

In order to satisfy the new classloader requirements, hbase-protocol.jar must be
included in Hadoop’s classpath. See HBase, MapReduce, and the CLASSPATH for
current recommendations for resolving classpath errors. The following is included
for historical purposes.

This can be resolved system-wide by including a reference to the hbase-protocol.jar
in Hadoop’s lib directory, via a symlink or by copying the jar into the new location.

This can also be achieved on a per-job launch basis by including it in the
HADOOP_CLASSPATH environment variable at job submission time. When launching
jobs that package their dependencies, all three of the following job launching
commands satisfy this requirement:

$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass

For jars that do not package their dependencies, the following command structure
is necessary:

$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar \
  MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...

See also HBASE-10304 for further discussion of this issue.

Chapter 49. MapReduce Scan Caching
TableMapReduceUtil now restores the option to set scanner caching (the number of rows which are
cached before returning the result to the client) on the Scan object that is passed in. This
functionality was lost due to a bug in HBase 0.95 (HBASE-11558), which is fixed for HBase 0.98.5 and
0.96.3. The priority order for choosing the scanner caching is as follows:

1. Caching settings which are set on the scan object.

2. Caching settings which are specified via the configuration option hbase.client.scanner.caching,
which can either be set manually in hbase-site.xml or via the helper method
TableMapReduceUtil.setScannerCaching().

3. The default value HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING, which is set to 100.
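The priority order above can be sketched as a small resolution function. This is a self-contained illustration, not the actual client code; effectiveCaching stands in for logic that in HBase is spread across the Scan object, the configuration, and HConstants.

```java
// Standalone sketch of the scanner-caching priority order described above.
public class ScannerCachingPriority {
    // Mirrors HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING
    static final int DEFAULT_SCANNER_CACHING = 100;

    /**
     * @param scanCaching   caching set on the Scan object, or -1 if unset
     * @param configCaching hbase.client.scanner.caching from the config, or -1 if unset
     */
    static int effectiveCaching(int scanCaching, int configCaching) {
        if (scanCaching > 0) return scanCaching;      // 1. Scan object wins
        if (configCaching > 0) return configCaching;  // 2. then the configuration
        return DEFAULT_SCANNER_CACHING;               // 3. then the default (100)
    }

    public static void main(String[] args) {
        System.out.println(effectiveCaching(500, 200)); // 500
        System.out.println(effectiveCaching(-1, 200));  // 200
        System.out.println(effectiveCaching(-1, -1));   // 100
    }
}
```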

Optimizing the caching settings is a balance between the time the client waits for a result and the
number of sets of results the client needs to receive. If the caching setting is too large, the client
could end up waiting for a long time or the request could even time out. If the setting is too small,
the scan needs to return results in several pieces. If you think of the scan as a shovel, a bigger cache
setting is analogous to a bigger shovel, and a smaller cache setting is equivalent to more shoveling
in order to fill the bucket.

The list of priorities mentioned above allows you to set a reasonable default, and override it for
specific operations.

See the API documentation for Scan for more details.

Chapter 50. Bundled HBase MapReduce Jobs
The HBase JAR also serves as a Driver for some bundled MapReduce jobs. To learn about the
bundled MapReduce jobs, run the following command.

$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-mapreduce-VERSION.jar


An example program must be given as the first argument.
Valid program names are:
copytable: Export a table from local cluster to peer cluster
completebulkload: Complete a bulk data load.
export: Write table data to HDFS.
import: Import data written by Export.
importtsv: Import data in TSV format.
rowcounter: Count rows in HBase table

Each of the valid program names is a bundled MapReduce job. To run one of the jobs, model your
command after the following example.

$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-mapreduce-VERSION.jar rowcounter myTable

Chapter 51. HBase as a MapReduce Job Data
Source and Data Sink
HBase can be used as a data source, TableInputFormat, and data sink, TableOutputFormat or
MultiTableOutputFormat, for MapReduce jobs. When writing MapReduce jobs that read or write HBase, it
is advisable to subclass TableMapper and/or TableReducer. See the do-nothing pass-through classes
IdentityTableMapper and IdentityTableReducer for basic usage. For a more involved example, see
RowCounter or review the org.apache.hadoop.hbase.mapreduce.TestTableMapReduce unit test.

If you run MapReduce jobs that use HBase as a source or sink, you need to specify the source and sink
table and column names in your configuration.

When you read from HBase, the TableInputFormat requests the list of regions from HBase and
makes one map task per region, or mapreduce.job.maps map tasks, whichever is smaller. If
your job only has two maps, raise mapreduce.job.maps to a number greater than the number of
regions. Maps will run on the adjacent TaskTracker/NodeManager if you are running a
TaskTracker/NodeManager and RegionServer per node. When writing to HBase, it may make sense
to avoid the Reduce step and write back into HBase from within your map. This approach works
when your job does not need the sort and collation that MapReduce does on the map-emitted data.
On insert, HBase 'sorts' so there is no point double-sorting (and shuffling data around your
MapReduce cluster) unless you need to. If you do not need the Reduce, your map might emit counts
of records processed for reporting at the end of the job, or set the number of Reduces to zero and
use TableOutputFormat. If running the Reduce step makes sense in your case, you should typically
use multiple reducers so that load is spread across the HBase cluster.

A new HBase partitioner, the HRegionPartitioner, can run as many reducers as there are existing
regions. The HRegionPartitioner is suitable when your table is large and your upload will not
greatly alter the number of existing regions upon completion. Otherwise use the default partitioner.

Chapter 52. Writing HFiles Directly During
Bulk Import
If you are importing into a new table, you can bypass the HBase API and write your content directly
to the filesystem, formatted into HBase data files (HFiles). Your import will run faster, perhaps an
order of magnitude faster. For more on how this mechanism works, see Bulk Loading.

Chapter 53. RowCounter Example
The included RowCounter MapReduce job uses TableInputFormat and does a count of all rows in the
specified table. To run it, use the following command:

$ ./bin/hadoop jar hbase-X.X.X.jar

This will invoke the HBase MapReduce Driver class. Select rowcounter from the choice of jobs
offered. This will print rowcounter usage advice to standard output. Specify the tablename, column
to count, and output directory. If you have classpath errors, see HBase, MapReduce, and the
CLASSPATH.

Chapter 54. Map-Task Splitting
54.1. The Default HBase MapReduce Splitter
When TableInputFormat is used to source an HBase table in a MapReduce job, its splitter will make
a map task for each region of the table. Thus, if there are 100 regions in the table, there will be 100
map-tasks for the job - regardless of how many column families are selected in the Scan.

54.2. Custom Splitters


For those interested in implementing custom splitters, see the method getSplits in
TableInputFormatBase. That is where the logic for map-task assignment resides.

Chapter 55. HBase MapReduce Examples
55.1. HBase MapReduce Read Example
The following is an example of using HBase as a MapReduce source in a read-only manner.
Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from the
Mapper. The job would be defined as follows…

Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleRead");
job.setJarByClass(MyReadJob.class);   // class that contains mapper

Scan scan = new Scan();
scan.setCaching(500);         // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);   // don't set to true for MR jobs
// set other scan attrs
...

TableMapReduceUtil.initTableMapperJob(
  tableName,        // input HBase table name
  scan,             // Scan instance to control CF and attribute selection
  MyMapper.class,   // mapper
  null,             // mapper output key
  null,             // mapper output value
  job);
job.setOutputFormatClass(NullOutputFormat.class);   // because we aren't emitting anything from mapper

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

…and the mapper instance would extend TableMapper…

public static class MyMapper extends TableMapper<Text, Text> {

  public void map(ImmutableBytesWritable row, Result value, Context context)
      throws InterruptedException, IOException {
    // process data for the row from the Result instance.
  }
}

55.2. HBase MapReduce Read/Write Example
The following is an example of using HBase both as a source and as a sink with MapReduce. This
example will simply copy data from one table to another.

Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleReadWrite");
job.setJarByClass(MyReadWriteJob.class);   // class that contains mapper

Scan scan = new Scan();
scan.setCaching(500);         // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);   // don't set to true for MR jobs
// set other scan attrs

TableMapReduceUtil.initTableMapperJob(
  sourceTable,      // input table
  scan,             // Scan instance to control CF and attribute selection
  MyMapper.class,   // mapper class
  null,             // mapper output key
  null,             // mapper output value
  job);
TableMapReduceUtil.initTableReducerJob(
  targetTable,      // output table
  null,             // reducer class
  job);
job.setNumReduceTasks(0);

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

An explanation is required of what TableMapReduceUtil is doing, especially with the reducer.
TableOutputFormat is being used as the outputFormat class, and several parameters are being set
on the config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key to
ImmutableBytesWritable and reducer value to Writable. These could be set by the programmer on the
job and conf, but TableMapReduceUtil tries to make things easier.

The following is the example mapper, which will create a Put matching the input Result and
emit it. Note: this is what the CopyTable utility does.

public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put> {

  public void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // this example is just copying the data from the source table...
    context.write(row, resultToPut(row, value));
  }

  private static Put resultToPut(ImmutableBytesWritable key, Result result)
      throws IOException {
    Put put = new Put(key.get());
    for (Cell cell : result.listCells()) {
      put.add(cell);
    }
    return put;
  }
}

There isn’t actually a reducer step, so TableOutputFormat takes care of sending the Put to the target
table.

This is just an example; developers could choose not to use TableOutputFormat and connect to the
target table themselves.

55.3. HBase MapReduce Read/Write Example With Multi-Table Output

TODO: example for MultiTableOutputFormat.

55.4. HBase MapReduce Summary to HBase Example


The following example uses HBase as a MapReduce source and sink with a summarization step.
This example will count the number of distinct instances of a value in a table and write those
summarized counts in another table.

Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleSummary");
job.setJarByClass(MySummaryJob.class);   // class that contains mapper and reducer

Scan scan = new Scan();
scan.setCaching(500);         // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);   // don't set to true for MR jobs
// set other scan attrs

TableMapReduceUtil.initTableMapperJob(
  sourceTable,        // input table
  scan,               // Scan instance to control CF and attribute selection
  MyMapper.class,     // mapper class
  Text.class,         // mapper output key
  IntWritable.class,  // mapper output value
  job);
TableMapReduceUtil.initTableReducerJob(
  targetTable,            // output table
  MyTableReducer.class,   // reducer class
  job);
job.setNumReduceTasks(1);   // at least one, adjust as required

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

In this example mapper a column with a String-value is chosen as the value to summarize upon.
This value is used as the key to emit from the mapper, and an IntWritable represents an instance
counter.

public static class MyMapper extends TableMapper<Text, IntWritable> {
  public static final byte[] CF = "cf".getBytes();
  public static final byte[] ATTR1 = "attr1".getBytes();

  private final IntWritable ONE = new IntWritable(1);
  private Text text = new Text();

  public void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    String val = new String(value.getValue(CF, ATTR1));
    text.set(val);   // we can only emit Writables...
    context.write(text, ONE);
  }
}

In the reducer, the "ones" are counted (just like any other MR example that does this), and then a
Put is emitted.

public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
  public static final byte[] CF = "cf".getBytes();
  public static final byte[] COUNT = "count".getBytes();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int i = 0;
    for (IntWritable val : values) {
      i += val.get();
    }
    Put put = new Put(Bytes.toBytes(key.toString()));
    put.add(CF, COUNT, Bytes.toBytes(i));

    context.write(null, put);
  }
}

55.5. HBase MapReduce Summary to File Example


This is very similar to the summary example above, with the exception that this example uses HBase
as a MapReduce source but HDFS as the sink. The differences are in the job setup and in the reducer.
The mapper remains the same.

Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleSummaryToFile");
job.setJarByClass(MySummaryFileJob.class);   // class that contains mapper and reducer

Scan scan = new Scan();
scan.setCaching(500);         // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);   // don't set to true for MR jobs
// set other scan attrs

TableMapReduceUtil.initTableMapperJob(
  sourceTable,        // input table
  scan,               // Scan instance to control CF and attribute selection
  MyMapper.class,     // mapper class
  Text.class,         // mapper output key
  IntWritable.class,  // mapper output value
  job);
job.setReducerClass(MyReducer.class);   // reducer class
job.setNumReduceTasks(1);               // at least one, adjust as required
FileOutputFormat.setOutputPath(job, new Path("/tmp/mr/mySummaryFile"));   // adjust directories as required

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

As stated above, the previous Mapper can run unchanged with this example. As for the Reducer, it
is a "generic" Reducer instead of extending TableReducer and emitting Puts.

public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int i = 0;
    for (IntWritable val : values) {
      i += val.get();
    }
    context.write(key, new IntWritable(i));
  }
}

55.6. HBase MapReduce Summary to HBase Without Reducer

It is also possible to perform summaries without a reducer - if you use HBase as the reducer.

An HBase target table would need to exist for the job summary. The Table method
incrementColumnValue would be used to atomically increment values. From a performance
perspective, it might make sense to keep a Map of values with their counts to be incremented for
each map-task, and make one update per key during the cleanup method of the mapper.
However, your mileage may vary depending on the number of rows to be processed and unique
keys.

In the end, the summary results are in HBase.
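The buffering pattern described above can be sketched as follows. CounterSink is a hypothetical stand-in for Table#incrementColumnValue; in a real mapper, add would be called from map() and flush from cleanup(Context).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-task buffering of increments: many map() calls, one
// update per distinct key at cleanup time.
public class BufferedIncrements {
    // Hypothetical interface standing in for one HBase increment RPC.
    interface CounterSink {
        void increment(String key, long delta);
    }

    private final Map<String, Long> buffer = new HashMap<>();

    // Called once per record in map(); no RPC is issued here.
    void add(String key) {
        buffer.merge(key, 1L, Long::sum);
    }

    // Called once per task in cleanup(); one update per distinct key.
    void flush(CounterSink sink) {
        buffer.forEach(sink::increment);
        buffer.clear();
    }
}
```

The trade-off noted in the text applies: with few rows or mostly-unique keys, the buffer saves little; with many repeated keys it collapses many increments into one update each.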

55.7. HBase MapReduce Summary to RDBMS


Sometimes it is more appropriate to generate summaries to an RDBMS. For these cases, it is possible
to generate summaries directly to an RDBMS via a custom reducer. The setup method can connect
to an RDBMS (the connection information can be passed via custom parameters in the context) and
the cleanup method can close the connection.

It is critical to understand that the number of reducers for the job affects the summarization
implementation, and you’ll have to design this into your reducer. Specifically, whether it is
designed to run as a singleton (one reducer) or multiple reducers. Neither is right or wrong; it
depends on your use-case. Recognize that the more reducers that are assigned to the job, the more
simultaneous connections to the RDBMS will be created - this will scale, but only to a point.

public static class MyRdbmsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private Connection c = null;

  public void setup(Context context) {
    // create DB connection...
  }

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    // do summarization
    // in this example the keys are Text, but this is just an example
  }

  public void cleanup(Context context) {
    // close db connection
  }
}

In the end, the summary results are written to your RDBMS table/s.

Chapter 56. Accessing Other HBase Tables in
a MapReduce Job
Although the framework currently allows one HBase table as input to a MapReduce job, other
HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating a Table
instance in the setup method of the Mapper.

public class MyMapper extends TableMapper<Text, LongWritable> {

  private Table myOtherTable;

  public void setup(Context context) {
    // In here create a Connection to the cluster and save it or use the Connection
    // from the existing table
    myOtherTable = connection.getTable(TableName.valueOf("myOtherTable"));
  }

  public void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // process Result...
    // use 'myOtherTable' for lookups
  }
}

Chapter 57. Speculative Execution
It is generally advisable to turn off speculative execution for MapReduce jobs that use HBase as a
source. This can either be done on a per-Job basis through properties, or on the entire cluster.
Especially for longer running jobs, speculative execution will create duplicate map-tasks which will
double-write your data to HBase; this is probably not what you want.

See spec.ex for more information.
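Per job, speculative execution can be disabled through the standard Hadoop properties. The property names below are the MRv2 names (older Hadoop versions use mapred.map.tasks.speculative.execution and its reduce counterpart); adjust for your Hadoop version:

<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>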

Chapter 58. Cascading
Cascading is an alternative API for MapReduce, which actually uses MapReduce, but allows you to
write your MapReduce code in a simplified way.

The following example shows a Cascading Flow which "sinks" data into an HBase cluster. The same
hBaseTap API could be used to "source" data as well.

// read data from the default filesystem
// emits two fields: "offset" and "line"
Tap source = new Hfs( new TextLine(), inputFileLhs );

// store data in an HBase cluster
// accepts fields "num", "lower", and "upper"
// will automatically scope incoming fields to their proper familyname, "left" or "right"
Fields keyFields = new Fields( "num" );
String[] familyNames = {"left", "right"};
Fields[] valueFields = new Fields[] {new Fields( "lower" ), new Fields( "upper" ) };
Tap hBaseTap = new HBaseTap( "multitable", new HBaseScheme( keyFields, familyNames, valueFields ), SinkMode.REPLACE );

// a simple pipe assembly to parse the input into fields
// a real app would likely chain multiple Pipes together for more complex processing
Pipe parsePipe = new Each( "insert", new Fields( "line" ), new RegexSplitter( new Fields( "num", "lower", "upper" ), " " ) );

// "plan" a cluster executable Flow
// this connects the source Tap and hBaseTap (the sink Tap) to the parsePipe
Flow parseFlow = new FlowConnector( properties ).connect( source, hBaseTap, parsePipe );

// start the flow, and block until complete
parseFlow.complete();

// open an iterator on the HBase table we stuffed data into
TupleEntryIterator iterator = parseFlow.openSink();

while(iterator.hasNext())
{
  // print out each tuple from HBase
  System.out.println( "iterator.next() = " + iterator.next() );
}

iterator.close();

Securing Apache HBase
Reporting Security Bugs

To protect existing HBase installations from exploitation, please do not use JIRA to
report security-related bugs. Instead, send your report to the mailing list
[email protected], which allows anyone to send messages, but restricts who can read
them. Someone on that list will contact you to follow up on your report.

HBase adheres to the Apache Software Foundation’s policy on reported
vulnerabilities, available at http://apache.org/security/.

If you wish to send an encrypted report, you can use the GPG details provided for
the general ASF security list. This will likely increase the response time to your
report.

Chapter 59. Web UI Security
HBase provides mechanisms to secure various components and aspects of HBase and how it relates
to the rest of the Hadoop infrastructure, as well as clients and resources outside Hadoop.

59.1. Using Secure HTTP (HTTPS) for the Web UI


A default HBase install uses insecure HTTP connections for Web UIs for the master and region
servers. To enable secure HTTP (HTTPS) connections instead, set hbase.ssl.enabled to true in
hbase-site.xml. This does not change the port used by the Web UI. To change the port for the web UI
for a given HBase component, configure that port’s setting in hbase-site.xml. These settings are:

• hbase.master.info.port

• hbase.regionserver.info.port
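A minimal hbase-site.xml sketch tying these settings together; the port values shown are the HBase defaults, included only to illustrate where a change would go:

<property>
  <name>hbase.ssl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.info.port</name>
  <value>16010</value>
</property>
<property>
  <name>hbase.regionserver.info.port</name>
  <value>16030</value>
</property>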

If you enable secure HTTP, clients should avoid the non-secure HTTP connection and
connect to HBase using the https:// URL. Clients using the http:// URL will receive an
HTTP response of 200, but will not receive any data. The following exception is logged:

javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?

This is because the same port is used for HTTP and HTTPS.

HBase uses Jetty for the Web UI. Without modifying Jetty itself, it does not seem
possible to configure Jetty to redirect one port to another on the same host. See
Nick Dimiduk’s contribution on this Stack Overflow thread for more information.
If you know how to fix this without opening a second port for HTTPS, patches are
appreciated.

59.2. Using SPNEGO for Kerberos authentication with Web UIs

Kerberos authentication to HBase Web UIs can be enabled by configuring SPNEGO with the
hbase.security.authentication.ui property in hbase-site.xml. Enabling this authentication requires
that HBase is also configured to use Kerberos authentication for RPCs (e.g.,
hbase.security.authentication = kerberos).

<property>
<name>hbase.security.authentication.ui</name>
<value>kerberos</value>
<description>Controls what kind of authentication should be used for the HBase web
UIs.</description>
</property>
<property>
<name>hbase.security.authentication</name>
<value>kerberos</value>
<description>The authentication mechanism for HBase RPCs; must be set to kerberos
for SPNEGO authentication of the web UIs to work.</description>
</property>

A number of properties exist to configure SPNEGO authentication for the web server:

<property>
<name>hbase.security.authentication.spnego.kerberos.principal</name>
<value>HTTP/[email protected]</value>
<description>Required for SPNEGO, the Kerberos principal to use for SPNEGO
authentication by the
web server. The _HOST keyword will be automatically substituted with the node's
hostname.</description>
</property>
<property>
<name>hbase.security.authentication.spnego.kerberos.keytab</name>
<value>/etc/security/keytabs/spnego.service.keytab</value>
<description>Required for SPNEGO, the Kerberos keytab file to use for SPNEGO
authentication by the
web server.</description>
</property>
<property>
<name>hbase.security.authentication.spnego.kerberos.name.rules</name>
<value></value>
<description>Optional, Hadoop-style `auth_to_local` rules which will be parsed and
used in the
handling of Kerberos principals</description>
</property>
<property>
<name>hbase.security.authentication.signature.secret.file</name>
<value></value>
<description>Optional, a file whose contents will be used as a secret to sign the
HTTP cookies
as a part of the SPNEGO authentication handshake. If this is not provided, Java's
`Random` library
will be used for the secret.</description>
</property>

59.3. Defining administrators of the Web UI
In the previous section, we cover how to enable authentication for the Web UI via SPNEGO.
However, some portions of the Web UI could be used to impact the availability and performance of
an HBase cluster. As such, it is desirable to ensure that only those with proper authority can
interact with these sensitive endpoints.

HBase allows administrators to be defined via a list of usernames or groups in hbase-site.xml:

<property>
<name>hbase.security.authentication.spnego.admin.users</name>
<value></value>
</property>
<property>
<name>hbase.security.authentication.spnego.admin.groups</name>
<value></value>
</property>

The usernames are those which the Kerberos identity maps to, given the Hadoop auth_to_local
rules in core-site.xml. The groups here are the Unix groups associated with the mapped usernames.

Consider the following scenario to describe how the configuration properties operate. Consider
three users which are defined in the Kerberos KDC:

[email protected]

[email protected]

[email protected]

The default Hadoop auth_to_local rules map these principals to the "shortname":

• alice

• bob

• charlie

Unix groups membership define that alice is a member of the group admins. bob and charlie are not
members of the admins group.

<property>
<name>hbase.security.authentication.spnego.admin.users</name>
<value>charlie</value>
</property>
<property>
<name>hbase.security.authentication.spnego.admin.groups</name>
<value>admins</value>
</property>

Given the above configuration, alice is allowed to access sensitive endpoints in the Web UI as she is
a member of the admins group. charlie is also allowed to access sensitive endpoints because he is
explicitly listed as an admin in the configuration. bob is not allowed to access sensitive endpoints
because he is not a member of the admins group nor is listed as an explicit admin user via
hbase.security.authentication.spnego.admin.users, but can still use any non-sensitive endpoints in
the Web UI.
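The check described in this scenario can be sketched as follows. This is a standalone illustration of the rule, not HBase's actual implementation: a user is an administrator if explicitly listed in admin.users, or if any of their Unix groups appears in admin.groups.

```java
import java.util.Set;

// Standalone sketch of the Web UI admin check described above.
public class WebUiAdminCheck {
    static boolean isAdmin(String user, Set<String> userGroups,
                           Set<String> adminUsers, Set<String> adminGroups) {
        if (adminUsers.contains(user)) {
            return true;   // explicitly listed admin user
        }
        for (String g : userGroups) {
            if (adminGroups.contains(g)) {
                return true;   // member of an admin group
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> adminUsers = Set.of("charlie");
        Set<String> adminGroups = Set.of("admins");
        // alice via group, charlie via explicit listing, bob denied
        System.out.println(isAdmin("alice", Set.of("admins"), adminUsers, adminGroups));   // true
        System.out.println(isAdmin("charlie", Set.of(), adminUsers, adminGroups));         // true
        System.out.println(isAdmin("bob", Set.of("users"), adminUsers, adminGroups));      // false
    }
}
```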

It should go without saying, but: non-authenticated users cannot access any part of the Web UI.

59.4. Other UI security-related configuration

While it is a clear anti-pattern, the HBase developers acknowledge that the HBase
configuration (including Hadoop configuration files) may contain sensitive information. As such, a
user may find that they do not want to expose the HBase service-level configuration to all
authenticated users. They may configure HBase to require that a user be an admin to access the
service-level configuration via the HBase UI. This configuration is false by default (any
authenticated user may access the configuration).

Users who wish to change this would set the following in their hbase-site.xml:

<property>
<name>hbase.security.authentication.ui.config.protected</name>
<value>true</value>
</property>

Chapter 60. Secure Client Access to Apache
HBase
Newer releases of Apache HBase (>= 0.92) support optional SASL authentication of clients. See also
Matteo Bertozzi’s article on Understanding User Authentication and Authorization in Apache
HBase.

This describes how to set up Apache HBase and clients for connection to secure HBase resources.

60.1. Prerequisites
Hadoop Authentication Configuration
To run HBase RPC with strong authentication, you must set hbase.security.authentication to
kerberos. In this case, you must also set hadoop.security.authentication to kerberos in
core-site.xml. Otherwise, you would be using strong authentication for HBase but not for the
underlying HDFS, which would cancel out any benefit.

Kerberos KDC
You need to have a working Kerberos KDC.

60.2. Server-side Configuration for Secure Operation


First, refer to security.prerequisites and ensure that your underlying HDFS configuration is secure.

Add the following to the hbase-site.xml file on every server machine in the cluster:

<property>
<name>hbase.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>

A full shutdown and restart of HBase service is required when deploying these configuration
changes.

60.3. Client-side Configuration for Secure Operation


First, refer to Prerequisites and ensure that your underlying HDFS configuration is secure.

Add the following to the hbase-site.xml file on every client:

<property>
<name>hbase.security.authentication</name>
<value>kerberos</value>
</property>

Before version 2.2.0, the client environment must be logged in to Kerberos from KDC or keytab via
the kinit command before communication with the HBase cluster is possible.

Since 2.2.0, the client can specify the following configurations in hbase-site.xml:

<property>
<name>hbase.client.keytab.file</name>
<value>/local/path/to/client/keytab</value>
</property>

<property>
<name>hbase.client.keytab.principal</name>
<value>[email protected]</value>
</property>

Then the application can automatically perform login and credential renewal without client
intervention.

This is an optional feature; a client upgrading to 2.2.0 can keep the login and credential renewal
logic it already used in older versions, as long as hbase.client.keytab.file and
hbase.client.keytab.principal remain unset.

Be advised that if the hbase.security.authentication in the client- and server-side site files do not
match, the client will not be able to communicate with the cluster.

Once HBase is configured for secure RPC it is possible to optionally configure encrypted
communication. To do so, add the following to the hbase-site.xml file on every client:

<property>
<name>hbase.rpc.protection</name>
<value>privacy</value>
</property>

This configuration property can also be set on a per-connection basis. Set it in the Configuration
supplied to Table:

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.rpc.protection", "privacy");
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf(tablename))) {
  // ... do your stuff
}

Expect a ~10% performance penalty for encrypted communication.

60.4. Client-side Configuration for Secure Operation - Thrift Gateway

Add the following to the hbase-site.xml file for every Thrift gateway:

<property>
<name>hbase.thrift.keytab.file</name>
<value>/etc/hbase/conf/hbase.keytab</value>
</property>
<property>
<name>hbase.thrift.kerberos.principal</name>
<value>$USER/[email protected]</value>
<!-- TODO: This may need to be HTTP/_HOST@<REALM> and _HOST may not work.
You may have to put the concrete full hostname.
-->
</property>
<!-- Add these if you need to configure a different DNS interface from the default -->
<property>
<name>hbase.thrift.dns.interface</name>
<value>default</value>
</property>
<property>
<name>hbase.thrift.dns.nameserver</name>
<value>default</value>
</property>

Substitute the appropriate credential and keytab for $USER and $KEYTAB respectively.

In order to use the Thrift API principal to interact with HBase, it is also necessary to add the
hbase.thrift.kerberos.principal to the acl table. For example, to give the Thrift API principal,
thrift_server, administrative access, a command such as this one will suffice:

grant 'thrift_server', 'RWCA'

For more information about ACLs, please see the Access Control Labels (ACLs) section.

The Thrift gateway will authenticate with HBase using the supplied credential. No authentication
will be performed by the Thrift gateway itself. All client access via the Thrift gateway will use the
Thrift gateway’s credential and have its privilege.

60.5. Configure the Thrift Gateway to Authenticate on Behalf of the Client

Client-side Configuration for Secure Operation - Thrift Gateway describes how to authenticate a
Thrift client to HBase using a fixed user. As an alternative, you can configure the Thrift gateway to
authenticate to HBase on the client’s behalf, and to access HBase using a proxy user. This was
implemented in HBASE-11349 for Thrift 1, and HBASE-11474 for Thrift 2.

Limitations with Thrift Framed Transport

 If you use framed transport, you cannot yet take advantage of this feature, because
SASL does not work with Thrift framed transport at this time.

To enable it, do the following.

1. Be sure Thrift is running in secure mode, by following the procedure described in Client-side
Configuration for Secure Operation - Thrift Gateway.

2. Be sure that HBase is configured to allow proxy users, as described in REST Gateway
Impersonation Configuration.

3. In hbase-site.xml for each cluster node running a Thrift gateway, set the property
hbase.thrift.security.qop to one of the following three values:

◦ privacy - authentication, integrity, and confidentiality checking.

◦ integrity - authentication and integrity checking

◦ authentication - authentication checking only

4. Restart the Thrift gateway processes for the changes to take effect. If a node is running Thrift,
the output of the jps command will list a ThriftServer process. To stop Thrift on a node, run the
command bin/hbase-daemon.sh stop thrift. To start Thrift on a node, run the command
bin/hbase-daemon.sh start thrift.
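The check and restart commands described in step 4 can be run as follows (paths are relative to the HBase installation directory):

```shell
jps | grep ThriftServer           # confirm whether a Thrift gateway is running on this node
bin/hbase-daemon.sh stop thrift   # stop the Thrift gateway
bin/hbase-daemon.sh start thrift  # restart it so the new hbase.thrift.security.qop value takes effect
```

These must be run on each node that hosts a Thrift gateway.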

60.6. Configure the Thrift Gateway to Use the doAs Feature

Configure the Thrift Gateway to Authenticate on Behalf of the Client describes how to configure the
Thrift gateway to authenticate to HBase on the client’s behalf, and to access HBase using a proxy
user. The limitation of this approach is that after the client is initialized with a particular set of
credentials, it cannot change these credentials during the session. The doAs feature provides a
flexible way to impersonate multiple principals using the same client. This feature was
implemented in HBASE-12640 for Thrift 1, but is currently not available for Thrift 2.

To enable the doAs feature, add the following to the hbase-site.xml file for every Thrift gateway:

<property>
<name>hbase.regionserver.thrift.http</name>
<value>true</value>
</property>
<property>
<name>hbase.thrift.support.proxyuser</name>
<value>true</value>
</property>

To allow proxy users when using doAs impersonation, add the following to the hbase-site.xml file
for every HBase node:

<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hadoop.proxyuser.$USER.groups</name>
<value>$GROUPS</value>
</property>
<property>
<name>hadoop.proxyuser.$USER.hosts</name>
<value>$HOSTS</value>
</property>

Take a look at the demo client to get an overall idea of how to use this feature in your client.

60.7. Client-side Configuration for Secure Operation - REST Gateway

Add the following to the hbase-site.xml file for every REST gateway:

<property>
<name>hbase.rest.keytab.file</name>
<value>$KEYTAB</value>
</property>
<property>
<name>hbase.rest.kerberos.principal</name>
<value>$USER/[email protected]</value>
</property>

Substitute the appropriate credential and keytab for $USER and $KEYTAB respectively.

The REST gateway will authenticate with HBase using the supplied credential.

In order to use the REST API principal to interact with HBase, it is also necessary to add the hbase.rest.kerberos.principal to the acl table. For example, to give the REST API principal, rest_server, administrative access, a command such as this one will suffice:

grant 'rest_server', 'RWCA'

For more information about ACLs, please see the Access Control Labels (ACLs) section.

The HBase REST gateway supports SPNEGO HTTP authentication for client access to the gateway. To enable REST gateway Kerberos authentication for client access, add the following to the hbase-site.xml file for every REST gateway.

<property>
<name>hbase.rest.support.proxyuser</name>
<value>true</value>
</property>
<property>
<name>hbase.rest.authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.rest.authentication.kerberos.principal</name>
<value>HTTP/[email protected]</value>
</property>
<property>
<name>hbase.rest.authentication.kerberos.keytab</name>
<value>$KEYTAB</value>
</property>
<!-- Add these if you need to configure a different DNS interface from the default -->
<property>
<name>hbase.rest.dns.interface</name>
<value>default</value>
</property>
<property>
<name>hbase.rest.dns.nameserver</name>
<value>default</value>
</property>

Substitute the keytab for HTTP for $KEYTAB.

The HBase REST gateway supports different values for 'hbase.rest.authentication.type': simple and kerberos. You can also implement custom authentication by implementing the Hadoop AuthenticationHandler interface, then specifying the full class name as the 'hbase.rest.authentication.type' value. For more information, refer to SPNEGO HTTP authentication.
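For a custom handler, the wiring might look like this sketch (the class name org.example.rest.MyAuthenticationHandler is a placeholder, not a real class):

```xml
<property>
  <name>hbase.rest.authentication.type</name>
  <!-- Placeholder: substitute the fully qualified name of your AuthenticationHandler implementation -->
  <value>org.example.rest.MyAuthenticationHandler</value>
</property>
```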

60.8. REST Gateway Impersonation Configuration


By default, the REST gateway doesn’t support impersonation. It accesses HBase on behalf of clients as the user configured in the previous section. To the HBase server, all requests come from the REST gateway user, and the actual users are unknown. You can turn on impersonation support. With impersonation, the REST gateway user is a proxy user, and the HBase server knows the actual/real user of each request, so it can apply proper authorizations.

To turn on REST gateway impersonation, you need to configure the HBase servers (masters and region servers) to allow proxy users, and configure the REST gateway to enable impersonation.

To allow proxy users, add the following to the hbase-site.xml file for every HBase server:

<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hadoop.proxyuser.$USER.groups</name>
<value>$GROUPS</value>
</property>
<property>
<name>hadoop.proxyuser.$USER.hosts</name>
<value>$HOSTS</value>
</property>

Substitute the REST gateway proxy user for $USER, the allowed group list for $GROUPS, and the allowed host list for $HOSTS.

To enable REST gateway impersonation, add the following to the hbase-site.xml file for every REST
gateway.

<property>
<name>hbase.rest.authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.rest.authentication.kerberos.principal</name>
<value>HTTP/[email protected]</value>
</property>
<property>
<name>hbase.rest.authentication.kerberos.keytab</name>
<value>$KEYTAB</value>
</property>

Substitute the keytab for HTTP for $KEYTAB.

Chapter 61. Simple User Access to Apache HBase
Newer releases of Apache HBase (>= 0.92) support optional SASL authentication of clients. See also
Matteo Bertozzi’s article on Understanding User Authentication and Authorization in Apache
HBase.

This describes how to set up Apache HBase and clients for simple user access to HBase resources.

61.1. Simple versus Secure Access


The following section shows how to set up simple user access. Simple user access is not a secure method of operating HBase. This method is used to prevent users from making mistakes. It can be used to mimic Access Control on a development system without having to set up Kerberos.

This method is not used to prevent malicious or hacking attempts. To make HBase secure against
these types of attacks, you must configure HBase for secure operation. Refer to the section Secure
Client Access to Apache HBase and complete all of the steps described there.

61.2. Prerequisites
None

61.3. Server-side Configuration for Simple User Access Operation

Add the following to the hbase-site.xml file on every server machine in the cluster:

<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

For 0.94, add the following to the hbase-site.xml file on every server machine in the cluster:

<property>
<name>hbase.rpc.engine</name>
<value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

A full shutdown and restart of HBase service is required when deploying these configuration
changes.

61.4. Client-side Configuration for Simple User Access Operation

Add the following to the hbase-site.xml file on every client:

<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>

For 0.94, add the following to the hbase-site.xml file on every server machine in the cluster:

<property>
<name>hbase.rpc.engine</name>
<value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>

Be advised that if the hbase.security.authentication in the client- and server-side site files do not
match, the client will not be able to communicate with the cluster.

61.4.1. Client-side Configuration for Simple User Access Operation - Thrift Gateway

The Thrift gateway user will need access. For example, to give the Thrift API user, thrift_server,
administrative access, a command such as this one will suffice:

grant 'thrift_server', 'RWCA'

For more information about ACLs, please see the Access Control Labels (ACLs) section.

The Thrift gateway will authenticate with HBase using the supplied credential. No authentication
will be performed by the Thrift gateway itself. All client access via the Thrift gateway will use the
Thrift gateway’s credential and have its privilege.

61.4.2. Client-side Configuration for Simple User Access Operation - REST Gateway

The REST gateway will authenticate with HBase using the supplied credential. No authentication
will be performed by the REST gateway itself. All client access via the REST gateway will use the
REST gateway’s credential and have its privilege.

The REST gateway user will need access. For example, to give the REST API user, rest_server,
administrative access, a command such as this one will suffice:

grant 'rest_server', 'RWCA'

For more information about ACLs, please see the Access Control Labels (ACLs) section.

It should be possible for clients to authenticate with the HBase cluster through the REST gateway in
a pass-through manner via SPNEGO HTTP authentication. This is future work.

Chapter 62. Securing Access to HDFS and ZooKeeper
Secure HBase requires secure ZooKeeper and HDFS so that users cannot access and/or modify the
metadata and data from under HBase. HBase uses HDFS (or configured file system) to keep its data
files as well as write ahead logs (WALs) and other data. HBase uses ZooKeeper to store some
metadata for operations (master address, table locks, recovery state, etc).

62.1. Securing ZooKeeper Data


ZooKeeper has a pluggable authentication mechanism to enable access from clients using different
methods. ZooKeeper even allows authenticated and un-authenticated clients at the same time. The
access to znodes can be restricted by providing Access Control Lists (ACLs) per znode. An ACL
contains two components, the authentication method and the principal. ACLs are NOT enforced
hierarchically. See ZooKeeper Programmers Guide for details.

HBase daemons authenticate to ZooKeeper via SASL and kerberos (See SASL Authentication with
ZooKeeper). HBase sets up the znode ACLs so that only the HBase user and the configured hbase
superuser (hbase.superuser) can access and modify the data. In cases where ZooKeeper is used for
service discovery or sharing state with the client, the znodes created by HBase will also allow
anyone (regardless of authentication) to read these znodes (clusterId, master address, meta
location, etc), but only the HBase user can modify them.

62.2. Securing File System (HDFS) Data


All of the data under management is kept under the root directory in the file system (
hbase.rootdir). Access to the data and WAL files in the filesystem should be restricted so that users
cannot bypass the HBase layer, and peek at the underlying data files from the file system. HBase
assumes the filesystem used (HDFS or other) enforces permissions hierarchically. If sufficient
protection from the file system (both authorization and authentication) is not provided, HBase level
authorization control (ACLs, visibility labels, etc) is meaningless since the user can always access
the data from the file system.

HBase enforces posix-like permissions 700 (rwx------) on its root directory. This means that only the HBase user can read or write the files in the FS. The default setting can be changed by configuring hbase.rootdir.perms in hbase-site.xml. A restart of the active master is needed for the permission change to take effect. For versions before 1.2.0, you can check whether HBASE-13780 is committed, and if not, you can manually set the permissions for the root directory if needed. Using HDFS, the command would be:

sudo -u hdfs hadoop fs -chmod 700 /hbase

You should change /hbase if you are using a different hbase.rootdir.

In secure mode, SecureBulkLoadEndpoint should be configured and used to properly hand off files created by users from MR jobs to the HBase daemons and HBase user. The staging directory in the distributed file system used for bulk load (hbase.bulkload.staging.dir, defaults to /tmp/hbase-staging) should have mode 711 (rwx--x--x) so that users can access the staging directory created under that parent directory, but cannot perform any other operation. See Secure Bulk Load for how to configure SecureBulkLoadEndPoint.
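Assuming the default staging directory, the corresponding HDFS command might be (adjust the path if hbase.bulkload.staging.dir is customized):

```shell
sudo -u hdfs hadoop fs -chmod 711 /tmp/hbase-staging
```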

Chapter 63. Securing Access To Your Data
After you have configured secure authentication between HBase client and server processes and
gateways, you need to consider the security of your data itself. HBase provides several strategies for
securing your data:

• Role-based Access Control (RBAC) controls which users or groups can read and write to a given
HBase resource or execute a coprocessor endpoint, using the familiar paradigm of roles.

• Visibility Labels which allow you to label cells and control access to labelled cells, to further
restrict who can read or write to certain subsets of your data. Visibility labels are stored as tags.
See hbase.tags for more information.

• Transparent encryption of data at rest on the underlying filesystem, both in HFiles and in the
WAL. This protects your data at rest from an attacker who has access to the underlying
filesystem, without the need to change the implementation of the client. It can also protect
against data leakage from improperly disposed disks, which can be important for legal and
regulatory compliance.

Server-side configuration, administration, and implementation details of each of these features are
discussed below, along with any performance trade-offs. An example security configuration is given
at the end, to show these features all used together, as they might be in a real-world scenario.

 All aspects of security in HBase are in active development and evolving rapidly. Any strategy you employ for security of your data should be thoroughly tested. In addition, some of these features are still in the experimental stage of development. To take advantage of many of these features, you must be running HBase 0.98+ and using the HFile v3 file format.

Protecting Sensitive Files

 Several procedures in this section require you to copy files between cluster nodes. When copying keys, configuration files, or other files containing sensitive strings, use a secure method, such as ssh, to avoid leaking sensitive data.

Procedure: Basic Server-Side Configuration


1. Enable HFile v3, by setting hfile.format.version to 3 in hbase-site.xml. This is the default for
HBase 1.0 and newer.

<property>
<name>hfile.format.version</name>
<value>3</value>
</property>

2. Enable SASL and Kerberos authentication for RPC and ZooKeeper, as described in
security.prerequisites and SASL Authentication with ZooKeeper.

63.1. Tags
Tags are a feature of HFile v3. A tag is a piece of metadata which is part of a cell, separate from the
key, value, and version. Tags are an implementation detail which provides a foundation for other
security-related features such as cell-level ACLs and visibility labels. Tags are stored in the HFiles
themselves. It is possible that in the future, tags will be used to implement other HBase features.
You don’t need to know a lot about tags in order to use the security features they enable.

63.1.1. Implementation Details

Every cell can have zero or more tags. Every tag has a type and the actual tag byte array.

Just as row keys, column families, qualifiers and values can be encoded (see data.block.encoding.types), tags can also be encoded. You can enable or disable tag encoding at the level of the column family, and it is enabled by default. Use the HColumnDescriptor#setCompressionTags(boolean compressTags) method to manage encoding settings on a column family. You also need to enable the DataBlockEncoder for the column family, for encoding of tags to take effect.

You can enable compression of each tag in the WAL, if WAL compression is also enabled, by setting
the value of hbase.regionserver.wal.tags.enablecompression to true in hbase-site.xml. Tag
compression uses dictionary encoding.

Coprocessors that run server-side on RegionServers can perform get and set operations on cell Tags.
Tags are stripped out at the RPC layer before the read response is sent back, so clients do not see
these tags. Tag compression is not supported when using WAL encryption.

63.2. Access Control Labels (ACLs)


63.2.1. How It Works

ACLs in HBase are based upon a user’s membership in or exclusion from groups, and a given
group’s permissions to access a given resource. ACLs are implemented as a coprocessor called
AccessController.

HBase does not maintain a private group mapping, but relies on a Hadoop group mapper, which
maps between entities in a directory such as LDAP or Active Directory, and HBase users. Any
supported Hadoop group mapper will work. Users are then granted specific permissions (Read,
Write, Execute, Create, Admin) against resources (global, namespaces, tables, cells, or endpoints).

 With Kerberos and Access Control enabled, client access to HBase is authenticated and user data is private unless access has been explicitly granted.

HBase has a simpler security model than relational databases, especially in terms of client
operations. No distinction is made between an insert (new record) and update (of existing record),
for example, as both collapse down into a Put.

Understanding Access Levels

HBase access levels are granted independently of each other and allow for different types of
operations at a given scope.

• Read (R) - can read data at the given scope

• Write (W) - can write data at the given scope

• Execute (X) - can execute coprocessor endpoints at the given scope

• Create (C) - can create tables or drop tables (even those they did not create) at the given scope

• Admin (A) - can perform cluster operations such as balancing the cluster or assigning regions at
the given scope

The possible scopes are:

• Superuser - superusers can perform any operation available in HBase, to any resource. The user
who runs HBase on your cluster is a superuser, as are any principals assigned to the
configuration property hbase.superuser in hbase-site.xml on the HMaster.

• Global - permissions granted at global scope allow the admin to operate on all tables of the
cluster.

• Namespace - permissions granted at namespace scope apply to all tables within a given
namespace.

• Table - permissions granted at table scope apply to data or metadata within a given table.

• ColumnFamily - permissions granted at ColumnFamily scope apply to cells within that ColumnFamily.

• Cell - permissions granted at cell scope apply to that exact cell coordinate (key, value,
timestamp). This allows for policy evolution along with data.

To change an ACL on a specific cell, write an updated cell with new ACL to the precise
coordinates of the original.

If you have a multi-versioned schema and want to update ACLs on all visible versions, you need
to write new cells for all visible versions. The application has complete control over policy
evolution.

The exception to the above rule is append and increment processing. Appends and increments
can carry an ACL in the operation. If one is included in the operation, then it will be applied to
the result of the append or increment. Otherwise, the ACL of the existing cell you are appending to
or incrementing is preserved.

The combination of access levels and scopes creates a matrix of possible access levels that can be
granted to a user. In a production environment, it is useful to think of access levels in terms of what
is needed to do a specific job. The following list describes appropriate access levels for some
common types of HBase users. It is important not to grant more access than is required for a given
user to perform their required tasks.

• Superusers - In a production system, only the HBase user should have superuser access. In a development environment, an administrator may need superuser access in order to quickly control and manage the cluster. However, this type of administrator should usually be a Global Admin rather than a superuser.

• Global Admins - A global admin can perform tasks and access every table in HBase. In a typical
production environment, an admin should not have Read or Write permissions to data within
tables.

• A global admin with Admin permissions can perform cluster-wide operations on the cluster,
such as balancing, assigning or unassigning regions, or calling an explicit major compaction.
This is an operations role.

• A global admin with Create permissions can create or drop any table within HBase. This is more
of a DBA-type role.

In a production environment, it is likely that different users will have only one of Admin and
Create permissions.

 In the current implementation, a Global Admin with Admin permission can grant himself Read and Write permissions on a table and gain access to that table’s data. For this reason, only grant Global Admin permissions to trusted users who actually need them.

Also be aware that a Global Admin with Create permission can perform a Put operation on the ACL table, simulating a grant or revoke and circumventing the authorization check for Global Admin permissions.

Due to these issues, be cautious with granting Global Admin privileges.

• Namespace Admins - a namespace admin with Create permissions can create or drop tables
within that namespace, and take and restore snapshots. A namespace admin with Admin
permissions can perform operations such as splits or major compactions on tables within that
namespace.

• Table Admins - A table admin can perform administrative operations only on that table. A table
admin with Create permissions can create snapshots from that table or restore that table from a
snapshot. A table admin with Admin permissions can perform operations such as splits or major
compactions on that table.

• Users - Users can read or write data, or both. Users can also execute coprocessor endpoints, if
given Executable permissions.

Table 9. Real-World Example of Access Levels

Job Title            | Scope  | Permissions    | Description
---------------------|--------|----------------|------------------------------------------
Senior Administrator | Global | Access, Create | Manages the cluster and gives access to Junior Administrators.
Junior Administrator | Global | Create         | Creates tables and gives access to Table Administrators.
Table Administrator  | Table  | Access         | Maintains a table from an operations point of view.
Data Analyst         | Table  | Read           | Creates reports from HBase data.
Web Application      | Table  | Read, Write    | Puts data into HBase and uses HBase data to perform operations.

ACL Matrix
For more details on how ACLs map to specific HBase operations and tasks, see appendix acl matrix.

Implementation Details

Cell-level ACLs are implemented using tags (see Tags). In order to use cell-level ACLs, you must be
using HFile v3 and HBase 0.98 or newer.

1. Files created by HBase are owned by the operating system user running the HBase process. To
interact with HBase files, you should use the API or bulk load facility.

2. HBase does not model "roles" internally in HBase. Instead, group names can be granted
permissions. This allows external modeling of roles via group membership. Groups are created
and manipulated externally to HBase, via the Hadoop group mapping service.

Server-Side Configuration

1. As a prerequisite, perform the steps in Procedure: Basic Server-Side Configuration.

2. Install and configure the AccessController coprocessor, by setting the following properties in
hbase-site.xml. These properties take a list of classes.

 If you use the AccessController along with the VisibilityController, the AccessController must come first in the list, because with both components active, the VisibilityController will delegate access control on its system tables to the AccessController. For an example of using both together, see Security Configuration Example.

<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.security.exec.permission.checks</name>
<value>true</value>
</property>

Optionally, you can enable transport security, by setting hbase.rpc.protection to privacy. This
requires HBase 0.98.4 or newer.

3. Set up the Hadoop group mapper in the Hadoop namenode’s core-site.xml. This is a Hadoop file,
not an HBase file. Customize it to your site’s needs. Following is an example.

<property>
<name>hadoop.security.group.mapping</name>
<value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.url</name>
<value>ldap://server</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.bind.user</name>
<value>[email protected]</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.bind.password</name>
<value>****</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.base</name>
<value>dc=example-ad,dc=local</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.search.filter.user</name>
<value>(&amp;(objectClass=user)(sAMAccountName={0}))</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.search.filter.group</name>
<value>(objectClass=group)</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.search.attr.member</name>
<value>member</value>
</property>

<property>
<name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
<value>cn</value>
</property>

4. Optionally, enable the early-out evaluation strategy. Prior to HBase 0.98.0, if a user was not granted access to a column family, or at least a column qualifier, an AccessDeniedException would be thrown. HBase 0.98.0 removed this exception in order to allow cell-level exceptional grants. To restore the old behavior in HBase 0.98.0-0.98.6, set hbase.security.access.early_out to true in hbase-site.xml. In HBase 0.98.6 and newer, the default has been returned to true.

5. Distribute your configuration and restart your cluster for changes to take effect.

6. To test your configuration, log into HBase Shell as a given user and use the whoami command to
report the groups your user is part of. In this example, the user is reported as being a member
of the services group.

hbase> whoami
service (auth:KERBEROS)
groups: services

Administration

Administration tasks can be performed from HBase Shell or via an API.

API Examples

 Many of the API examples below are taken from source files hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java and hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java.

Neither the examples, nor the source files they are taken from, are part of the public HBase API, and are provided for illustration only. Refer to the official API for usage instructions.

1. User and Group Administration

Users and groups are maintained external to HBase, in your directory.

2. Granting Access To A Namespace, Table, Column Family, or Cell

There are a few different types of syntax for grant statements. The first, and most familiar, is as
follows, with the table and column family being optional:

grant 'user', 'RWXCA', 'TABLE', 'CF', 'CQ'

Groups and users are granted access in the same way, but groups are prefixed with an @ symbol.
Likewise, tables and namespaces are specified in the same way, but namespaces are prefixed with
an @ symbol.

It is also possible to grant multiple permissions against the same resource in a single statement,
as in this example. The first sub-clause maps users to ACLs and the second sub-clause specifies
the resource.

HBase Shell support for granting and revoking access at the cell level is for
testing and verification support, and should not be employed for production
 use because it won’t apply the permissions to cells that don’t exist yet. The
correct way to apply cell level permissions is to do so in the application code
when storing the values.

ACL Granularity and Evaluation Order


ACLs are evaluated from least granular to most granular, and when an ACL is reached that
grants permission, evaluation stops. This means that cell ACLs do not override ACLs at less
granularity.
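The early-out evaluation described above can be sketched as follows. This is an illustrative model only, not HBase's actual implementation (which lives in the AccessController coprocessor); the class, enum, and method names here are hypothetical.

```java
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;
import java.util.Set;

public class AclEvaluationSketch {
    // Ordered from least to most granular, mirroring the evaluation order
    // described above: global, then namespace, table, column family,
    // column qualifier, and finally cell.
    public enum Scope { GLOBAL, NAMESPACE, TABLE, FAMILY, QUALIFIER, CELL }

    // Returns true as soon as any scope grants the permission. Because
    // evaluation stops at the first grant, a cell ACL cannot override a
    // grant made at a less granular scope.
    public static boolean isAllowed(Map<Scope, Set<String>> acls, String user, String perm) {
        for (Scope scope : Scope.values()) {
            Set<String> grants = acls.getOrDefault(scope, Collections.emptySet());
            if (grants.contains(user + ":" + perm)) {
                return true; // early out: evaluation stops here
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<Scope, Set<String>> acls = new EnumMap<>(Scope.class);
        acls.put(Scope.TABLE, Set.of("alice:R"));
        System.out.println(isAllowed(acls, "alice", "R")); // table-level grant found
        System.out.println(isAllowed(acls, "bob", "R"));   // no grant at any scope
    }
}
```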

Example 13. HBase Shell

◦ Global:

hbase> grant '@admins', 'RWXCA'

◦ Namespace:

hbase> grant 'service', 'RWXCA', '@test-NS'

◦ Table:

hbase> grant 'service', 'RWXCA', 'user'

◦ Column Family:

hbase> grant '@developers', 'RW', 'user', 'i'

◦ Column Qualifier:

hbase> grant 'service', 'RW', 'user', 'i', 'foo'

◦ Cell:

Granting cell ACLs uses the following syntax:

grant <table>, \
{ '<user-or-group>' => \
'<permissions>', ... }, \
{ <scanner-specification> }

◦ <user-or-group> is the user or group name, prefixed with @ in the case of a group.

◦ <permissions> is a string containing any or all of "RWXCA", though only R and W are
meaningful at cell scope.

◦ <scanner-specification> is the scanner specification syntax and conventions used by the


'scan' shell command. For some examples of scanner specifications, issue the following
HBase Shell command.

hbase> help "scan"

If you need to enable cell ACLs, the hfile.format.version option in hbase-site.xml should be
greater than or equal to 3, and the hbase.security.access.early_out option should be set
to false. This example grants read access to the 'testuser' user and read/write access to
the 'developers' group, on cells in the 'pii' column which match the filter.

hbase> grant 'user', \
  { '@developers' => 'RW', 'testuser' => 'R' }, \
  { COLUMNS => 'pii', FILTER => "(PrefixFilter ('test'))" }

The shell will run a scanner with the given criteria, rewrite the found cells with new
ACLs, and store them back to their exact coordinates.

Example 14. API

The following example shows how to grant access at the table level.

public static void grantOnTable(final HBaseTestingUtility util, final String user,
    final TableName table, final byte[] family, final byte[] qualifier,
    final Permission.Action... actions) throws Exception {
  SecureTestUtil.updateACLs(util, new Callable<Void>() {
    @Override
    public Void call() throws Exception {
      try (Connection connection =
          ConnectionFactory.createConnection(util.getConfiguration())) {
        connection.getAdmin().grant(new UserPermission(user, Permission.newBuilder(table)
            .withFamily(family).withQualifier(qualifier).withActions(actions)
            .build()),
            false);
      }
      return null;
    }
  });
}

To grant permissions at the cell level, you can use the Mutation.setACL method:

Mutation.setACL(String user, Permission perms)
Mutation.setACL(Map<String, Permission> perms)

Specifically, this example provides read permission to a user called user1 on any cells
contained in a particular Put operation:

put.setACL("user1", new Permission(Permission.Action.READ))
3. Revoking Access Control From a Namespace, Table, Column Family, or Cell

The revoke command and API are twins of the grant command and API, and the syntax is
exactly the same. The only exception is that you cannot revoke permissions at the cell level. You
can only revoke access that has previously been granted, and a revoke statement is not the same
thing as explicit denial to a resource.

HBase Shell support for granting and revoking access is for testing and
verification support, and should not be employed for production use because it
 won’t apply the permissions to cells that don’t exist yet. The correct way to
apply cell-level permissions is to do so in the application code when storing the
values.

Example 15. Revoking Access To a Table

public static void revokeFromTable(final HBaseTestingUtility util, final String user,
    final TableName table, final byte[] family, final byte[] qualifier,
    final Permission.Action... actions) throws Exception {
  SecureTestUtil.updateACLs(util, new Callable<Void>() {
    @Override
    public Void call() throws Exception {
      try (Connection connection =
          ConnectionFactory.createConnection(util.getConfiguration())) {
        connection.getAdmin().revoke(new UserPermission(user, Permission.newBuilder(table)
            .withFamily(family).withQualifier(qualifier).withActions(actions)
            .build()));
      }
      return null;
    }
  });
}

4. Showing a User’s Effective Permissions

HBase Shell

hbase> user_permission 'user'

hbase> user_permission '.*'

hbase> user_permission JAVA_REGEX

Example 16. API

public static void verifyAllowed(User user, AccessTestAction action, int count)
    throws Exception {
  try {
    Object obj = user.runAs(action);
    if (obj != null && obj instanceof List<?>) {
      List<?> results = (List<?>) obj;
      if (results != null && results.isEmpty()) {
        fail("Empty non null results from action for user '" + user.getShortName() + "'");
      }
      assertEquals(count, results.size());
    }
  } catch (AccessDeniedException ade) {
    fail("Expected action to pass for user '" + user.getShortName() + "' but was denied");
  }
}

63.3. Visibility Labels


Visibility label control can be used to permit only users or principals associated with a given label
to read or access cells with that label. For instance, you might label a cell top-secret, and only grant
access to that label to the managers group. Visibility labels are implemented using Tags, which are a
feature of HFile v3, and allow you to store metadata on a per-cell basis. A label is a string, and
labels can be combined into expressions by using logical operators (&, |, or !), and using
parentheses for grouping. HBase does not do any kind of validation of expressions beyond basic
well-formedness. Visibility labels have no meaning on their own, and may be used to denote
sensitivity level, privilege level, or any other arbitrary semantic meaning.

If a user’s labels do not match a cell’s label or expression, the user is denied access to the cell.

In HBase 0.98.6 and newer, UTF-8 encoding is supported for visibility labels and expressions. When
creating labels using the addLabels(conf, labels) method provided by the
org.apache.hadoop.hbase.security.visibility.VisibilityClient class and passing labels in
Authorizations via Scan or Get, labels can contain UTF-8 characters, as well as the logical operators
normally used in visibility labels, with normal Java notations, without needing any escaping
method. However, when you pass a CellVisibility expression via a Mutation, you must enclose the
expression with the CellVisibility.quote() method if you use UTF-8 characters or logical operators.
See TestExpressionParser and the source file
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestScan.java.

A user adds visibility expressions to a cell during a Put operation. In the default configuration, the
user does not need to have access to a label in order to label cells with it. This behavior is controlled
by the configuration option hbase.security.visibility.mutations.checkauths. If you set this option
to true, the labels the user is modifying as part of the mutation must be associated with the user, or
the mutation will fail. Whether a user is authorized to read a labelled cell is determined during a
Get or Scan, and results which the user is not allowed to read are filtered out. This incurs the same
I/O penalty as if the results were returned, but reduces load on the network.

Visibility labels can also be specified during Delete operations. For details about visibility labels and
Deletes, see HBASE-10885.

The user’s effective label set is built in the RPC context when a request is first received by the
RegionServer. The way that users are associated with labels is pluggable. The default plugin passes
through labels specified in Authorizations added to the Get or Scan and checks those against the
calling user’s authenticated labels list. When the client passes labels for which the user is not
authenticated, the default plugin drops them. You can pass a subset of user authenticated labels via
the Get#setAuthorizations(Authorizations(String,…)) and
Scan#setAuthorizations(Authorizations(String,…)); methods.

Groups can be granted visibility labels the same way as users. Groups are prefixed with an @
symbol. When checking visibility labels of a user, the server will include the visibility labels of the
groups of which the user is a member, together with the user’s own labels. When the visibility
labels are retrieved using API VisibilityClient#getAuths or Shell command get_auths for a user, we
will return labels added specifically for that user alone, not the group level labels.

Visibility label access checking is performed by the VisibilityController coprocessor. You can use
interface VisibilityLabelService to provide a custom implementation and/or control the way that
visibility labels are stored with cells. See the source file
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithCustomVisLabService.java
for one example.

Visibility labels can be used in conjunction with ACLs.

The labels have to be explicitly defined before they can be used in visibility labels.
 See below for an example of how this can be done.

There is currently no way to determine which labels have been applied to a cell.
 See HBASE-12470 for details.

 Visibility labels are not currently applied for superusers.

Table 10. Examples of Visibility Expressions

Expression                                Interpretation

fulltime                                  Allow access to users associated with the
                                          fulltime label.

!public                                   Allow access to users not associated with the
                                          public label.

( secret | topsecret ) & !probationary    Allow access to users associated with either the
                                          secret or topsecret label and not associated with
                                          the probationary label.
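As an illustration of how these expressions combine (! binds tightest, then &, then |, with parentheses for grouping), a minimal recursive-descent evaluator might look like the following. This is not HBase's parser and the class name is hypothetical; it only models the precedence rules shown in the table above.

```java
import java.util.Set;

public class VisibilityExprSketch {
    private final String expr;
    private final Set<String> auths;
    private int pos;

    private VisibilityExprSketch(String expr, Set<String> auths) {
        this.expr = expr.replaceAll("\\s+", ""); // whitespace is not significant
        this.auths = auths;
    }

    // Evaluate a visibility expression against a user's label set.
    public static boolean evaluate(String expr, Set<String> auths) {
        VisibilityExprSketch p = new VisibilityExprSketch(expr, auths);
        boolean result = p.or();
        if (p.pos != p.expr.length()) throw new IllegalArgumentException("bad expression");
        return result;
    }

    // or := and ('|' and)*   (lowest precedence)
    private boolean or() {
        boolean v = and();
        while (peek('|')) { pos++; v |= and(); } // no short-circuit: must consume tokens
        return v;
    }

    // and := not ('&' not)*
    private boolean and() {
        boolean v = not();
        while (peek('&')) { pos++; v &= not(); }
        return v;
    }

    // not := '!' not | '(' or ')' | label   (highest precedence)
    private boolean not() {
        if (peek('!')) { pos++; return !not(); }
        if (peek('(')) {
            pos++;
            boolean v = or();
            if (!peek(')')) throw new IllegalArgumentException("missing )");
            pos++;
            return v;
        }
        int start = pos;
        while (pos < expr.length() && (Character.isLetterOrDigit(expr.charAt(pos))
                || expr.charAt(pos) == '-' || expr.charAt(pos) == '_')) {
            pos++;
        }
        if (start == pos) throw new IllegalArgumentException("expected label");
        // A bare label is true iff the user holds that label.
        return auths.contains(expr.substring(start, pos));
    }

    private boolean peek(char c) { return pos < expr.length() && expr.charAt(pos) == c; }

    public static void main(String[] args) {
        System.out.println(evaluate("( secret | topsecret ) & !probationary", Set.of("secret")));
    }
}
```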

63.3.1. Server-Side Configuration

1. As a prerequisite, perform the steps in Procedure: Basic Server-Side Configuration.

2. Install and configure the VisibilityController coprocessor by setting the following properties in
hbase-site.xml. These properties take a list of class names.

<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>

If you use the AccessController and VisibilityController coprocessors together,
the AccessController must come first in the list, because with both components
active, the VisibilityController will delegate access control on its system tables
to the AccessController.

3. Adjust Configuration

By default, users can label cells with any label, including labels they are not associated with,
which means that a user can Put data that he cannot read. For example, a user could label a cell
with the (hypothetical) 'topsecret' label even if the user is not associated with that label. If you
only want users to be able to label cells with labels they are associated with, set
hbase.security.visibility.mutations.checkauths to true. In that case, the mutation will fail if it
makes use of labels the user is not associated with.

4. Distribute your configuration and restart your cluster for changes to take effect.

63.3.2. Administration

Administration tasks can be performed using the HBase Shell or the Java API. For defining the list of
visibility labels and associating labels with users, the HBase Shell is probably simpler.

API Examples
Many of the Java API examples in this section are taken from the source file
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabels.java.
Refer to that file or the API documentation for more context.

Neither these examples, nor the source file they were taken from, are part of the
public HBase API, and are provided for illustration only. Refer to the official API
for usage instructions.

1. Define the List of Visibility Labels

HBase Shell

hbase> add_labels [ 'admin', 'service', 'developer', 'test' ]

Example 17. Java API

public static void addLabels() throws Exception {
  PrivilegedExceptionAction<VisibilityLabelsResponse> action =
      new PrivilegedExceptionAction<VisibilityLabelsResponse>() {
        public VisibilityLabelsResponse run() throws Exception {
          String[] labels = { SECRET, TOPSECRET, CONFIDENTIAL, PUBLIC, PRIVATE,
              COPYRIGHT, ACCENT, UNICODE_VIS_TAG, UC1, UC2 };
          try {
            VisibilityClient.addLabels(conf, labels);
          } catch (Throwable t) {
            throw new IOException(t);
          }
          return null;
        }
      };
  SUPERUSER.runAs(action);
}

2. Associate Labels with Users

HBase Shell

hbase> set_auths 'service', [ 'service' ]

hbase> set_auths 'testuser', [ 'test' ]

hbase> set_auths 'qa', [ 'test', 'developer' ]

hbase> set_auths '@qagroup', [ 'test' ]

Example 18. Java API

public void testSetAndGetUserAuths() throws Throwable {
  final String user = "user1";
  PrivilegedExceptionAction<Void> action = new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
      String[] auths = { SECRET, CONFIDENTIAL };
      try {
        VisibilityClient.setAuths(conf, auths, user);
      } catch (Throwable e) {
      }
      return null;
    }
  };
  ...

3. Clear Labels From Users

HBase Shell

hbase> clear_auths 'service', [ 'service' ]

hbase> clear_auths 'testuser', [ 'test' ]

hbase> clear_auths 'qa', [ 'test', 'developer' ]

hbase> clear_auths '@qagroup', [ 'test', 'developer' ]

Example 19. Java API

...
auths = new String[] { SECRET, PUBLIC, CONFIDENTIAL };
VisibilityLabelsResponse response = null;
try {
  response = VisibilityClient.clearAuths(conf, auths, user);
} catch (Throwable e) {
  fail("Should not have failed");
}
...

4. Apply a Label or Expression to a Cell

The label is only applied when data is written. The label is associated with a given version of the
cell.

HBase Shell

hbase> set_visibility 'user', 'admin|service|developer', { COLUMNS => 'i' }

hbase> set_visibility 'user', 'admin|service', { COLUMNS => 'pii' }

hbase> set_visibility 'user', 'test', { COLUMNS => [ 'i', 'pii' ], FILTER =>
"(PrefixFilter ('test'))" }

HBase Shell support for applying labels or permissions to cells is for testing
and verification support, and should not be employed for production use
 because it won’t apply the labels to cells that don’t exist yet. The correct way to
apply cell level labels is to do so in the application code when storing the
values.

Example 20. Java API

static Table createTableAndWriteDataWithLabels(TableName tableName, String... labelExps)
    throws Exception {
  Configuration conf = HBaseConfiguration.create();
  Connection connection = ConnectionFactory.createConnection(conf);
  Table table = null;
  try {
    table = TEST_UTIL.createTable(tableName, fam);
    int i = 1;
    List<Put> puts = new ArrayList<Put>();
    for (String labelExp : labelExps) {
      Put put = new Put(Bytes.toBytes("row" + i));
      put.add(fam, qual, HConstants.LATEST_TIMESTAMP, value);
      put.setCellVisibility(new CellVisibility(labelExp));
      puts.add(put);
      i++;
    }
    table.put(puts);
  } finally {
    if (table != null) {
      table.flushCommits();
    }
  }
  return table;
}
63.3.3. Reading Cells with Labels

When you issue a Scan or Get, HBase uses your default set of authorizations to filter out cells that
you do not have access to. A superuser can set the default set of authorizations for a given user by
using the set_auths HBase Shell command or the VisibilityClient.setAuths() method.

You can specify a different authorization during the Scan or Get, by passing the AUTHORIZATIONS
option in HBase Shell, or the Scan.setAuthorizations() method if you use the API. This authorization
will be combined with your default set as an additional filter. It will further filter your results,
rather than giving you additional authorization.
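The combination can be thought of as a set intersection: the per-request Authorizations can only narrow the stored set, never widen it, because labels the user is not authenticated for are dropped. The following sketch is a conceptual illustration only, with a hypothetical class name, not HBase's actual filtering code.

```java
import java.util.HashSet;
import java.util.Set;

public class EffectiveAuthsSketch {
    // The labels actually used to filter a Scan/Get: the request's
    // Authorizations restricted to what the user is authenticated for.
    // Anything the user is not authenticated for is silently dropped,
    // so a request can never add authorization.
    public static Set<String> effectiveAuths(Set<String> userAuths, Set<String> requested) {
        Set<String> effective = new HashSet<>(requested);
        effective.retainAll(userAuths);
        return effective;
    }

    public static void main(String[] args) {
        Set<String> stored = Set.of("private", "internal");
        // Requesting 'private' and 'topsecret': 'topsecret' is dropped.
        System.out.println(effectiveAuths(stored, Set.of("private", "topsecret")));
    }
}
```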

HBase Shell

hbase> get_auths 'myUser'

hbase> scan 'table1', AUTHORIZATIONS => ['private']

Example 21. Java API

...
public Void run() throws Exception {
String[] auths1 = { SECRET, CONFIDENTIAL };
GetAuthsResponse authsResponse = null;
try {
VisibilityClient.setAuths(conf, auths1, user);
try {
authsResponse = VisibilityClient.getAuths(conf, user);
} catch (Throwable e) {
fail("Should not have failed");
}
} catch (Throwable e) {
}
List<String> authsList = new ArrayList<String>();
for (ByteString authBS : authsResponse.getAuthList()) {
authsList.add(Bytes.toString(authBS.toByteArray()));
}
assertEquals(2, authsList.size());
assertTrue(authsList.contains(SECRET));
assertTrue(authsList.contains(CONFIDENTIAL));
return null;
}
...

63.3.4. Implementing Your Own Visibility Label Algorithm

Interpreting the labels authenticated for a given get/scan request is a pluggable algorithm.

You can specify a custom plugin or plugins by using the property
hbase.regionserver.scan.visibility.label.generator.class. The output for the first
ScanLabelGenerator will be the input for the next one, until the end of the list.

The default implementation, which was implemented in HBASE-12466, loads two plugins,
FeedUserAuthScanLabelGenerator and DefinedSetFilterScanLabelGenerator. See Reading Cells with
Labels.

63.3.5. Replicating Visibility Tags as Strings

As mentioned in the above sections, the interface VisibilityLabelService could be used to
implement a different way of storing the visibility expressions in the cells. Clusters with replication
enabled also must replicate the visibility expressions to the peer cluster. If
DefaultVisibilityLabelServiceImpl is used as the implementation for VisibilityLabelService, all the
visibility expression are converted to the corresponding expression based on the ordinals for each
visibility label stored in the labels table. During replication, visible cells are also replicated with the
ordinal-based expression intact. The peer cluster may not have the same labels table with the same
ordinal mapping for the visibility labels. In that case, replicating the ordinals makes no sense. It
would be better if the replication occurred with the visibility expressions transmitted as strings. To
replicate the visibility expression as strings to the peer cluster, create a RegionServerObserver
configuration which works based on the implementation of the VisibilityLabelService interface.
The configuration below enables replication of visibility expressions to peer clusters as strings. See
HBASE-11639 for more details.

<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.security.visibility.VisibilityController$VisibilityReplication</value>
</property>

63.4. Transparent Encryption of Data At Rest


HBase provides a mechanism for protecting your data at rest, in HFiles and the WAL, which reside
within HDFS or another distributed filesystem. A two-tier architecture is used for flexible and
non-intrusive key rotation. "Transparent" means that no implementation changes are needed on the
client side. When data is written, it is encrypted. When it is read, it is decrypted on demand.

63.4.1. How It Works

The administrator provisions a master key for the cluster, which is stored in a key provider
accessible to every trusted HBase process, including the HMaster, RegionServers, and clients (such
as HBase Shell) on administrative workstations. The default key provider is integrated with the Java
KeyStore API and any key management systems with support for it. Other custom key provider
implementations are possible. The key retrieval mechanism is configured in the hbase-site.xml
configuration file. The master key may be stored on the cluster servers, protected by a secure
KeyStore file, or on an external keyserver, or in a hardware security module. This master key is
resolved as needed by HBase processes through the configured key provider.

Next, encryption use can be specified in the schema, per column family, by creating or modifying a
column descriptor to include two additional attributes: the name of the encryption algorithm to use
(currently only "AES" is supported), and optionally, a data key wrapped (encrypted) with the cluster
master key. If a data key is not explicitly configured for a ColumnFamily, HBase will create a
random data key per HFile. This provides an incremental improvement in security over the
alternative. Unless you need to supply an explicit data key, such as in a case where you are
generating encrypted HFiles for bulk import with a given data key, only specify the encryption
algorithm in the ColumnFamily schema metadata and let HBase create data keys on demand. Per
Column Family keys facilitate low impact incremental key rotation and reduce the scope of any
external leak of key material. The wrapped data key is stored in the ColumnFamily schema
metadata, and in each HFile for the Column Family, encrypted with the cluster master key. After the
Column Family is configured for encryption, any new HFiles will be written encrypted. To ensure
encryption of all HFiles, trigger a major compaction after enabling this feature.

When the HFile is opened, the data key is extracted from the HFile, decrypted with the cluster
master key, and used for decryption of the remainder of the HFile. The HFile will be unreadable if
the master key is not available. If a remote user somehow acquires access to the HFile data because
of some lapse in HDFS permissions, or from inappropriately discarded media, it will not be possible
to decrypt either the data key or the file data.
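The two-tier relationship between the master key and a per-HFile data key can be illustrated with the standard JCE key-wrapping API. This sketch is not HBase's internal crypto code; it only demonstrates the wrap-on-write, unwrap-on-read cycle described above.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyWrapSketch {
    // Wrap (encrypt) a data key with the master key, analogous to the
    // wrapped data key stored in ColumnFamily schema metadata and HFiles.
    static byte[] wrap(SecretKey masterKey, SecretKey dataKey) throws Exception {
        Cipher cipher = Cipher.getInstance("AESWrap");
        cipher.init(Cipher.WRAP_MODE, masterKey);
        return cipher.wrap(dataKey);
    }

    // Unwrap the data key on read; it is then used to decrypt the HFile.
    static SecretKey unwrap(SecretKey masterKey, byte[] wrapped) throws Exception {
        Cipher cipher = Cipher.getInstance("AESWrap");
        cipher.init(Cipher.UNWRAP_MODE, masterKey);
        return (SecretKey) cipher.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey masterKey = keyGen.generateKey(); // tier 1: cluster master key
        SecretKey dataKey = keyGen.generateKey();   // tier 2: random per-HFile data key

        byte[] wrappedDataKey = wrap(masterKey, dataKey);
        SecretKey recovered = unwrap(masterKey, wrappedDataKey);
        // Without the master key, the wrapped bytes reveal neither key.
        System.out.println(Arrays.equals(recovered.getEncoded(), dataKey.getEncoded()));
    }
}
```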

It is also possible to encrypt the WAL. Even though WALs are transient, it is necessary to encrypt
the WALEdits to avoid circumventing HFile protections for encrypted column families, in the event
that the underlying filesystem is compromised. When WAL encryption is enabled, all WALs are
encrypted, regardless of whether the relevant HFiles are encrypted.

63.4.2. Server-Side Configuration

This procedure assumes you are using the default Java keystore implementation. If you are using a
custom implementation, check its documentation and adjust accordingly.

1. Create a secret key of appropriate length for AES encryption, using the keytool utility.

$ keytool -keystore /path/to/hbase/conf/hbase.jks \
  -storetype jceks -storepass **** \
  -genseckey -keyalg AES -keysize 128 \
  -alias <alias>

Replace **** with the password for the keystore file and <alias> with the username of the HBase
service account, or an arbitrary string. If you use an arbitrary string, you will need to configure
HBase to use it, and that is covered below. Specify a keysize that is appropriate. Do not specify a
separate password for the key, but press Return when prompted.
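For illustration, the sketch below shows, using only the standard Java KeyStore API, what the keytool command produces (a JCEKS store holding an AES secret key under an alias) and how a KeyStore-backed provider can resolve that key at startup. The alias and password here are placeholders, and the helper method names are hypothetical.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class JceksSketch {
    // Roughly what the keytool invocation does: store an AES secret key
    // in a JCEKS keystore under the given alias.
    static void writeStore(Path path, char[] password, SecretKey key, String alias)
            throws Exception {
        KeyStore store = KeyStore.getInstance("JCEKS");
        store.load(null, password); // initialize an empty store
        store.setEntry(alias, new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(password));
        try (FileOutputStream out = new FileOutputStream(path.toFile())) {
            store.store(out, password);
        }
    }

    // In essence what a KeyStore-backed key provider does: load the store
    // and resolve the master key by alias.
    static SecretKey readKey(Path path, char[] password, String alias) throws Exception {
        KeyStore store = KeyStore.getInstance("JCEKS");
        try (FileInputStream in = new FileInputStream(path.toFile())) {
            store.load(in, password);
        }
        return (SecretKey) store.getKey(alias, password);
    }

    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // stand-in for ****
        Path path = Files.createTempFile("hbase", ".jks");
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        writeStore(path, password, key, "hbase");
        SecretKey resolved = readKey(path, password, "hbase");
        System.out.println(java.util.Arrays.equals(resolved.getEncoded(), key.getEncoded()));
    }
}
```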

2. Set appropriate permissions on the keyfile and distribute it to all the HBase servers.

The previous command created a file called hbase.jks in the HBase conf/ directory. Set the
permissions and ownership on this file such that only the HBase service account user can read
the file, and securely distribute the key to all HBase servers.

3. Configure the HBase daemons.

Set the following properties in hbase-site.xml on the region servers, to configure HBase daemons
to use a key provider backed by the KeyStore file for retrieving the cluster master key. In the
example below, replace **** with the password.

<property>
<name>hbase.crypto.keyprovider</name>
<value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
<name>hbase.crypto.keyprovider.parameters</name>
<value>jceks:///path/to/hbase/conf/hbase.jks?password=****</value>
</property>

By default, the HBase service account name will be used to resolve the cluster master key.
However, you can store it with an arbitrary alias (in the keytool command). In that case, set the
following property to the alias you used.

<property>
<name>hbase.crypto.master.key.name</name>
<value>my-alias</value>
</property>

You also need to be sure your HFiles use HFile v3, in order to use transparent encryption. This is
the default configuration for HBase 1.0 onward. For previous versions, set the following
property in your hbase-site.xml file.

<property>
<name>hfile.format.version</name>
<value>3</value>
</property>

Optionally, you can use a different cipher provider, either a Java Cryptography Encryption (JCE)
algorithm provider or a custom HBase cipher implementation.

◦ JCE:

▪ Install a signed JCE provider (supporting AES/CTR/NoPadding mode with 128 bit keys)

▪ Add it with highest preference to the JCE site configuration file
$JAVA_HOME/lib/security/java.security.

▪ Update hbase.crypto.algorithm.aes.provider and hbase.crypto.algorithm.rng.provider
options in hbase-site.xml.
◦ Custom HBase Cipher:

▪ Implement org.apache.hadoop.hbase.io.crypto.CipherProvider.

▪ Add the implementation to the server classpath.

▪ Update hbase.crypto.cipherprovider in hbase-site.xml.

4. Configure WAL encryption.

Configure WAL encryption in every RegionServer’s hbase-site.xml, by setting the following
properties. You can include these in the HMaster’s hbase-site.xml as well, but the HMaster does
not have a WAL and will not use them.

<property>
<name>hbase.regionserver.hlog.reader.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
<name>hbase.regionserver.hlog.writer.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
<name>hbase.regionserver.wal.encryption</name>
<value>true</value>
</property>

5. Configure permissions on the hbase-site.xml file.

Because the keystore password is stored in the hbase-site.xml, you need to ensure that only the
HBase user can read the hbase-site.xml file, using file ownership and permissions.

6. Restart your cluster.

Distribute the new configuration file to all nodes and restart your cluster.

63.4.3. Administration

Administrative tasks can be performed in HBase Shell or the Java API.

Java API
Java API examples in this section are taken from the source file
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java.
Neither these examples, nor the source files they are taken from, are part of the
public HBase API, and are provided for illustration only. Refer to the official API
for usage instructions.

Enable Encryption on a Column Family


To enable encryption on a column family, you can either use HBase Shell or the Java API. After
enabling encryption, trigger a major compaction. When the major compaction completes, the
HFiles will be encrypted.

Rotate the Data Key


To rotate the data key, first change the ColumnFamily key in the column descriptor, then trigger
a major compaction. When compaction is complete, all HFiles will be re-encrypted using the
new data key. Until the compaction completes, the old HFiles will still be readable using the old
key.

Switching Between Using a Random Data Key and Specifying A Key


If you configured a column family to use a specific key and you want to return to the default
behavior of using a randomly-generated key for that column family, use the Java API to alter the
HColumnDescriptor so that no value is sent with the key ENCRYPTION_KEY.

Rotate the Master Key


To rotate the master key, first generate and distribute the new key. Then update the KeyStore to
contain a new master key, and keep the old master key in the KeyStore using a different alias.
Next, configure fallback to the old master key in the hbase-site.xml file.
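The fallback can be sketched conceptually: HFiles written before the rotation carry data keys wrapped with the old master key, so unwrapping is attempted with the current key first and the alternate (old) key second. This is an illustration with hypothetical names, not HBase's implementation.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyRotationSketch {
    // Try the current master key first, then the alternate (old) one.
    // The AESWrap integrity check makes unwrapping fail loudly for the
    // wrong key, which is what drives the fallback.
    static SecretKey unwrapWithFallback(byte[] wrapped, SecretKey current, SecretKey alternate)
            throws Exception {
        for (SecretKey masterKey : new SecretKey[] { current, alternate }) {
            try {
                Cipher cipher = Cipher.getInstance("AESWrap");
                cipher.init(Cipher.UNWRAP_MODE, masterKey);
                return (SecretKey) cipher.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
            } catch (Exception e) {
                // wrong master key for this HFile; try the next one
            }
        }
        throw new IllegalStateException("data key not wrapped with any known master key");
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey oldMaster = kg.generateKey();
        SecretKey newMaster = kg.generateKey();
        SecretKey dataKey = kg.generateKey();

        // An HFile written before rotation: data key wrapped with the old key.
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.WRAP_MODE, oldMaster);
        byte[] wrapped = c.wrap(dataKey);

        SecretKey recovered = unwrapWithFallback(wrapped, newMaster, oldMaster);
        System.out.println(java.util.Arrays.equals(recovered.getEncoded(), dataKey.getEncoded()));
    }
}
```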

63.5. Secure Bulk Load


Bulk loading in secure mode is a bit more involved than normal setup, since the client has to
transfer the ownership of the files generated from the MapReduce job to HBase. Secure bulk
loading is implemented by a coprocessor, named SecureBulkLoadEndpoint, which uses a staging
directory configured by the configuration property hbase.bulkload.staging.dir, which defaults to
/tmp/hbase-staging/.

Secure Bulk Load Algorithm


• One time only, create a staging directory which is world-traversable and owned by the user
which runs HBase (mode 711, or rwx--x--x). A listing of this directory will look similar to the
following:

$ ls -ld /tmp/hbase-staging
drwx--x--x 2 hbase hbase 68 3 Sep 14:54 /tmp/hbase-staging

• A user writes out data to a secure output directory owned by that user. For example,
/user/foo/data.

• Internally, HBase creates a secret staging directory which is globally readable/writable
(-rwxrwxrwx, 777). For example, /tmp/hbase-staging/averylongandrandomdirectoryname. The
name and location of this directory is not exposed to the user. HBase manages creation and
deletion of this directory.

• The user makes the data world-readable and world-writable, moves it into the random staging
directory, then calls the SecureBulkLoadClient#bulkLoadHFiles method.

The strength of the security lies in the length and randomness of the secret directory.
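A mode-711 (rwx--x--x) directory can also be created programmatically; this sketch uses only the standard java.nio API and is not part of any HBase setup tooling. It assumes a POSIX filesystem.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StagingDirSketch {
    public static void main(String[] args) throws Exception {
        // Mode 711 (rwx--x--x): world-traversable but not listable, so the
        // security of each per-load secret subdirectory rests on its name
        // being unguessable rather than on directory permissions.
        Set<PosixFilePermission> mode711 = PosixFilePermissions.fromString("rwx--x--x");
        Path staging = Files.createTempDirectory("hbase-staging");
        Files.setPosixFilePermissions(staging, mode711);
        System.out.println(PosixFilePermissions.toString(Files.getPosixFilePermissions(staging)));
    }
}
```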

To enable secure bulk load, add the following properties to hbase-site.xml.

<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.bulkload.staging.dir</name>
<value>/tmp/hbase-staging</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider,
org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
</property>

63.6. Secure Enable


After hbase-2.x, the default 'hbase.security.authorization' changed. Before hbase-2.x, it defaulted to
true; in later HBase versions, the default became false. So to enable HBase authorization, the
following property must be configured in hbase-site.xml. See HBASE-19483.

<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>

Chapter 64. Security Configuration Example
This configuration example includes support for HFile v3, ACLs, Visibility Labels, and transparent
encryption of data at rest and the WAL. All options have been discussed separately in the sections
above.

Example 22. Example Security Settings in hbase-site.xml

<!-- HFile v3 Support -->
<property>
<name>hfile.format.version</name>
<value>3</value>
</property>
<!-- HBase Superuser -->
<property>
<name>hbase.superuser</name>
<value>hbase,admin</value>
</property>
<!-- Coprocessors for ACLs and Visibility Tags -->
<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.visibility.VisibilityController,
org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<!-- Executable ACL for Coprocessor Endpoints -->
<property>
<name>hbase.security.exec.permission.checks</name>
<value>true</value>
</property>
<!-- Whether a user needs authorization for a visibility tag to set it on a cell
-->
<property>
<name>hbase.security.visibility.mutations.checkauths</name>
<value>false</value>
</property>
<!-- Secure RPC Transport -->
<property>
<name>hbase.rpc.protection</name>
<value>privacy</value>
</property>
<!-- Transparent Encryption -->
<property>

<name>hbase.crypto.keyprovider</name>
<value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
<name>hbase.crypto.keyprovider.parameters</name>
<value>jceks:///path/to/hbase/conf/hbase.jks?password=***</value>
</property>
<property>
<name>hbase.crypto.master.key.name</name>
<value>hbase</value>
</property>
<!-- WAL Encryption -->
<property>
<name>hbase.regionserver.hlog.reader.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
<name>hbase.regionserver.hlog.writer.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
<name>hbase.regionserver.wal.encryption</name>
<value>true</value>
</property>
<!-- For key rotation -->
<property>
<name>hbase.crypto.master.alternate.key.name</name>
<value>hbase.old</value>
</property>
<!-- Secure Bulk Load -->
<property>
<name>hbase.bulkload.staging.dir</name>
<value>/tmp/hbase-staging</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider,
org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
</property>

Example 23. Example Group Mapper in Hadoop core-site.xml

Adjust these settings to suit your environment.

<property>
<name>hadoop.security.group.mapping</name>
<value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.url</name>
<value>ldap://server</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.bind.user</name>
<value>[email protected]</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.bind.password</name>
<value>****</value> <!-- Replace with the actual password -->
</property>
<property>
<name>hadoop.security.group.mapping.ldap.base</name>
<value>dc=example-ad,dc=local</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.search.filter.user</name>
<value>(&amp;(objectClass=user)(sAMAccountName={0}))</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.search.filter.group</name>
<value>(objectClass=group)</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.search.attr.member</name>
<value>member</value>
</property>
<property>
<name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
<value>cn</value>
</property>
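At lookup time, Hadoop's LdapGroupsMapping substitutes the connecting user's short name for the {0} placeholder in the user search filter shown above. The substitution itself is simple string templating, which the following illustrative Python sketch mimics (this is a conceptual model of the behavior, not Hadoop's actual code):

```python
# Illustrative only: how the {0} placeholder in
# hadoop.security.group.mapping.ldap.search.filter.user is filled in
# with the connecting user's short name before the LDAP query runs.
user_filter_template = "(&(objectClass=user)(sAMAccountName={0}))"

def build_user_filter(template, username):
    """Substitute the username into the LDAP filter template."""
    return template.format(username)

print(build_user_filter(user_filter_template, "alice"))
# (&(objectClass=user)(sAMAccountName=alice))
```

Note that in the XML configuration the ampersand must be escaped as &amp;, as in the example above; the unescaped filter is what actually reaches the LDAP server.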

Architecture

Chapter 65. Overview
65.1. NoSQL?
HBase is a type of "NoSQL" database. "NoSQL" is a general term meaning that the database isn’t an
RDBMS which supports SQL as its primary access language, but there are many types of NoSQL
databases: BerkeleyDB is an example of a local NoSQL database, whereas HBase is very much a
distributed database. Technically speaking, HBase is really more a "Data Store" than a "Data Base"
because it lacks many of the features you find in an RDBMS, such as typed columns, secondary
indexes, triggers, and advanced query languages.

However, HBase has many features which support both linear and modular scaling. HBase
clusters expand by adding RegionServers hosted on commodity-class servers. If a cluster
expands from 10 to 20 RegionServers, for example, it doubles in terms of both storage and
processing capacity. An RDBMS can scale well, but only up to a point - specifically, the size of a
single database server - and for the best performance it requires specialized hardware and storage
devices. HBase features of note are:

• Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore. This makes
it very suitable for tasks such as high-speed counter aggregation.

• Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are
automatically split and re-distributed as your data grows.

• Automatic RegionServer failover

• Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system.

• MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase
as both source and sink.

• Java Client API: HBase supports an easy to use Java API for programmatic access.

• Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends.

• Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume
query optimization.

• Operational Management: HBase provides built-in web pages for operational insight as well as
JMX metrics.
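On the Bloom filter bullet above, the mechanics are worth a word: a Bloom filter can say with certainty that a StoreFile does not contain a given row, letting reads skip that file entirely, while a false positive merely costs one extra lookup. The toy Python sketch below illustrates that "no false negatives" property (purely conceptual; HBase's actual Bloom filters are implemented in Java inside the HFile format):

```python
import hashlib

class BloomFilter:
    """Tiny conceptual Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, key):
        # Derive k independent bit positions from the key.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("row-0001")
print(bf.might_contain("row-0001"))  # True: an added key is never missed
```

The asymmetry is the whole point: a "definitely absent" answer lets a read path avoid disk I/O on files that cannot hold the row, which is why Bloom filters pay off under high query volume.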

65.2. When Should I Use HBase?


HBase isn’t suitable for every problem.

First, make sure you have enough data. If you have hundreds of millions or billions of rows, then
HBase is a good candidate. If you only have a few thousand/million rows, then a traditional
RDBMS might be a better choice, because all of your data might wind up on a single node
(or two) while the rest of the cluster sits idle.

Second, make sure you can live without all the extra features that an RDBMS provides (e.g., typed
columns, secondary indexes, transactions, advanced query languages, etc.) An application built

against an RDBMS cannot be "ported" to HBase by simply changing a JDBC driver, for example.
Consider moving from an RDBMS to HBase as a complete redesign as opposed to a port.

Third, make sure you have enough hardware. Even HDFS doesn’t do well with anything less than 5
DataNodes (due to things such as HDFS block replication which has a default of 3), plus a
NameNode.
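The default block replication factor of 3 also multiplies raw disk requirements relative to the logical data size, which is worth working through when sizing hardware. A back-of-the-envelope calculation (the figures are illustrative, not a recommendation):

```python
# Back-of-the-envelope: raw HDFS capacity consumed by replicated data.
logical_tb = 10      # logical HBase data size, in TB (example figure)
replication = 3      # HDFS default block replication factor
raw_tb = logical_tb * replication
print(raw_tb)  # 30 TB of raw disk, before compaction and temp-space overhead
```

Real deployments should budget additional headroom on top of this for compactions, WALs, and temporary files.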

HBase can run quite well stand-alone on a laptop - but this should be considered a development
configuration only.

65.3. What Is The Difference Between HBase and Hadoop/HDFS?
HDFS is a distributed file system that is well suited for the storage of large files. Its documentation
states that it is not, however, a general purpose file system, and does not provide fast individual
record lookups in files. HBase, on the other hand, is built on top of HDFS and provides fast record
lookups (and updates) for large tables. This can sometimes be a point of conceptual confusion.
HBase internally puts your data in indexed "StoreFiles" that exist on HDFS for high-speed lookups.
See the Data Model and the rest of this chapter for more information on how HBase achieves its
goals.

Chapter 66. Catalog Tables
The catalog table hbase:meta exists as an HBase table and is filtered out of the HBase shell’s list
command, but is in fact a table just like any other.

66.1. hbase:meta
The hbase:meta table (previously called .META.) keeps a list of all regions in the system, and the
location of hbase:meta is stored in ZooKeeper.

The hbase:meta table structure is as follows:

Key
• Region key of the format ([table],[region start key],[region id])

Values
• info:regioninfo (serialized HRegionInfo instance for this region)

• info:server (server:port of the RegionServer containing this region)