15. HBase Ops Notes: NUMA

Disable NUMA? Edit /etc/grub.conf (vi /etc/grub.conf) and add numa=off to the kernel line:

title Red Hat Enterprise Linux Server (2.6.18-164.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/ numa=off console=tty0 console=ttyS1,115200
	initrd /initrd-2.6.18-164.el5.img
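The edit above can also be scripted. A minimal sketch using sed, demonstrated here on a sample copy so it is safe to run; on a real host you would point GRUB_CONF at /etc/grub.conf (and keep a backup first). The temp-file setup is purely illustrative:

```shell
# Work on a sample copy; substitute GRUB_CONF=/etc/grub.conf on a real host.
GRUB_CONF=$(mktemp)
cat > "$GRUB_CONF" <<'EOF'
title Red Hat Enterprise Linux Server (2.6.18-164.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/ console=tty0 console=ttyS1,115200
	initrd /initrd-2.6.18-164.el5.img
EOF

# Append numa=off to every kernel line that does not already carry it.
sed -i '/kernel /{/numa=off/!s/$/ numa=off/}' "$GRUB_CONF"

# Confirm the change took effect (prints the count of matching lines).
grep -c 'numa=off' "$GRUB_CONF"
```

The guard pattern (`/numa=off/!`) keeps the script idempotent: running it twice will not append the flag a second time. The change takes effect on the next reboot.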
22. DataNode

dfs.datanode.max.xcievers defaults to 256; set it to at least 4096. Too small a value can cause frequent errors like:

hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block.
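To raise the limit, set the property in hdfs-site.xml on every DataNode and restart them (the 4096 value is the floor recommended above; busy clusters may want more):

```xml
<!-- hdfs-site.xml on each DataNode; a DataNode restart is required -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```

Note that the property name really is misspelled ("xcievers") in these Hadoop versions; the misspelled form is the one the DataNode reads.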
47. Stress Testing

$ ./hbase org.apache.hadoop.hbase.PerformanceEvaluation
Usage: java org.apache.hadoop.hbase.PerformanceEvaluation \
  [--miniCluster] [--nomapred] [--rows=ROWS] <command> <nclients>

Options:
 miniCluster     Run the test on an HBaseMiniCluster
 nomapred        Run multiple clients using threads (rather than use mapreduce)
 rows            Rows each client runs. Default: One million
 flushCommits    Used to determine if the test should flush the table. Default: false
 writeToWAL      Set writeToWAL on puts. Default: True

Command:
 filterScan      Run scan test using a filter to find a specific row based on its value (make sure to use --rows=20)
 randomRead      Run random read test
 randomSeekScan  Run random seek and scan 100 test
 randomWrite     Run random write test
 scan            Run scan test (read every row)
 scanRange10     Run random seek scan with both start and stop row (max 10 rows)
 scanRange100    Run random seek scan with both start and stop row (max 100 rows)
 scanRange1000   Run random seek scan with both start and stop row (max 1000 rows)
 scanRange10000  Run random seek scan with both start and stop row (max 10000 rows)
 sequentialRead  Run sequential read test
 sequentialWrite Run sequential write test

Args:
 nclients        Integer. Required. Total number of clients (and HRegionServers) running: 1 <= value <= 500

Examples:
 To run a single evaluation client:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
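Combining the options above, a threaded multi-client write test might look like this (requires a running cluster; the client and row counts are illustrative, not recommendations):

```shell
# 4 threaded clients, 10000 rows each, driven from this machine
# instead of a MapReduce job (--nomapred)
$ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=10000 sequentialWrite 4
```

Running with --nomapred is convenient for a quick smoke test from a single box; the default MapReduce mode spreads clients across the cluster and gives a more realistic load.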