[C12] Genki Hadoop! Let's Analyze Oracle with Hadoop, by Daisuke Hirama
- 5. Even DB Engineers Want to Use Hadoop!
Super Hadoop 2013!
• 4-node Hadoop cluster
• Cloudera CDH4 4.4.0
• Cloudera Manager Standard 4.7.2
• 1 Master node (NameNode, JobTracker)
• 4 Slave nodes (DataNode, TaskTracker), one co-located with the Master
• Disks: 12 SSDs ("Akihabara model")
• InfiniBand for intra-cluster communication
Copyright © 2013 Insight Technology, Inc. All Rights Reserved.
- 6. Let's Analyze the DB Server Logs!
Super RAC 2013!
• Oracle Database 12c
• 4-node RAC configuration
• 3 storage nodes (home-built PCs)
• Disks: 18 SSDs ("Akihabara model")
• InfiniBand for inter-node communication
Workload run on the Super RAC:
Nightly batch:
Run TPC-H from 1:00 AM (about 10 minutes)
Daytime OLTP:
Run TPC-C from 8:00 AM (1 hour)
- 7. Part 1: Let's Analyze the Performance Logs
Collect performance logs with dstat
"Dstat 0.7.0 CSV output"
"Author:","Dag Wieers <dag@wieers.com>",,,,"URL:","http://dag.wieers.com/home-made/dstat/"
"Host:","iq-4node-db3",,,,"User:","root"
"Cmdline:","dstat -C 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 --output /data01/logs/iq-4node-db3/dstat_cpu_
Delete the header block and the first data row (dstat's first sample reports averages since boot)
"cpu0 usage",,,,,,"cpu1 usage",,,,,,"cpu2 usage",,,,,,"cpu3 usage",,,,,,"cpu4 usage",,,,,,"cpu5 usage",,,,,,"cpu6 usage",,,,,,"c
"usr","sys","idl","wai","hiq","siq","usr","sys","idl","wai","hiq","siq","usr","sys","idl","wai","hiq","siq","usr","sys","idl","w
1.086,0.600,98.202,0.106,0.0,0.005,1.825,0.491,97.593,0.086,0.0,0.005,0.677,0.225,99.070,0.017,0.0,0.011,0.417,0.140,99.427,0.01
0.0,0.990,99.010,0.0,0.0,0.0,0.0,1.0,99.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,
1.0,0.0,99.0,0.0,0.0,0.0,2.020,0.0,97.980,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,
1.0,1.0,98.0,0.0,0.0,0.0,3.0,0.0,97.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,
1.010,0.0,98.990,0.0,0.0,0.0,2.0,1.0,97.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.990,0.0,99.010,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0
10.891,0.990,87.129,0.990,0.0,0.0,16.667,2.941,79.412,0.980,0.0,0.0,3.0,0.0,97.0,0.0,0.0,0.0,6.931,0.990,92.079,0.0,0.0,0.0,1.0,
Prepend the server name and the date/time to each line
tail -86400 $fn | cat -n | sed 's/\s\+/,/g' | sed "s/^/${SVRNAME},${YESTERDAY}/"
After processing:
iq-4node-db3,20131030,1,0.0,0.990,99.010,0.0,0.0,0.0,0.0,1.0,99.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.
iq-4node-db3,20131030,2,1.0,0.0,99.0,0.0,0.0,0.0,2.020,0.0,97.980,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.
iq-4node-db3,20131030,3,1.0,1.0,98.0,0.0,0.0,0.0,3.0,0.0,97.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.
iq-4node-db3,20131030,4,1.010,0.0,98.990,0.0,0.0,0.0,2.0,1.0,97.0,0.0,0.0,0.0,0.0,0.0,100.0,0.0,0.0,0.0,0.990,0.0,99.010,0.0,0.0
iq-4node-db3,20131030,5,10.891,0.990,87.129,0.990,0.0,0.0,16.667,2.941,79.412,0.980,0.0,0.0,3.0,0.0,97.0,0.0,0.0,0.0,6.931,0.990
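The numbering-and-prefixing step can be tried end to end on a synthetic sample. The file path, contents, and variable values below are stand-ins for the slide's environment, and `tail -n 86400` replaces the obsolescent `tail -86400` form:

```shell
#!/bin/sh
# Sketch of the slide's preprocessing pipeline on a fabricated 3-row CSV.
# SVRNAME, YESTERDAY, and fn stand in for the slide's variables.
SVRNAME=iq-4node-db3
YESTERDAY=20131030
fn=/tmp/dstat_sample.csv

# Fabricated per-second samples (usr,sys,idl for one CPU).
printf '1.0,0.0,99.0\n2.0,1.0,97.0\n3.0,0.0,97.0\n' > "$fn"

# tail keeps at most one day of 1-second samples; cat -n numbers the rows;
# the first sed collapses cat's padding and tab into single commas; the
# second prepends the server name and date, yielding lines like
# "iq-4node-db3,20131030,1,1.0,0.0,99.0".
tail -n 86400 "$fn" | cat -n \
  | sed 's/\s\+/,/g' \
  | sed "s/^/${SVRNAME},${YESTERDAY}/"
```

The `\s` and `\+` escapes assume GNU sed, which is what the CDH nodes in the deck would be running.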
Load into Hadoop:
hadoop fs -put dstat_cpu_iq-4node-db3_20131030.csv dstat_cpu
- 9. Define the CSV as a Table
create external table dstat_cpu (
  servername    string,
  create_ymd    string,
  create_second int,
  cpu0_user     DOUBLE,
  cpu0_sys      DOUBLE,
  cpu0_idle     DOUBLE,
  page_in       DOUBLE,
  page_out      DOUBLE,
  system_int    DOUBLE,
  system_csw    DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE LOCATION '/user/root/dstat_cpu';
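An external table simply maps columns onto whatever files sit in the directory, so a malformed row becomes NULLs rather than an error. A hypothetical pre-flight check (not from the slides; the sample file is fabricated) is to confirm every row has the same field count before Hive reads it:

```shell
#!/bin/sh
# Hypothetical sanity check: print the set of distinct comma-separated
# field counts in a sample CSV. A single value means the rows are
# consistent with the table definition's column list.
cat > /tmp/dstat_check.csv <<'EOF'
iq-4node-db3,20131030,1,1.0,0.0,99.0
iq-4node-db3,20131030,2,2.0,1.0,97.0
EOF

awk -F',' '{ print NF }' /tmp/dstat_check.csv | sort -u
```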
- 14. Impala Is Blazingly Fast! But…
TPC-H Q3, SF=10 (GB)
[Bar chart, elapsed time in seconds (0 to 120): Impala finishes in less than 1/5 of Hive's time.]
- 17. Part 2: Can We Spot Suspicious SQL?
select * from CUSTOMER
where C_LAST = 'Hirama';
Is this SQL ever issued by normal business operations?
Candidate sources:
Oracle audit trail
DB audit tools
-- Run in the CDB
alter system set audit_trail=xml, extended sid='*'
scope=spfile;
-- Run in the PDB
AUDIT SELECT TABLE BY ACCESS;
AUDIT INSERT TABLE BY ACCESS;
AUDIT UPDATE TABLE BY ACCESS;
AUDIT DELETE TABLE BY ACCESS;
Log volume: 64 GB per day…
- 18. Let's Extract the SQL from the Audit Log
<AuditRecord><Audit_Type>1</Audit_Type><Session_Id>140037</Session_Id>
<DBID>409456161</DBID>
<Sql_Text>select
l_returnflag,
l_linestatus,
sum(l_quantity) as sum_qty,
order by
l_returnflag,
l_linestatus</Sql_Text>
</AuditRecord>
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty,
sum(l_extendedprice) as sum_base_price, sum(l_extendedprice * (1
- l_discount)) as sum_disc_price, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_quantity) as avg_qty,
avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc,
count(*) as count_order from lineitem where l_shipdate <= date
'1998-12-01' - interval '91' day (3) group
by l_returnflag, l_linestatus order by l_returnflag, l_linestatus
Extract only the SQL and remove the newlines so each statement becomes one line
• Extract the XML tag contents with Hadoop Streaming
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.4.0.jar \
  -D mapred.reduce.tasks=0 \
  -inputreader "StreamXmlRecordReader,begin=<Sql_Text>,end=</Sql_Text>" \
  -input XmlSql \
  -output Sql \
  -mapper cutlftag.sh \
  -file cutlftag.sh
• With Hadoop Streaming, a plain shell script can serve as the MapReduce mapper
#!/bin/sh
tr -d "\n" | sed -e "s/\t/ /g" | sed -e "s/<Sql_Text>//g" | sed -e "s/<\/Sql_Text>/\n/g"
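The one-liner can be exercised locally on a fabricated record. Note that `tr -d "\n"` joins lines with no separator, so a token ending one line runs straight into the token starting the next unless a tab or space intervenes; the deck's extracted SQL shows exactly this kind of gluing. The input below is made up for illustration:

```shell
#!/bin/sh
# Run the cutlftag.sh one-liner against a single fabricated <Sql_Text>
# record: join lines, turn tabs into spaces, strip the opening tag, and
# turn each closing tag back into a newline so every SQL is one line.
printf '<Sql_Text>select\n\tl_returnflag,\n\tl_linestatus\nfrom lineitem</Sql_Text>' \
  | tr -d "\n" \
  | sed -e "s/\t/ /g" \
  | sed -e "s/<Sql_Text>//g" \
  | sed -e "s/<\/Sql_Text>/\n/g"
```

This prints `select l_returnflag, l_linestatusfrom lineitem`: "l_linestatus" and "from" are glued together because only a newline separated them in the input.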
- 21. Classifying and Converting the Data
• Manually classify the training data
trainSql
├─ tpch
├─ tpcc
└─ suspicious
• Convert to a sequence file
$ mahout seqdirectory -i trainSql -o trainSeq
The contents look like this:
Key: /tpcc/part-00000: Value: SELECT /* N-07 */ s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05,
s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_data FROM stock WHERE s_i_id = :1 AND s_w_id = :2 FOR
UPDATE
UPDATE /* N-08 */ stock SET s_quantity = :1 , s_ytd = s_ytd + :2 , s_order_cnt = s_order_cnt + 1, s_remote_cnt =
s_remote_cnt + :3 WHERE s_i_id = :4 AND s_w_id = :5
INSERT /* N-09 */ INTO order_line (ol_o_id, ol_d_id, ol_w_id, ol_number, ol_i_id, ol_supply_w_id, ol_delivery_d,
ol_quantity, ol_amount, ol_dist_info) VALUES (:1 , :2 , :3 , :4 , :5 , :6 , NULL, :7 , :8 , :9 )
• Convert to vector data
$ mahout seq2sparse -i trainSeq -o trainSparse \
  -a org.apache.lucene.analysis.WhitespaceAnalyzer
The contents look like this:
Key: /tpcc/part-00001: Value:
{543:26.124736785888672,542:36.76076126098633,541:51.987571716308594,539:116.10087585449219,538:82.09571075439453
,529:82.09571075439453,528:13.946792602539062,527:13.946792602539062,524:25.92806053161621,523:37.858341217041016
,522:25.92806053161621,521:11.3875093460083,520:11.3875093460083,519:11.3875093460083,518:25.92806053161621,517:2
6.76988983154297,516:65.2334213256836,514:37.858341217041016,513:37.858341217041016,512:53.53977966308594,501:53.
889198303222656,500:36.94595718383789,499:26.124736785888672,498:52.05316925048828,497:19.723743438720703,496:19.
- 22. Building and Testing the Model
• Split into training data and test data
$ mahout split -i trainSparse/tfidf-vectors --trainingOutput trainData \
  --testOutput trainTestData --randomSelectionPct 50 \
  --overwrite --sequenceFiles --method sequential
• Train the classifier and build the model
$ mahout trainnb -i trainData -o trainModel -li trainIndex -ow -c -el
• Test the model
$ mahout testnb -i trainTestData -o trainTestResult \
  -m trainModel -l trainIndex -ow -c
Test data
• TPC-H: 3 statements, e.g.
select s_suppkey, s_name, s_address, s_phone, total_revenuefrom supplier, revenue0where s_suppkey = supplier_no and total_revenue = ( select max(total_revenue) from revenue0 )order by s_suppkey
• TPC-C: 4 statements, e.g.
SELECT /* N-07 */ s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_data FROM stock WHERE s_i_id = :1 AND s_w_id = :2 FOR UPDATE
• Suspicious SQL: 1 statement
SELECT C_ID FROM TPCC.CUSTOMER WHERE C_ID=:B3 AND C_D_ID=:B2 AND C_W_ID=:B1