
Load Data Fast!


We all have tasks from time to time for bulk-loading external data into MySQL. What's the best way of doing this? That's the task I faced recently when I was asked to help benchmark a multi-terabyte database. We had to find the most efficient method to reload test data repeatedly without taking days to do it each time. In my presentation, I'll show you several alternative methods for bulk data loading, and describe the practical steps to use them efficiently. I'll cover SQL scripts, the mysqlimport tool, MySQL Workbench import, the CSV storage engine, and the Memcached API. I'll also give MySQL tuning tips for data loading, and show how to use multi-threaded clients.


  1. Load Data Fast! BILL KARWIN PERCONA LIVE OPEN SOURCE DATABASE CONFERENCE 2017
  2. Bill Karwin Software developer, consultant, trainer Using MySQL since 2000 Senior Database Architect at SchoolMessenger SQL Antipatterns: Avoiding the Pitfalls of Database Programming https://pragprog.com/titles/bksqla/sql-antipatterns Oracle ACE Director
  3. Load Data Fast! Common chores § Dump and restore § Import third-party data § Extract, Transform, Load (ETL) § Test data that needs to be reloaded repeatedly https://commons.wikimedia.org/wiki/File:Kitten_with_laptop_-_278017185.jpg Is it done yet?
  4. How to Speed This Up? 1. Query Solutions 2. Schema Solutions 3. Configuration Solutions 4. Parallel Execution Solutions
  5. Example Table CREATE TABLE TestTable ( id INT UNSIGNED NOT NULL PRIMARY KEY, intCol INT UNSIGNED DEFAULT NULL, stringCol VARCHAR(100) DEFAULT NULL, textCol TEXT ) ENGINE=InnoDB; Let’s load 1 million rows!
  6. Best Case Performance Running a test script to loop over 1 million rows, without inserting into a database. $ php test-bulk-insert.php --total-rows 1000000 --noop This speed is the upper bound for any subsequent test. Time: 2 seconds (00:00:02) 1000000 rows = 432435.24 rows/sec 1000000 stmt = 432435.24 stmt/sec 1000000 txns = 432435.24 txns/sec 1000000 conn = 432435.24 conn/sec
  7. Worst Case Performance INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?); Run a test script that executes one INSERT per connection: connect, insert one row, commit, disconnect. $ php test-bulk-insert.php --total-rows 10000 Time: 34 seconds (00:00:34) 10000 rows = 290.29 rows/sec 10000 stmt = 290.29 stmt/sec 10000 txns = 290.29 txns/sec 10000 conn = 290.29 conn/sec
  8. Inserting One Row: Overhead https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html [Chart: relative cost of connecting, sending query, parsing, inserting row, and closing query]
  9. Query Solutions
  10. Inserting One Row at a Time INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?); Run a test script that executes one INSERT and commit per row, reusing a single connection. $ php test-bulk-insert.php --total-rows 1000000 --txns-per-conn 1000000 Time: 527 seconds (00:08:47) 1000000 rows = 1894.67 rows/sec 1000000 stmt = 1894.67 stmt/sec 1000000 txns = 1894.67 txns/sec 1 conn = 0.00 conn/sec
  11. Inserting One Row: Overhead [Chart: relative cost of sending query, parsing, inserting row, and closing query; no per-row connection cost]
  12. Inserting Multiple Rows INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?); Q: How many rows can you insert in one statement? A: As many as fit in max_allowed_packet bytes.
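
For illustration, a minimal sketch of what a multi-row batch looks like against the example table, with a check of the packet limit first; the sample values and the 64M size are arbitrary, not from the deck:

    mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
    mysql> SET GLOBAL max_allowed_packet = 64*1024*1024;  -- applies to new connections
    mysql> INSERT INTO TestTable (id, intCol, stringCol, textCol)
        -> VALUES (1, 10, 'a', 'first row'),
        ->        (2, 20, 'b', 'second row'),
        ->        (3, 30, 'c', 'third row');
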
  13. Inserting Multiple Rows: Overhead [Chart: one send/parse/close cost amortized over many inserted rows]
  14. Inserting Multiple Rows: Results $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --txns-per-conn 10000 Time: 85 seconds (00:01:25) 1000000 rows = 11680.98 rows/sec 10000 stmt = 116.81 stmt/sec 10000 txns = 116.81 txns/sec 1 conn = 0.01 conn/sec
  15. Transactions START TRANSACTION; INSERT INTO TestTable … INSERT INTO TestTable … INSERT INTO TestTable … INSERT INTO TestTable … INSERT INTO TestTable … INSERT INTO TestTable … COMMIT; Q: How many statements can you do in one transaction? A: In theory this is constrained by undo log segments, but it's a lot.
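
A minimal sketch of the batching pattern, assuming the default autocommit=1 (without the explicit transaction, each INSERT would commit on its own):

    mysql> START TRANSACTION;
    mysql> INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (1, 1, 'a', 'x');
    mysql> INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (2, 2, 'b', 'y');
    ...
    mysql> COMMIT;  -- one log flush for the whole batch, not one per row
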
  16. Transactions: Results $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 Time: 63 seconds (00:01:03) 1000000 rows = 15744.53 rows/sec 10000 stmt = 157.45 stmt/sec 100 txns = 1.57 txns/sec 1 conn = 0.02 conn/sec
  17. Inserting with Prepared Queries START TRANSACTION; PREPARE INSERT INTO TestTable … EXECUTE … EXECUTE … EXECUTE … EXECUTE … COMMIT; Q: How many times can you execute a given prepared statement? A: There is no limit, as far as I can tell.
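
The slide's pseudocode maps onto server-side prepared statements; a hedged SQL-level equivalent (sample values invented here):

    mysql> PREPARE ins FROM
        -> 'INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?)';
    mysql> SET @id = 1, @i = 42, @s = 'abc', @t = 'some text';
    mysql> EXECUTE ins USING @id, @i, @s, @t;  -- parsed once, executed many times
    mysql> DEALLOCATE PREPARE ins;
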
  18. Prepared Queries: Overhead [Chart: parse once, then the per-row insert cost repeats for each EXECUTE]
  19. Prepared Queries: Results $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 Time: 63 seconds (00:01:03) 1000000 rows = 15744.53 rows/sec $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 --emulate-prepares Time: 95 seconds (00:01:35) 1000000 rows = 10518.97 rows/sec
  20. Load Data Infile mysql> LOAD DATA LOCAL INFILE 'TestTable.csv' INTO TABLE TestTable; https://dev.mysql.com/doc/refman/8.0/en/load-data.html Flat-file data load in a single transaction. Works with replication.
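
The statement above relies on the defaults (tab-delimited fields). For a comma-separated file, a fuller form would look roughly like this; note that LOCAL also requires the local_infile capability to be enabled on both server and client:

    mysql> LOAD DATA LOCAL INFILE 'TestTable.csv' INTO TABLE TestTable
        -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        -> LINES TERMINATED BY '\n'
        -> (id, intCol, stringCol, textCol);
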
  21. Overhead: Load Data Infile [Chart: the LOAD DATA INFILE step dwarfs the send/parse/close overhead]
  22. Load Data Infile: Results $ php test-bulk-insert.php --total-rows 1000000 --load-data Time: 25 seconds (00:00:25) 1000000 rows = 39563.53 rows/sec 1 stmt = 0.04 stmt/sec 1 txns = 0.04 txns/sec 1 conn = 0.04 conn/sec
  23. Load XML Infile: Results LOAD XML LOCAL INFILE 'TestTable.xml' INTO TABLE TestTable; https://dev.mysql.com/doc/refman/8.0/en/load-xml.html $ php test-bulk-insert.php --total-rows 1000000 --load-xml Time: 77 seconds (00:01:17) 1000000 rows = 12858.16 rows/sec 1 stmt = 0.01 stmt/sec 1 txns = 0.01 txns/sec 1 conn = 0.01 conn/sec
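
For reference, LOAD XML accepts a few row formats; this is the <field> form that mysqldump --xml produces (values invented here):

    <row>
      <field name="id">1</field>
      <field name="intCol">42</field>
      <field name="stringCol">abc</field>
      <field name="textCol">some text</field>
    </row>
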
  24. What about Load JSON Infile? Sorry, the hypothetical LOAD JSON INFILE is not supported by MySQL yet. 😭 But it has been proposed as a feature request: https://bugs.mysql.com/bug.php?id=79209 Go vote for it! Or better yet, implement it and contribute a patch!
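
Until then, one possible workaround (a suggestion, not from the deck) is to stage newline-delimited JSON through LOAD DATA and unpack it with the JSON functions added in MySQL 5.7; TestTable.json is a hypothetical file with one JSON object per line:

    mysql> CREATE TABLE JsonStage (doc JSON);
    mysql> LOAD DATA LOCAL INFILE 'TestTable.json' INTO TABLE JsonStage
        -> LINES TERMINATED BY '\n' (doc);  -- assumes no raw tabs inside the JSON
    mysql> INSERT INTO TestTable (id, intCol, stringCol, textCol)
        -> SELECT doc->>'$.id', doc->>'$.intCol', doc->>'$.stringCol', doc->>'$.textCol'
        -> FROM JsonStage;
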
  25. Schema Solutions
  26. Indexes How much overhead for one index? Two indexes? 1. mysql> ALTER TABLE TestTable ADD INDEX (intCol); 2. mysql> ALTER TABLE TestTable ADD INDEX (stringCol);
  27. Indexes: Overhead [Chart: cost of sending query, parsing, inserting row, inserting index entries, and closing query]
  28. Indexes: Results $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 Time: 63 seconds (00:01:03) 1000000 rows = 15744.53 rows/sec $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 --indexes 1 Time: 71 seconds (00:01:11) 1000000 rows = 13993.81 rows/sec $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 --indexes 2 Time: 95 seconds (00:01:35) 1000000 rows = 10473.64 rows/sec
  29. Index Deferral What if we insert with no indexes, and build indexes at the end? § This is what Percona’s mysqldump --innodb-optimize-keys does. § Load time is the same as with no indexes: Time: 63 seconds (00:01:03) 1000000 rows = 15744.53 rows/sec Then create indexes after the data load: mysql> ALTER TABLE TestTable ADD INDEX (intCol); Query OK, 0 rows affected (7.02 sec) mysql> ALTER TABLE TestTable ADD INDEX (stringCol); Query OK, 0 rows affected (8.54 sec) This reduces the effective rate of rows/second: Time: 63 + 7 + 8.5 seconds (00:01:19) 1000000 rows = 12738.85 rows/sec effective data load rate
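
A related micro-optimization worth noting (my addition, not from the slide): both secondary indexes can be requested in a single ALTER, letting InnoDB build them in one pass over the table instead of two:

    mysql> ALTER TABLE TestTable ADD INDEX (intCol), ADD INDEX (stringCol);
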
  30. Triggers How much overhead for a trigger? mysql> CREATE TRIGGER TestTrigger BEFORE INSERT ON TestTable FOR EACH ROW SET NEW.stringCol = UPPER(NEW.stringCol); This is a very simple trigger. If you have more complex code, like subordinate INSERT statements, the cost will be higher.
  31. Triggers: Results $ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 --trigger Time: 69 seconds (00:01:09) 1000000 rows = 14296.91 rows/sec 10000 stmt = 142.97 stmt/sec 100 txns = 1.43 txns/sec 1 conn = 0.01 conn/sec
  32. CSV Storage Engine mysql> CREATE TABLE TestTable ( id INT UNSIGNED NOT NULL, intCol INT UNSIGNED NOT NULL, stringCol VARCHAR(100) NOT NULL, textCol TEXT NOT NULL ) ENGINE=CSV; # ls -l /usr/local/mysql/data/test total 24 -rw-r----- 1 _mysql _mysql 5824 Apr 22 20:10 TestTable_429.SDI -rw-r----- 1 _mysql _mysql 35 Apr 22 20:10 testtable.CSM -rw-r----- 1 _mysql _mysql 0 Apr 22 20:10 testtable.CSV
  33. CSV Storage Engine Move the CSV file into the datadir: # time cp data.csv /usr/local/mysql/data/test/testtable.CSV real 0m8.359s # ls -l /usr/local/mysql/data/test/ total 6350872 -rw-r----- 1 _mysql _mysql 5824 Apr 22 20:18 TestTable_431.SDI -rw-r----- 1 _mysql _mysql 35 Apr 22 20:18 testtable.CSM -rw-r----- 1 _mysql _mysql 3251630334 Apr 22 20:19 testtable.CSV Time: 8.359 seconds (00:00:08) 1000000 rows = 119631.53 rows/sec
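
One caveat to add here (an assumption, not stated on the slide): if the server already had the table open, it may hold a handle to the old file, so flush before querying:

    mysql> FLUSH TABLES TestTable;  -- re-open the swapped-in .CSV file
    mysql> SELECT COUNT(*) FROM TestTable;  -- sanity-check the row count
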
  34. CSV into InnoDB Storage Engine Use CSV storage engine, then alter to InnoDB table (and add a primary key): ALTER TABLE TestTable ADD PRIMARY KEY (id), ENGINE=InnoDB; Query OK, 1000000 rows affected (1 min 37.73 sec) Time: 8.359 + 97.73 seconds (00:01:46) 1000000 rows = 9426.05 rows/sec effective data load rate
  35. Partitioning
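
The body of this slide did not survive the transcript. As a minimal sketch of the idea, a partitioned version of the example table lets parallel loaders target disjoint slices of the key space; HASH(id) with 4 partitions is an arbitrary choice here:

    mysql> CREATE TABLE TestTable (
        ->   id INT UNSIGNED NOT NULL PRIMARY KEY,
        ->   intCol INT UNSIGNED DEFAULT NULL,
        ->   stringCol VARCHAR(100) DEFAULT NULL,
        ->   textCol TEXT
        -> ) ENGINE=InnoDB
        -> PARTITION BY HASH(id) PARTITIONS 4;
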
  36. Transportable Tablespaces
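
Only the title survived here as well. For reference, the standard InnoDB transportable-tablespace flow (available since MySQL 5.6) looks roughly like this; the source/destination roles are assumptions:

    -- On the source server:
    mysql> FLUSH TABLES TestTable FOR EXPORT;
    -- copy TestTable.ibd and TestTable.cfg out of the source datadir, then:
    mysql> UNLOCK TABLES;
    -- On the destination server, with the same (empty) table definition:
    mysql> ALTER TABLE TestTable DISCARD TABLESPACE;
    -- copy the .ibd/.cfg files into the destination datadir, then:
    mysql> ALTER TABLE TestTable IMPORT TABLESPACE;
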
  37. Configuration Solutions
  38. Increase Buffering, Decrease Durability innodb_buffer_pool_size = 4G (default 128M) innodb_log_buffer_size = 1G (default 16M) innodb_log_file_size = 4G (default 48M) innodb_flush_log_at_trx_commit = 0 (default 1) # log-bin = mysql-bin Time: 56 seconds (00:00:56) 1000000 rows = 17697.29 rows/sec
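
Some of these settings are also dynamic, so they can be tried without a restart; a hedged sketch (innodb_log_file_size always requires a restart, and innodb_log_buffer_size became dynamic only in MySQL 8.0):

    mysql> SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;  -- dynamic since MySQL 5.7
    mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 0;  -- trades durability for speed
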
  39. Increase Buffering, Decrease Durability Same, but at least flush the log buffer: innodb_flush_log_at_trx_commit = 2 (default 1) Time: 60 seconds (00:01:00) 1000000 rows = 16564.26 rows/sec
  40. Tuning + Load Data $ php test-bulk-insert.php --total-rows 1000000 --load-data Time: 22 seconds (00:00:22) 1000000 rows = 43873.50 rows/sec
  41. Config for More Buffering innodb_buffer_pool_size=4G (default 128M) Time: 82 seconds (00:01:22) 1000000 rows = 12161.69 rows/sec innodb_change_buffering=none (default all) innodb_log_buffer_size=1G (default 16M) Time: 81 seconds (00:01:21) 1000000 rows = 12291.17 rows/sec binlog_cache_size=256K (default 32K)
  42. Config for Greater Throughput innodb_log_file_size=4G (default 48M) Time: 80 seconds (00:01:20) 1000000 rows = 12488.30 rows/sec innodb_io_capacity=2000 (default 200) Time: 80 seconds (00:01:20) 1000000 rows = 12432.38 rows/sec innodb_lru_scan_depth=8192 (default 1024) Time: 81 seconds (00:01:21) 1000000 rows = 12269.61 rows/sec
  43. Config for Lower Durability innodb_doublewrite=OFF (default ON) Time: 85 seconds (00:01:25) 1000000 rows = 11740.06 rows/sec innodb_flush_log_at_trx_commit=0 (default 1) Time: 84 seconds (00:01:24) 1000000 rows = 11768.51 rows/sec # log_bin Time: 82 seconds (00:01:22) 1000000 rows = 12087.97 rows/sec sync_binlog=0 (default 1) Time: 83 seconds (00:01:23) 1000000 rows = 11906.84 rows/sec
  44. Config for Fewer Checks innodb_checksum_algorithm=none (default crc32) Time: 84 seconds (00:01:24) 1000000 rows = 11807.99 rows/sec innodb_log_checksums=OFF (default ON) Time: 84 seconds (00:01:24) 1000000 rows = 11893.64 rows/sec foreign_key_checks=0 (default 1) unique_checks=0 (default 1)
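
The last two are session-scoped, the same idea as the SET statements a mysqldump header emits; a minimal sketch of the bracketing pattern:

    mysql> SET SESSION unique_checks = 0;
    mysql> SET SESSION foreign_key_checks = 0;
    -- ... run the bulk load in this session ...
    mysql> SET SESSION foreign_key_checks = 1;
    mysql> SET SESSION unique_checks = 1;
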
  45. Parallel Execution Solutions
  46. Parallel Import Like LOAD DATA INFILE, but with multi-threaded import: $ mysqlimport --local --use-threads=4 dbname table1 table2 table3 table4 Runs a fixed number of threads, importing one table per thread. If a thread finishes and more tables remain, the first available thread takes the next one. https://dev.mysql.com/doc/refman/8.0/en/mysqlimport.html
  47. Parallel Import Connecting to localhost Connecting to localhost Connecting to localhost Connecting to localhost Selecting database test Selecting database test Selecting database test Selecting database test Loading data from LOCAL file: TestTable2.csv into TestTable2 Loading data from LOCAL file: TestTable3.csv into TestTable3 Loading data from LOCAL file: TestTable1.csv into TestTable1 Loading data from LOCAL file: TestTable4.csv into TestTable4 test.TestTable3: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0 Disconnecting from localhost test.TestTable1: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0 Disconnecting from localhost test.TestTable2: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0 Disconnecting from localhost test.TestTable4: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0 Disconnecting from localhost
  48. mysqlimport: Results $ php test-bulk-insert.php --total-rows 1000000 --load-data --use-threads 4 Time: 31 seconds (00:00:31) 1000000 rows = 32205.28 rows/sec 4 stmt = 0.13 stmt/sec 4 txns = 0.13 txns/sec 4 conn = 0.13 conn/sec
  49. Conclusions
  50. Rows per Second [Bar chart comparing rows/sec across the methods; the slowest bar is labeled “why are you still doing this?”]
  51. Want to Try The Tests Yourself? The test-bulk-insert.php script is available here: https://github.com/billkarwin/bk-tools
  52. One Last Thing… What Was Our Solution? We cheated: § Load database once. § Take a filesystem snapshot. § Run tests. § Restore from snapshot. § Re-run tests. § etc. This is not a good solution for everyone. It worked for one specific use case.
  53. License and Copyright Copyright 2017 Bill Karwin http://www.slideshare.net/billkarwin Released under a Creative Commons 3.0 License: http://creativecommons.org/licenses/by-nc-nd/3.0/ You are free to share—to copy, distribute, and transmit this work, under the following conditions: Attribution. You must attribute this work to Bill Karwin. Noncommercial. You may not use this work for commercial purposes. No Derivative Works. You may not alter, transform, or build upon this work.
