1. © 2019, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Noritaka Sekiyama
Senior Cloud Support Engineer, Amazon Web Services Japan
2019.03.14
Amazon S3 Best Practices and
Tuning for Hadoop/Spark in the Cloud
2.
Noritaka Sekiyama
Senior Cloud Support Engineer
- Engineer in AWS Support
- Speciality: Big Data
(EMR, Glue, Athena, …)
- SME of AWS Glue
- Apache Spark lover ;)
Who I am...
@moomindani
3.
Question
• Are you already using S3 on Hadoop/Spark?
• Will you start using Hadoop/Spark on S3 in the future?
• Are you just interested in using cloud storage in Hadoop/Spark?
About today’s session
4.
Relationship between Hadoop/Spark and S3
Difference between HDFS and S3, and use-case
Detailed behavior of S3 from the viewpoint of Hadoop/Spark
Well-known pitfalls and tunings
Service updates on AWS/S3 related to Hadoop/Spark
Recent activities in Hadoop/Spark community related to S3
Agenda
5.
Relationship between
Hadoop/Spark and S3
6.
Hadoop/Spark processes large data sets and writes the output to a destination
Input and output data can be located on various file systems such as HDFS
Hadoop/Spark accesses these file systems via the Hadoop FileSystem API
Data operation on Hadoop/Spark
(Diagram: an application calls the FileSystem API, which dispatches to a concrete FileSystem implementation)
7.
Hadoop FileSystem API
• Interface for operating on Hadoop-compatible file systems
⎼ open: Open an input stream
⎼ create: Create files
⎼ append: Append to files
⎼ getFileBlockLocations: Get block locations
⎼ rename: Rename files
⎼ mkdir: Create directories
⎼ listFiles: List files
⎼ delete: Delete files
• Various file systems such as HDFS can be used by plugging in different
implementations of the FileSystem API
⎼ LocalFileSystem, S3AFileSystem, EmrFileSystem
Hadoop/Spark and file system
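The scheme-based dispatch described above can be sketched as a minimal registry. This is an illustrative Python model, not Hadoop's actual code; the mapping and function names are hypothetical stand-ins for Hadoop's `fs.<scheme>.impl` lookup:

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for Hadoop's FileSystem implementations,
# which Hadoop selects by URI scheme (fs.<scheme>.impl).
FILESYSTEM_IMPLS = {
    "hdfs": "DistributedFileSystem",
    "file": "LocalFileSystem",
    "s3a": "S3AFileSystem",
    "s3": "EmrFileSystem",  # on EMR; OSS Hadoop maps s3:// differently
}

def get_filesystem(uri: str) -> str:
    """Return the FileSystem implementation name for a URI's scheme."""
    scheme = urlparse(uri).scheme or "file"
    try:
        return FILESYSTEM_IMPLS[scheme]
    except KeyError:
        raise ValueError(f"No FileSystem registered for scheme '{scheme}'")
```

For example, `get_filesystem("s3a://bucket/path")` resolves to `S3AFileSystem`, while `hdfs://` paths resolve to HDFS.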
8.
HDFS is
• A distributed file system that enables high-throughput data access
• Optimized for large files (100 MB+, GB, TB)
⎼ Not good for lots of small files
• A NameNode manages metadata, and DataNodes manage and store data blocks
• In Hadoop 3.x, many features (e.g. Erasure Coding, Router-based
federation, Tiered storage, Ozone) are under active development
How to access
• Hadoop FileSystem API
• $ hadoop fs ...
• HDFS Web UI
HDFS: Hadoop Distributed File System
9.
Amazon S3 is
• An object storage service that achieves high scalability, availability,
durability, security, and performance
• Pricing is mainly based on data size and requests
• Maximum size of a single object: 5 TB
• Objects have unique keys under a bucket
⎼ There are no directories in S3, although the S3 console emulates them
⎼ S3 is not a file system
How to access
• REST API (GET, SELECT, PUT, POST, COPY, LIST, DELETE, CANCEL, ...)
• AWS CLI
• AWS SDK
• S3 Console
Amazon S3
10.
Enables Hadoop/Spark to handle S3 as if it were HDFS
History of S3 FileSystem API implementations
• S3: S3FileSystem
• S3N: NativeS3FileSystem
• S3A: S3AFileSystem
• EMRFS: EmrFileSystem
S3’s implementations of the Hadoop FileSystem API
(Diagram: an application on the cluster accesses both HDFS and S3)
11.
HADOOP-574: want FileSystem implementation for Amazon S3
Developed in 2006 to use S3 as a file system
• Object data on S3 = block data (≠ file data)
• Blocks are stored on S3 directly
• Data can be read/written only through S3FileSystem
URL prefix: s3://
Deprecated in 2016
https://issues.apache.org/jira/browse/HADOOP-574
S3: S3FileSystem
12.
HADOOP-931: Make writes to S3FileSystem world visible only on
completion
Developed in 2008 to solve issues in S3FileSystem
• Object data on S3 = file data (≠ block data)
• Empty directories are represented by an empty object “xyz_$folder$”
• Limited to files that do not exceed 5 GB
Uses JetS3t to access S3 (not the AWS SDK)
URL prefix: s3n://
https://issues.apache.org/jira/browse/HADOOP-931
S3N: NativeS3FileSystem
13.
HADOOP-10400: Incorporate new S3A FileSystem implementation
Developed in 2014 to solve issues in NativeS3FileSystem
• Supports parallel copy and rename
• Compatible with the S3 console's handling of empty directories
(“xyz_$folder$” → “xyz/”)
• Supports IAM role authentication
• Supports 5 GB+ files and multipart uploads
• Supports S3 server-side encryption
• Improved error recovery
Uses AWS SDK for Java to access S3
URL prefix: s3a://
Amazon EMR does not support S3A officially
https://issues.apache.org/jira/browse/HADOOP-10400
S3A: S3AFileSystem
14.
FileSystem implementation for Amazon EMR (can only be used on EMR)
Developed by Amazon (optimized for the S3 specification)
• Supports IAM role authentication
• Supports 5 GB+ files and multipart uploads
• Supports both S3 server-side and client-side encryption
• Supports the EMRFS S3-optimized committer
• Supports pushdown with S3 SELECT
Uses AWS SDK for Java to access S3
URL prefix: s3:// (or s3n://)
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-fs.html
EMRFS: EmrFileSystem
15.
S3A
• On-premise
• Other cloud
• Hadoop/Spark on EC2
EMRFS
• Amazon EMR
How to choose S3A or EMRFS
(Diagram: an EMR cluster accesses S3 via EMRFS, a self-managed cluster accesses S3 via S3A; both clusters also run HDFS)
16.
Choices on AWS
• EMR: Covers most use-cases
• Hadoop/Spark on EC2: Good for specific use-cases
⎼ Multi-master (coming soon in EMR)
https://www.slideshare.net/AmazonWebServices/a-deep-dive-into-whats-new-with-amazon-emr-ant340r1-aws-reinvent-2018/64
⎼ Needs a combination of applications/versions not supported by EMR
⎼ Needs a specific distribution of Hadoop/Spark
• Glue: Can be used as serverless Spark
Hadoop/Spark and AWS
17.
Difference between HDFS
and S3, and use-case
18.
Both are accessible via the Hadoop FileSystem API
The file system can be switched based on the URL prefix (“hdfs://”, “s3://”, “s3a://”)
Common features in both HDFS and S3
19.
Extremely high I/O performance
Frequent data access
Temporary data
High consistency requirements
• When the S3 consistency model cannot be accepted, even with EMRFS consistent view or S3Guard
A fixed cost for both storage and I/O is preferred
Use-cases where data locality works well
(network bandwidth between nodes < 1G)
The physical location of data needs to be controlled
Workloads and data where HDFS is better
20.
Extremely high durability and availability
• Durability: 99.999999999%
• Availability: 99.99%
Cold data is stored for long-term use
https://aws.amazon.com/s3/storage-classes/
A lower cost per data size is preferred
• An external blog post reported that S3 costs less than 1/5 of HDFS
Data size is huge and incrementally increasing
Workloads and data where S3 is better (1/2)
21.
Storage should be separated from the computing cluster
• Data on S3 remains after clusters are terminated
Multiple clusters and applications share the same file system
• Multiple Hadoop/Spark clusters
• EMR, Glue, Athena, Redshift Spectrum, Hadoop/Spark on EC2, etc.
Centralized security is preferred (including components other than Hadoop)
• IAM, S3 bucket policy, VPC Endpoint, Glue Data Catalog, etc.
• Will be improved by AWS Lake Formation
Note: S3 cannot be used as the default file system (fs.defaultFS)
https://aws.amazon.com/premiumsupport/knowledge-center/configure-emr-s3-hadoop-storage/
Workloads and data where S3 is better (2/2)
22.
Detailed behavior of S3
from the viewpoint of
Hadoop/Spark
23.
(Diagram: read/write paths for S3 (EMRFS/S3A). The Spark driver on the master node and Spark executors on the slave nodes access S3 through an S3 client to the S3 API endpoint, while HDFS access goes through the NameNode on the master node and the DataNodes with local disks on the slave nodes)
24.
A split is
• A chunk of data generated by splitting the target data so that Hadoop/Spark can
process it in parallel
• Data in a splittable format (e.g. bzip2) is split based on a pre-defined size
Well-known issues
• Increased overhead due to many splits from many small files
• Out-of-memory errors due to large unsplittable files
Default split size
• HDFS: 128 MB (recommendation: HDFS block size = split size)
• S3 EMRFS: 64 MB (fs.s3.block.size)
• S3 S3A: 32 MB (fs.s3a.block.size)
Requests for unsplittable files
• S3 GetObject API, specifying the content range via the Range parameter
• Status code: 206 (Partial Content)
S3 (EMRFS/S3A): Split
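As a sketch, the number of splits for one object and the corresponding `Range` request headers can be computed like this (illustrative Python; the 64 MB default mirrors the EMRFS `fs.s3.block.size` value above):

```python
def plan_splits(object_size: int, split_size: int = 64 * 1024 * 1024):
    """Return (start, end) byte ranges, one per split, as used in
    S3 GetObject Range headers: 'bytes=start-end' (end inclusive)."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + split_size, object_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A 200 MB object with a 64 MB split size yields 4 splits.
splits = plan_splits(200 * 1024 * 1024)
headers = [f"bytes={s}-{e}" for s, e in splits]
```

The same arithmetic explains the small-files issue: many small objects each produce at least one split, inflating task-scheduling overhead.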
25.
Well-known pitfalls and tunings
- S3 consistency model
26.
PUTs of new objects: read-after-write consistency
• A consistent result is returned when you get an object immediately after putting it.
HEAD/GET of non-existing objects: eventual consistency
• If you make a HEAD or GET request to the key name (to find if the object exists)
before creating the object, S3 provides eventual consistency for read-after-write.
PUTs/DELETEs of existing objects: eventual consistency
• A process replaces an existing object and immediately attempts to read it. Until
the change is fully propagated, S3 might return the prior data.
• A process deletes an existing object and immediately attempts to read it. Until
the deletion is fully propagated, S3 might return the deleted data.
LIST of objects: eventual consistency
• A process writes a new object to S3 and immediately lists keys within its bucket.
Until the change is fully propagated, the object might not appear in the list.
• A process deletes an existing object and immediately lists keys within its bucket.
Until the deletion is fully propagated, S3 might list the deleted object.
https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel
S3 consistency model
27.
Example: an ETL data pipeline with multiple steps
• Step 1: Transforming and converting input data
• Step 2: Statistical processing of the converted data
Expected issue
• Step 2 may get an object listing that is missing some of the data written in Step 1
Workloads where impact is expected
• Immediate, incremental, consistent processing that consists of
multiple steps
Impact on Hadoop/Spark due to S3 consistency
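A toy simulation shows why Step 2 can miss Step 1's output under LIST eventual consistency. This is an illustrative model only: real S3 propagation is time-based, which is modeled here as an explicit `propagate()` call.

```python
class EventuallyConsistentStore:
    """Writes land in a pending set and only appear in listings after
    propagation, mimicking eventual LIST consistency in S3."""
    def __init__(self):
        self.visible = set()
        self.pending = set()

    def put(self, key):
        self.pending.add(key)

    def propagate(self):
        self.visible |= self.pending
        self.pending.clear()

    def list_keys(self):
        return sorted(self.visible)

store = EventuallyConsistentStore()
store.put("output/part-00000")    # Step 1 writes its output
listed = store.list_keys()        # Step 2 lists immediately: key missing
store.propagate()
listed_later = store.list_keys()  # eventually the key appears
```

A Step 2 job that consumed `listed` would silently process an incomplete input set, which is exactly the hazard the next slides mitigate.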
28.
Write into HDFS, then write into S3
• Write data into HDFS as intermediate storage, then move the data from
HDFS to S3.
• DistCp or S3DistCp can be used to move data from HDFS to S3.
• Cons: overhead of moving data, an additional intermediate process, and a delay
before the latest data is reflected
Mitigating consistency impact in Hadoop/Spark
(Diagram: the application reads input from and writes output to HDFS on the cluster, with backup/restore between HDFS and S3)
29.
S3Guard (S3A), EMRFS consistent view (EMRFS)
• Mechanisms to check S3 consistency (especially LIST consistency)
• Use DynamoDB to manage S3 object metadata
• Provide the latest view by comparing the results returned from S3 and
DynamoDB
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-consistent-view.html
Mitigating consistency impact in Hadoop/Spark
(Diagram: the cluster PUTs/GETs objects to S3 and PUTs/GETs the corresponding metadata to DynamoDB)
30.
Updating data on S3 from outside the cluster
• There can be a mismatch between the metadata in DynamoDB and the data
on S3 if you write data without using S3A or EMRFS
→ Limit basic operations to inside the cluster
→ Sync the metadata when updating S3 data from outside the cluster
EMRFS CLI: $ emrfs ...
S3Guard CLI: $ hadoop s3guard ...
Common troubles and workarounds
(Diagram: a client PUTs objects to S3 directly, bypassing the metadata stored in DynamoDB)
31.
DynamoDB I/O throttling
• Getting/updating metadata fails if there is not enough capacity in the DynamoDB table.
→ Provision enough capacity, or use on-demand mode instead
→ Retry I/O to mitigate the impact
S3A: fs.s3a.s3guard.ddb.max.retries, fs.s3a.s3guard.ddb.throttle.retry.interval, ...
→ Get notified when there is an inconsistency
EMRFS: fs.s3.consistent.notification.CloudWatch, etc.
Common troubles and workarounds
(Diagram: the cluster PUTs/GETs objects to S3 and PUTs/GETs metadata to DynamoDB)
32.
Well-known pitfalls and tunings
- S3 multipart uploads
33.
Multipart uploads are used when Hadoop/Spark uploads
large data to S3
• Both S3A and EMRFS support S3 multipart uploads.
• The size threshold can be set via parameters
EMRFS: fs.s3n.multipart.uploads.split.size, etc.
S3A: fs.s3a.multipart.threshold, etc.
On EMR: multipart uploads are always used when the EMRFS S3-
optimized committer is used
On OSS Hadoop/Spark: multipart uploads are always used
when an S3A committer is used
Hadoop/Spark and S3 multipart uploads
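Whether an upload goes multipart, and into how many parts, follows directly from the threshold and part size. A sketch of that decision (the 128 MB defaults are placeholders, not the actual EMRFS/S3A defaults; the 10,000-part cap is an S3 limit):

```python
import math

def plan_upload(size: int, threshold: int = 128 * 1024 * 1024,
                part_size: int = 128 * 1024 * 1024):
    """Return ('single', 1) for a plain PUT, or ('multipart', n_parts)."""
    if size <= threshold:
        return ("single", 1)
    parts = math.ceil(size / part_size)
    if parts > 10_000:  # S3 allows at most 10,000 parts per upload
        raise ValueError("part size too small for this object")
    return ("multipart", parts)

# A 1 KB object is a single PUT; a 1 GB object becomes 8 x 128 MB parts.
small = plan_upload(1024)
large = plan_upload(1 * 1024 ** 3)
```

Tuning `fs.s3a.multipart.threshold` or `fs.s3n.multipart.uploads.split.size` shifts where the `single`/`multipart` boundary falls.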
34.
Multipart Upload Initiation
• When you send a request to initiate a multipart upload, S3 returns a response
with an upload ID, which is a unique identifier for your multipart upload.
Parts Upload
• When uploading a part, you must specify a part number and the upload ID.
• Only after you either complete or abort a multipart upload will S3 free up the
parts storage and stop charging you for the parts storage.
Multipart Upload Completion (or Abort)
• When you complete a multipart upload, S3 creates an object by concatenating
the parts in ascending order based on the part number.
• If any object metadata was provided in the initiate multipart upload request, S3
associates that metadata with the object.
• After a successful complete request, the parts no longer exist.
Steps in multipart uploads
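The initiate / upload-part / complete lifecycle above can be modeled in a few lines. This is an illustrative in-memory model, not the real S3 API; with boto3 the corresponding calls would be `create_multipart_upload`, `upload_part`, and `complete_multipart_upload`.

```python
import uuid

class MultipartUpload:
    """In-memory model of the S3 multipart upload lifecycle."""
    def __init__(self):
        self.upload_id = uuid.uuid4().hex  # returned by initiation
        self.parts = {}                    # part_number -> bytes; billed until complete/abort

    def upload_part(self, part_number: int, data: bytes):
        self.parts[part_number] = data

    def complete(self) -> bytes:
        # S3 concatenates parts in ascending part-number order
        obj = b"".join(self.parts[n] for n in sorted(self.parts))
        self.parts.clear()                 # parts no longer exist afterwards
        return obj

mpu = MultipartUpload()
mpu.upload_part(2, b"world")               # parts may arrive out of order
mpu.upload_part(1, b"hello ")
result = mpu.complete()
```

Note the model also captures why abandoned uploads cost money: until `complete()` (or an abort), the parts dictionary keeps holding data.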
35.
Remaining multipart uploads
• Parts might remain when jobs are aborted or clusters are terminated
unexpectedly.
• The S3 console does not show the remaining parts of incomplete uploads
→ Delete remaining parts periodically with an S3 lifecycle rule
→ Configure multipart-related parameters
EMRFS: fs.s3.multipart.clean.enabled, etc.
S3A: fs.s3a.multipart.purge, etc.
→ You can check whether there are remaining parts via the AWS CLI
Common troubles and workarounds
$ aws s3api list-multipart-uploads --bucket bucket-name
36.
Well-known pitfalls and tunings
- S3 request performance
37.
Request performance (per prefix)
• 3,500 PUT/POST/DELETE requests per second
• 5,500 GET requests per second
• “HTTP 503 Slowdown” might be returned when the request rate exceeds these limits
https://www.slideshare.net/AmazonWebServices/best-practices-for-amazon-s3-and-amazon-glacier-stg203r2-aws-reinvent-2018/50
In cases where it is difficult to split the prefixes due to the use-case
• Queries over multiple prefixes (e.g. a query with ‘*’ not specifying ‘US’/‘CA’)
→ Reach out to AWS Support to get proactive support
S3 performance and throttling
Performance improves if keys are split into multiple prefixes:
s3://MyBucket/customers/dt=yyyy-mm-dd/0000001.csv
→ s3://MyBucket/customers/US/dt=yyyy-mm-dd/0000001.csv
→ s3://MyBucket/customers/CA/dt=yyyy-mm-dd/0000002.csv
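Because the limits are per prefix, they compose linearly: the sustainable aggregate rate is simply the per-prefix limit times the number of prefixes. A back-of-the-envelope sketch using the numbers above:

```python
PUT_PER_PREFIX = 3_500   # PUT/POST/DELETE requests/s per prefix
GET_PER_PREFIX = 5_500   # GET requests/s per prefix

def aggregate_limits(n_prefixes: int) -> dict:
    """Sustainable aggregate request rates across n independent prefixes."""
    return {"put": n_prefixes * PUT_PER_PREFIX,
            "get": n_prefixes * GET_PER_PREFIX}

# Splitting customers/ into US/ and CA/ doubles the sustainable rate.
one_prefix = aggregate_limits(1)
two_prefixes = aggregate_limits(2)
```

This is why partitioning keys by a high-cardinality dimension (country, date, hash) helps a heavily parallel Hadoop/Spark job avoid 503 Slowdown responses.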
38.
S3 connections
• Configure the number of connections to adjust the concurrency of
S3 requests
EMRFS: fs.s3.maxConnections, etc.
S3A: fs.s3a.connection.maximum, etc.
S3 request retries
• Configure request retry behavior to handle request throttling
EMRFS: fs.s3.retryPeriodSeconds (EMR 5.14 or later), fs.s3.maxRetries (EMR 5.12 or
later), etc.
S3A: fs.s3a.retry.throttle.limit, fs.s3a.retry.throttle.interval, etc.
Tuning S3 requests
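The retry parameters above implement the usual exponential backoff pattern for throttled requests. A generic sketch of that pattern (not the S3A/EMRFS source code; `RuntimeError` stands in for an HTTP 503 Slowdown response):

```python
import random
import time

def backoff_delays(max_retries: int, base: float = 0.2, cap: float = 5.0,
                   rng=random.Random(42)):
    """Full-jitter exponential backoff: delay_i ~ U(0, min(cap, base * 2**i))."""
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(max_retries)]

def call_with_retries(request, max_retries: int = 5):
    """Retry `request` when it raises a throttling error."""
    for delay in backoff_delays(max_retries):
        try:
            return request()
        except RuntimeError:      # stand-in for a 503 Slowdown response
            time.sleep(delay)
    return request()              # final attempt; any error propagates

# Demo: a request that is throttled twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("503 Slowdown")
    return "ok"

result = call_with_retries(flaky)
```

Jitter spreads out the retries of many concurrent executors so they do not all hammer the same prefix again at the same instant.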
39.
Well-known pitfalls and tunings
- Others
40.
Common troubles
• Performance degrades when writing data into Hive tables on S3
⎼ Lack of parallelism in write I/O
⎼ Not only output data but also intermediate data is written to S3
Workarounds
• Parallelism
⎼ hive.mv.files.threads
• Intermediate data
⎼ hive.blobstore.use.blobstore.as.scratchdir = false
⎼ There is an example that achieved 10x faster performance.
https://issues.apache.org/jira/browse/HIVE-14269
https://issues.apache.org/jira/browse/HIVE-14270
Hive Write performance tuning
41.
Common troubles
• Slow file uploads via S3A
• Too much disk space or memory is consumed when uploading data
Workarounds
• Tune the S3A Fast Upload parameters
⎼ fs.s3a.fast.upload.buffer: disk, array, bytebuffer
⎼ fs.s3a.fast.upload.active.blocks
Tuning S3A Fast Upload
42.
Service updates on AWS/S3
related to Hadoop/Spark
43.
Previous request performance
• 100 PUT/LIST/DELETE requests per second
• 300 GET requests per second
Current request performance (per prefix)
• 3,500 PUT/POST/DELETE requests per second
• 5,500 GET requests per second
2018.7: S3 request performance improvement
44.
S3 SELECT is
• A feature that enables querying only the
required data from an object
• Supports queries from the API and the S3
console
• Can retrieve up to 40 MB of
records from a source file of up to 128 MB
Supported formats
• CSV
• JSON
• Parquet <-New!
https://aws.amazon.com/jp/about-aws/whats-new/2018/09/amazon-s3-announces-
2018.9: S3 SELECT supports Parquet format
45.
EMRFS supports pushdown using S3 SELECT queries
• Expected outcome: performance improvement, faster data transfer
• Supported applications: Hive, Spark, Presto
• How to use: configure per application
Note: EMRFS does not automatically decide whether to use S3 SELECT based on the
workload.
• Guidelines to determine whether your application is a candidate for S3 Select:
⎼ Your query filters out more than half of the original data set.
⎼ Your network connection between S3 and the EMR cluster has good transfer
speed and available bandwidth.
⎼ Your query's filter predicates use columns that have a data type supported by
both S3 Select and the application (Hive/Spark/Presto)
→ Benchmark to verify whether S3 SELECT is better for your
workloads
Hive: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive-s3select.html
Spark: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3select.html
Presto: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto-s3select.html
2018.10: EMRFS supports pushdown by S3 SELECT
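The first guideline (filter out more than half of the data) can be framed as comparing the bytes that cross the network with and without pushdown. This is a deliberately simplified model; real savings also depend on S3 Select scan/return pricing and the file format.

```python
def s3_select_is_candidate(object_bytes: int, selectivity: float) -> bool:
    """selectivity = fraction of the object's data the query actually needs.
    Without S3 Select the whole object is transferred to the cluster;
    with pushdown, roughly selectivity * object_bytes is.
    The >50%-filtered guideline means selectivity must be below 0.5."""
    transferred_with_select = selectivity * object_bytes
    return transferred_with_select < 0.5 * object_bytes

# A query keeping 20% of a 1 GB object is a candidate; one keeping 80% is not.
keeps_20pct = s3_select_is_candidate(1 << 30, 0.2)
keeps_80pct = s3_select_is_candidate(1 << 30, 0.8)
```

As the slide recommends, a real benchmark should still confirm the estimate for your data and cluster.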
46.
The EMRFS S3-optimized committer is
• An output committer introduced in EMR 5.19.0 (the default in
5.20.0 or later)
• Used when you use Spark SQL / DataFrames / Datasets to write
Parquet files
• Based on S3 multipart uploads
Pros
• Improves performance by avoiding S3 LIST/RENAME during the job/task
commit phase
• Improves the correctness of jobs with failed tasks by avoiding S3 consistency
impact during the job/task commit phase
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html
https://aws.amazon.com/blogs/big-data/improve-apache-spark-write-performance-on-apache-parquet-formats-with-the-emrfs-s3-optimized-committer/
2018.11: EMRFS S3-optimized committer
47.
Difference between FileOutputCommitter and the EMRFS S3-optimized
committer
• FileOutputCommitterV1: 2-phase RENAME
⎼ RENAME to commit individual task output
⎼ RENAME to commit the whole job output from completed/succeeded tasks
• FileOutputCommitterV2: 1-phase RENAME
⎼ RENAME to commit files to the final destination
⎼ Note: intermediate data becomes visible before the job completes
(Both versions use RENAME operations to write data into an intermediate location.)
• EMRFS S3-optimized committer
⎼ Avoids RENAME by taking advantage of S3 multipart uploads
The reason to focus on RENAME
• HDFS RENAME: a metadata-only operation. Fast.
• S3 RENAME: N object copies and deletions. Slow.
2018.11: EMRFS S3-optimized committer
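The cost asymmetry is easy to see in a toy model: an HDFS directory rename touches one metadata entry, while an S3 "rename" of a directory's contents is a COPY plus a DELETE per object (S3 has no native rename).

```python
def s3_rename_operations(keys):
    """S3 has no rename: each object is COPYed to its new key, then DELETEd."""
    ops = []
    for key in keys:
        ops.append(("COPY", key))
        ops.append(("DELETE", key))
    return ops

def hdfs_rename_operations(keys):
    """HDFS renames a whole directory with a single metadata update,
    regardless of how many files it contains."""
    return [("RENAME_METADATA", "dir")]

# Committing 100 task-output files:
task_output = [f"part-{i:05d}" for i in range(100)]
s3_ops = s3_rename_operations(task_output)      # grows with file count
hdfs_ops = hdfs_rename_operations(task_output)  # constant
```

A two-phase committer pays this per-object cost twice on S3, which is exactly what the multipart-upload-based committer avoids.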
48.
Performance comparison
• EMR 5.19.0 (master m5d.2xlarge / core m5d.2xlarge x 8 nodes)
• Input data: 15 GB (100 Parquet files)
2018.11: EMRFS S3-optimized committer
(Charts: results with EMRFS consistent view disabled and with it enabled)
Benchmark query:
INSERT OVERWRITE DIRECTORY ‘s3://${bucket}/perf-test/${trial_id}’
USING PARQUET SELECT * FROM range(0, ${rows}, 1, ${partitions});
49.
Provisioned
• Configure provisioned capacity for read/write I/O
On-demand
• No need to configure capacity (auto-scales based on the workload)
• EMRFS consistent view and S3Guard can take advantage of this.
https://aws.amazon.com/jp/blogs/news/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
2018.11: DynamoDB On-Demand
50.
Recent activities in
Hadoop/Spark community
related to S3
51.
HADOOP-16132: Support multipart download in S3AFileSystem
• Improves download performance, referring to the AWS CLI implementation.
https://issues.apache.org/jira/browse/HADOOP-16132
HADOOP-15364: Add support for S3 Select to S3A
• S3A supports S3 SELECT
https://issues.apache.org/jira/browse/HADOOP-15364
JIRA – S3A
52.
HADOOP-15999: S3Guard: Better support for out-of-band operations
• Improve handling of files updated from outside S3Guard
https://issues.apache.org/jira/browse/HADOOP-15999
HADOOP-15837: DynamoDB table Update can fail S3A FS init
• Improve S3Guard initialization when DynamoDB Auto Scaling is enabled
https://issues.apache.org/jira/browse/HADOOP-15837
HADOOP-15619: Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features
• Parent JIRA for Hadoop 3.3 S3Guard work (Hadoop 3.0, 3.1, and 3.2 have their own specific JIRAs)
https://issues.apache.org/jira/browse/HADOOP-15619
HADOOP-15426: Make S3guard client resilient to DDB throttle events and network
failures
• Improve S3Guard CLI behavior when DynamoDB is throttling
https://issues.apache.org/jira/browse/HADOOP-15426
HADOOP-15349: S3Guard DDB retryBackoff to be more informative on limits
exceeded
• Improve S3Guard behavior when DynamoDB is throttling
JIRA – S3Guard
53.
HADOOP-15281: Distcp to add no-rename copy option
• Adds a new DistCp option that avoids RENAME (mainly for S3)
https://issues.apache.org/jira/browse/HADOOP-15281
HIVE-20517: Creation of staging directory and Move operation is
taking time in S3
• Changes Hive's behavior to write data directly to the final destination, avoiding
RENAME operations.
https://issues.apache.org/jira/browse/HIVE-20517
JIRA – Others
54.
Issue
• Spark writes intermediate files and then RENAMEs them when writing data
• Even intermediate data is written to S3, which causes slow performance
• Related to HIVE-14270
Approach
• Divide the locations (HDFS for intermediate files, S3 for the final destination)
• Expected outcome: performance improvement, S3 cost reduction
Current status
• My implementation is 2x faster, but still in the testing phase.
https://issues.apache.org/jira/browse/SPARK-21514
SPARK-21514:Hive has updated with new support for
S3 and InsertIntoHiveTable.scala should update also
55.
Conclusion
56.
Relationship between Hadoop/Spark and S3
Difference between HDFS and S3, and use-case
Detailed behavior of S3 from the viewpoint of Hadoop/Spark
Well-known pitfalls and tunings
Service updates on AWS/S3 related to Hadoop/Spark
Recent activities in Hadoop/Spark community related to S3
Conclusion
57.
Appendix
58.
Transient clusters
• Batch job
• One-time data conversion
• Machine learning
• ETL into other DWH or data lake
Persistent clusters
• Ad-hoc jobs
• Streaming processing
• Continuous data conversion
• Notebook
• Experiments
Major use-case of Hadoop/Spark on AWS
59.
Resource/Daemon logs
• Name Node logs
• Data Node logs
• HDFS block reports
Request logs
• HDFS audit logs
Metrics
• Hadoop Metrics v2
Useful information in troubleshooting HDFS
60.
Request logs
• S3 access logs
⎼ Logs are written if you configure the S3 bucket in advance.
• CloudTrail
⎼ Management events: records control-plane operations
⎼ Data events: records data-plane operations (needs to be configured)
Metrics
• CloudWatch S3 metrics
⎼ Storage metrics
– There are 2 types of metrics; bucket size and the number of objects
– Updated once a day
⎼ Request metrics (Need to be configured)
– 16 types of metrics including request counts (GET, PUT, HEAD, …) and 4XX/5XX errors
– Updated once a minute
Useful information in troubleshooting S3