Big data and Hadoop

  1. What is Hadoop
  2. HDFS
  3. MapReduce
  4. HBase
  5. Pig
  6. Hive
  7. Chukwa
  8. ZooKeeper
  9. Agents to collect and process logs
  10. Monitoring and analysis
  11. Affordable storage/compute
  12. Structured or unstructured
  13. Resilient auto-scalability
  14. Relational databases
  15. Interactive response times
  16. ACID
  17. Structured data
  18. Cost/scale prohibitive

Editor's Notes

  1. Analyzing large amounts of data is predicted to be the top required skill.
  2. Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present on at least 3 separate data nodes. A typical Hadoop node has eight cores, 16 GB of RAM, and four 1 TB SATA disks. The default block size is 64 MB, though most folks now set it to 128 MB.
  3. Example flow as at Facebook
  4. An aircraft is refined, very fast, and has a lot of add-ons/features, but it is pricey on a per-bit basis and expensive to maintain. A cargo train is rough, missing a lot of “luxury”, and slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically.
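The replication and block-size figures in note 2 (128 MB blocks, replication factor 3) lend themselves to a quick back-of-the-envelope calculation. Below is a minimal Python sketch of that arithmetic, assuming only the numbers stated in the note; the function name `hdfs_footprint` is illustrative, not part of any Hadoop API.

```python
def hdfs_footprint(file_size_mb, block_size_mb=128, replication=3):
    """Estimate (number of blocks, raw storage consumed in MB) for one file.

    Defaults follow the note: 128 MB blocks, replication factor 3.
    The function name is hypothetical, for illustration only.
    """
    # Ceiling division: a partial final block still occupies one block slot.
    num_blocks = -(-file_size_mb // block_size_mb)
    # With replication 3, every block lives on 3 separate data nodes,
    # so the file consumes 3x its logical size in raw cluster disk.
    raw_storage_mb = file_size_mb * replication
    return num_blocks, raw_storage_mb

# A 1 GB (1024 MB) file: 8 blocks of 128 MB, 3072 MB of raw disk cluster-wide.
blocks, raw = hdfs_footprint(1024)
print(blocks, raw)  # 8 3072
```

This also makes the note's point about block size concrete: the same 1 GB file at the older 64 MB default would split into 16 blocks, doubling the metadata the NameNode must track, which is one reason larger blocks became common.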