Hadoop for Administrators

Course Number:

N/A

Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem; how to plan cluster deployment and growth; and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice bulk data loading into the cluster, gain familiarity with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course concludes with a discussion of securing the cluster with Kerberos.

This class can be extended to four days with the optional addition of a Cloudera or Hortonworks specialization track (see Optional Tracks below).

Audience:

This class is designed especially for Hadoop administrators.
Course Duration:
3 to 4 days

Prerequisites:

Participants should be comfortable with basic Linux system administration and possess basic scripting skills. Knowledge of Hadoop and distributed computing is not required as it will be introduced and explained in the course.

Course Outline:
  • Introduction
    • Hadoop History, Concepts
    • Ecosystem
    • Distributions
    • High-Level Architecture
    • Hadoop Myths
    • Hadoop Challenges (Hardware / Software)

  • Planning and Installation
    • Selecting Software, Hadoop Distributions
    • Sizing The Cluster, Planning for Growth
    • Selecting Hardware and Network
    • Rack Topology
    • Installation
    • Multi-Tenancy
    • Directory Structure, Logs
    • Benchmarking
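
A representative baseline is the TeraGen/TeraSort pair shipped in Hadoop's examples JAR. A minimal sketch, assuming the JAR lives under /usr/lib/hadoop-mapreduce (the exact path and version vary by distribution):

    # Generate ~10 GB of input (100 million 100-byte rows), then sort it
    $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 100000000 /bench/teragen
    $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /bench/teragen /bench/terasort

Re-running the same job before and after configuration changes gives a simple, repeatable measure of cluster throughput.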


  • HDFS Operations
    • Concepts (Horizontal Scaling, Replication, Data Locality, Rack Awareness)
    • Nodes and Daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode)
    • Health Monitoring
    • Command-Line and Browser-Based Administration (see the command examples after this list)
    • Adding Storage, Replacing Defective Drives
    • Data Ingestion
    • Flume for Ingesting Logs and Other Data into HDFS
    • Sqoop for Importing from SQL Databases to HDFS and Exporting Back to SQL
    • Hadoop Data Warehousing with Hive
    • Copying Data Between Clusters (distcp)
    • Using S3 as Complementary to HDFS
    • Data Ingestion Best Practices and Architectures
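
As a taste of the command-line administration, copying and ingestion topics above, a few representative commands follow; the hostnames, database and directory names are illustrative placeholders:

    # Cluster-wide capacity and DataNode health report
    $ hdfs dfsadmin -report

    # Copy a directory between clusters with distcp (NameNode URIs are placeholders)
    $ hadoop distcp hdfs://nn1.example.com:8020/data/logs hdfs://nn2.example.com:8020/backup/logs

    # Import a SQL table into HDFS with Sqoop (connection details are placeholders)
    $ sqoop import --connect jdbc:mysql://db.example.com/sales --table orders \
          --username loader -P --target-dir /warehouse/orders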


  • MapReduce Operations and Administration
    • Parallel Computing Before MapReduce: Comparing HPC and Hadoop Administration
    • MapReduce Cluster Loads
    • Nodes and Daemons (JobTracker, TaskTracker)
    • MapReduce UI Walk Through
    • MapReduce Configuration
    • Job Config (see the example after this list)
    • Optimizing MapReduce
    • Fool-Proofing MR: What to Tell Your Programmers
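
To make the configuration topics above concrete, here is one way to override settings per job and inspect running jobs. The property names shown are the Hadoop 2.x equivalents of older MRv1 names such as mapred.reduce.tasks, and the JAR path is illustrative:

    # Run the bundled wordcount example with per-job configuration overrides
    $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount \
          -D mapreduce.job.reduces=4 -D mapreduce.map.memory.mb=2048 /data/in /data/out

    # List jobs currently known to the cluster
    $ mapred job -list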


  • YARN: New Architecture and New Capabilities
    • YARN Design Goals and Implementation Architecture
    • New Actors: ResourceManager, NodeManager, ApplicationMaster (see the command examples after this list)
    • Installing YARN
    • Job Scheduling Under YARN
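
For orientation, the YARN command-line tools expose the new daemons directly; the application ID below is a placeholder:

    # Show NodeManagers registered with the ResourceManager
    $ yarn node -list

    # List applications known to the ResourceManager
    $ yarn application -list

    # Fetch aggregated logs for a finished application (log aggregation must be enabled)
    $ yarn logs -applicationId application_1400000000000_0001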


  • Advanced Topics
    • Hardware Monitoring
    • Cluster Monitoring
    • Adding and Removing Servers, Upgrading Hadoop (see the decommissioning example after this list)
    • Backup, Recovery and Business Continuity Planning
    • Oozie Job Workflows
    • Hadoop High Availability (HA)
    • Hadoop Federation
    • Securing Your Cluster with Kerberos
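
As a preview of the maintenance and security labs, the sketch below decommissions a DataNode and authenticates on a Kerberized cluster. The hostname, keytab path, principal and realm are placeholders, and the excludes file is whatever dfs.hosts.exclude points to in your configuration:

    # Decommission a DataNode: list it in the excludes file, then refresh
    $ echo "worker42.example.com" >> /etc/hadoop/conf/dfs.exclude
    $ hdfs dfsadmin -refreshNodes

    # On a secured cluster, obtain a Kerberos ticket before running admin commands
    $ kinit -kt /etc/security/keytabs/hdfs.keytab hdfs@EXAMPLE.COM
    $ klist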


  • Optional Tracks
    • Cloudera Manager: Installation and Use for Cluster Administration, Monitoring and Routine Tasks
      • In this track, all exercises and labs are performed within the Cloudera distribution environment (CDH 5)
    • Ambari: Installation and Use for Cluster Administration, Monitoring and Routine Tasks
      • In this track, all exercises and labs are performed within the Ambari cluster manager and the Hortonworks Data Platform (HDP 2.0)
