Cloudera - Administrator for Apache Hadoop Training

Live Online & Classroom Enterprise Certification Training

A certified Cloudera Administrator for Apache Hadoop has explicitly demonstrated the technical knowledge, skills, and ability to configure, deploy, manage, and secure an Apache Hadoop cluster.

Looking for a private batch?

REQUEST A CALLBACK
Key Features
  • Lifetime Access

  • CloudLabs

  • 24x7 Support

  • Real-time code analysis and feedback

  • 100% Money Back Guarantee

What is Cloudera Hadoop Admin training about?

Starting from the very basics, this advanced course on Apache Hadoop Administration teaches you the functions and major goals of HDFS, explains how YARN works, and then covers Hadoop cluster planning, installation, and administration. The course also covers resource management, logging, and monitoring.

 

What are the objectives of Cloudera Hadoop Admin training?

  • Understand the Hadoop Distributed File System (HDFS)
  • Understand YARN
  • Learn Hadoop cluster planning, installation, and administration

Available Training Modes

Live Online Training

18 Hours

Classroom Training

 

3 Days

Who is Cloudera Hadoop Admin training for?

  • Anyone who wants to add Cloudera - Administrator for Apache Hadoop skills to their profile
  • Teams getting started on Cloudera - Administrator for Apache Hadoop projects

What are the prerequisites for Cloudera Hadoop Admin training?

Basic networking concepts and operating system knowledge

Course Outline

    • HDFS
      • Describe the function of HDFS daemons
      • Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing
      • Identify current features of computing systems that motivate a system like Apache Hadoop
      • Classify major goals of HDFS Design
      • Given a scenario, identify appropriate use case for HDFS Federation
      • Identify the components and daemons of an HDFS HA-Quorum cluster
      • Analyze the role of HDFS security (Kerberos)
      • Determine the best data serialization choice for a given scenario
      • Describe file read and write paths
      • Identify the commands to manipulate files in the Hadoop File System Shell (see the file-operations sketch after this outline)
    • YARN
      • Understand how to deploy core ecosystem components, including Spark, Impala, and Hive
      • Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons
      • Understand basic design strategy for YARN and Hadoop
      • Determine how YARN handles resource allocations
      • Identify the workflow of a job running on YARN (see the YARN client sketch after this outline)
      • Determine which files you must change and how in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN
    • Hadoop Cluster Planning
      • Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster
      • Analyze the choices in selecting an OS
      • Understand kernel tuning and disk swapping
      • Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario
      • Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA
      • Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O
      • Disk Sizing and Configuration, including JBOD vs RAID, SANs, virtualization, and disk sizing requirements in a cluster
      • Network Topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario
    • Hadoop Cluster Installation and Administration
      • Given a scenario, identify how the cluster will handle disk and machine failures
      • Analyze a logging configuration and logging configuration file format
      • Understand the basics of Hadoop metrics and cluster health monitoring
      • Identify the function and purpose of available tools for cluster monitoring
      • Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig
      • Identify the function and purpose of available tools for managing the Apache Hadoop file system
    • Resource Management
      • Understand the overall design goals of each of Hadoop's schedulers
      • Given a scenario, determine how the FIFO Scheduler allocates cluster resources
      • Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN
      • Given a scenario, determine how the Capacity Scheduler allocates cluster resources
    • Monitoring and Logging
      • Understand the functions and features of Hadoop's metric collection abilities
      • Analyze the NameNode and JobTracker Web UIs
      • Understand how to monitor cluster daemons
      • Identify and monitor CPU usage on master nodes
      • Describe how to monitor swap and memory allocation on all nodes
      • Identify how to view and manage Hadoop's log files
      • Interpret a log file
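
As an illustration of the file-manipulation topic in the HDFS section above, here is a minimal sketch - not part of the official course material - using the standard org.apache.hadoop.fs API. The paths are hypothetical placeholders, and each operation is roughly the programmatic equivalent of a familiar hdfs dfs shell command (-mkdir, -put, -cat, -ls, -rm -r).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFileOps {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS in core-site.xml points at the cluster's NameNode
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/user/training/demo");             // hypothetical path
        fs.mkdirs(dir);                                          // ~ hdfs dfs -mkdir -p

        Path file = new Path(dir, "hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {   // ~ hdfs dfs -put
            out.writeUTF("hello hdfs");
        }

        try (FSDataInputStream in = fs.open(file)) {             // ~ hdfs dfs -cat
            System.out.println(in.readUTF());
        }

        for (FileStatus status : fs.listStatus(dir)) {           // ~ hdfs dfs -ls
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        fs.delete(dir, true);                                    // ~ hdfs dfs -rm -r
        fs.close();
    }
}
```

The outline itself refers to the Hadoop File System Shell; the Java form is shown here only because it makes the individual read and write operations explicit.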
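For the YARN and monitoring topics, the sketch below - again illustrative only, assuming the standard org.apache.hadoop.yarn.client.api.YarnClient API - shows how an administrator might query the ResourceManager for the NodeManagers it can allocate resources from and the applications it is currently tracking.

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnClusterInfo {
    public static void main(String[] args) throws Exception {
        // Assumes yarn-site.xml on the classpath identifies the ResourceManager
        YarnConfiguration conf = new YarnConfiguration();
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(conf);
        yarn.start();

        // Number of NodeManagers registered, i.e. the nodes YARN can place containers on
        System.out.println("NodeManagers: "
                + yarn.getYarnClusterMetrics().getNumNodeManagers());

        // One line per application the ResourceManager knows about
        List<ApplicationReport> apps = yarn.getApplications();
        for (ApplicationReport app : apps) {
            System.out.println(app.getApplicationId() + "  "
                    + app.getName() + "  " + app.getYarnApplicationState());
        }

        yarn.stop();
    }
}
```

The same information is available from the yarn command line (yarn node -list, yarn application -list) and from the ResourceManager web UI, which the monitoring and logging topics above cover.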

Who is the instructor for this training?

The trainer for this Cloudera - Administrator for Apache Hadoop course has extensive experience in this domain, including years of experience training and mentoring professionals.

Cloudera - Administrator for Apache Hadoop - Certification & Exam

SpringPeople works with top industry experts to identify the leading certification bodies for different technologies - bodies that are well respected in the industry and globally accepted as clear evidence of a professional’s “proven” expertise in the technology. As such, these certifications are a high value-add to a CV and can give a massive boost to a professional's career growth.


Our certification courses are fully aligned to these high-profile certification exams.

Reviews