CCDH Hadoop Developer Practice Exam

This course at My-Classes provides access to a CCDH practice exam so that candidates can self-assess their exam readiness against the full set of certification objectives, which focus on engineering data solutions in MapReduce and on understanding the Hadoop ecosystem (including Hive, Pig, Sqoop, Oozie, Crunch, and Flume).

This Hadoop Developer Practice Exam is the most effective way to work towards CCDH:

  • 50 questions simulating the live Cloudera certification exam, with a 90-minute time limit.
  • A rich pool of 90+ questions based on numerous interviews and authentic courseware.
  • A review of correct and incorrect answers to help you identify your mistakes.
  • Mobile-enabled for smartphones and tablets — study anytime, anywhere.
  • Unlimited retakes.

Take this course at My-Classes and measure your readiness for the real certification exam.

Exam Sections

Infrastructure: Hadoop components outside the scope of a particular MapReduce job that a developer nevertheless needs to master (25%)
Data Management: Developing, implementing, and executing commands to properly manage the full data lifecycle of a Hadoop job (30%)
Job Mechanics: The processes and commands for job control and execution with an emphasis on the process rather than the data (25%)
Querying: Extracting information from data (20%)

  1. Infrastructure Objectives
    • Recognize and identify Apache Hadoop daemons and how they function in both data storage and data processing.
    • Understand how Apache Hadoop exploits data locality.
    • Identify the role and use of both MapReduce v1 (MRv1) and MapReduce v2 (MRv2 / YARN) daemons.
    • Analyze the benefits and challenges of the HDFS architecture.
    • Analyze how HDFS implements file sizes, block sizes, and block abstraction.
    • Understand default replication values and storage requirements for replication.
    • Determine how HDFS stores, reads, and writes files (see the sketch after this list).
    • Identify the role of Apache Hadoop classes, interfaces, and methods.
    • Understand how Hadoop Streaming might apply to a job workflow.
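
    A minimal sketch of the HDFS read path using the Hadoop FileSystem Java API, tying the storage objectives above to code. The path /user/demo/input.txt is a hypothetical example; the block size and replication factor it prints are per-file HDFS metadata.

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HdfsRead {
          public static void main(String[] args) throws Exception {
              // Reads fs.defaultFS and related settings from core-site.xml.
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);

              Path path = new Path("/user/demo/input.txt"); // hypothetical path
              FileStatus status = fs.getFileStatus(path);
              // Block size and replication factor are stored per file.
              System.out.println("block size: " + status.getBlockSize()
                      + ", replication: " + status.getReplication());

              // open() streams each block from a DataNode, preferring a
              // local replica when one exists (data locality).
              try (BufferedReader reader = new BufferedReader(
                      new InputStreamReader(fs.open(path)))) {
                  String line;
                  while ((line = reader.readLine()) != null) {
                      System.out.println(line);
                  }
              }
          }
      }
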
  2. Data Management Objectives
    • Import a database table into Hive using Sqoop.
    • Create a table using Hive (during Sqoop import).
    • Successfully use key and value types to write functional MapReduce jobs.
    • Given a MapReduce job, determine the lifecycle of a Mapper and the lifecycle of a Reducer.
    • Analyze and determine the relationship of input keys to output keys in terms of both type and number, the sorting of keys, and the sorting of values.
    • Given sample input data, identify the number, type, and value of emitted keys and values from the Mappers as well as the emitted data from each Reducer and the number and contents of the output file(s).
    • Understand the implementation of, limitations of, and strategies for joining datasets in MapReduce.
    • Understand how partitioners and combiners function, and recognize appropriate use cases for each.
    • Recognize the processes and role of the sort and shuffle process.
    • Understand common key and value types in the MapReduce framework and the interfaces they implement (see the sketch after this list).
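
    As a concrete illustration of the key and value objectives above, here is a minimal word-count sketch using the org.apache.hadoop.mapreduce (MRv2) API. The class names are illustrative; note how the declared generic types fix the type of the emitted keys and values.

      import java.io.IOException;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.Reducer;

      // With TextInputFormat, input keys are byte offsets (LongWritable)
      // and input values are lines (Text); the Mapper emits (word, 1).
      public class WordCountMapper
              extends Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          @Override
          protected void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              for (String token : value.toString().split("\\s+")) {
                  if (!token.isEmpty()) {
                      word.set(token);
                      context.write(word, ONE);
                  }
              }
          }
      }

      // After sort and shuffle, each reduce() call receives one word with
      // all of its counts. Because summing is associative and commutative,
      // this class can also be registered as a combiner.
      class WordCountReducer
              extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          protected void reduce(Text key, Iterable<IntWritable> values,
                  Context context) throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable v : values) {
                  sum += v.get();
              }
              context.write(key, new IntWritable(sum));
          }
      }
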
  3. Job Mechanics Objectives
    • Construct proper job configuration parameters and the commands used in job submission (see the driver sketch after this list).
    • Analyze a MapReduce job and determine how input and output data paths are handled.
    • Given a sample job, analyze and determine the correct InputFormat and OutputFormat to select based on job requirements.
    • Analyze the order of operations in a MapReduce job.
    • Understand the role of the RecordReader, and of sequence files and compression.
    • Use the distributed cache to distribute data to MapReduce job tasks.
    • Build and orchestrate a workflow with Oozie.
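
    A minimal driver sketch tying these job mechanics together, assuming the WordCountMapper and WordCountReducer classes from the previous sketch; the cached file path is a hypothetical example.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
      import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
      import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
      import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

      public class WordCountDriver {
          public static void main(String[] args) throws Exception {
              Job job = Job.getInstance(new Configuration(), "word count");
              job.setJarByClass(WordCountDriver.class);

              job.setMapperClass(WordCountMapper.class);
              job.setCombinerClass(WordCountReducer.class); // map-side pre-aggregation
              job.setReducerClass(WordCountReducer.class);

              // The InputFormat splits the input and supplies the RecordReader;
              // the OutputFormat controls how results are written.
              job.setInputFormatClass(TextInputFormat.class);
              job.setOutputFormatClass(TextOutputFormat.class);

              job.setOutputKeyClass(Text.class);
              job.setOutputValueClass(IntWritable.class);

              // Input and output paths come from the command line; the output
              // directory must not already exist, or submission fails.
              FileInputFormat.addInputPath(job, new Path(args[0]));
              FileOutputFormat.setOutputPath(job, new Path(args[1]));

              // Distributed cache: ship a small side file (hypothetical path)
              // to every map and reduce task.
              job.addCacheFile(new Path("/user/demo/stopwords.txt").toUri());

              System.exit(job.waitForCompletion(true) ? 0 : 1);
          }
      }

    A job packaged this way is typically submitted with hadoop jar wordcount.jar WordCountDriver <input dir> <output dir>.
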
  4. Querying Objectives
    • Write a MapReduce job to query data stored in Hive.
    • Write a MapReduce job to query data stored in HDFS (see the sketch below).
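
    As a sketch of the querying objectives, the map-only Mapper below filters HDFS records the way a SQL WHERE clause would; the tab-delimited layout and the literal "US" are assumptions. The same approach reaches Hive-managed data, since a Hive table is ultimately a directory of files in HDFS (by default under /user/hive/warehouse).

      import java.io.IOException;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.NullWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;

      // Map-only "query": emits only lines whose first tab-delimited field
      // equals "US", roughly SELECT * FROM t WHERE col0 = 'US'. Configure
      // the job with job.setNumReduceTasks(0) so map output is final.
      public class SelectWhereMapper
              extends Mapper<LongWritable, Text, Text, NullWritable> {
          @Override
          protected void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              String[] fields = value.toString().split("\t");
              if (fields.length > 0 && "US".equals(fields[0])) {
                  context.write(value, NullWritable.get());
              }
          }
      }
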
