
Apache Spark Scala Certification Training (Self-Paced Learning)

> Spark is used at leading companies including Microsoft, Amazon, and IBM; LinkedIn, Twitter, and Netflix are among the companies using Scala
> Global Spark market revenue is projected to reach $4.2 billion by 2022, growing at a CAGR of 67% – Marketanalysis.com
> The average salary for a Spark developer stands at $108,366 – Indeed.com

USD 299 (regular price: USD 499)

Course Overview

Apache Spark and Scala Certification Training is designed to prepare you for the Cloudera Hadoop and Spark Developer Certification Exam (CCA175). You will gain in-depth knowledge of Apache Spark and the Spark ecosystem, which includes Spark RDDs, Spark SQL, Spark MLlib, and Spark Streaming. You will also gain comprehensive knowledge of the Scala programming language, HDFS, Sqoop, Flume, Spark GraphX, and messaging systems such as Kafka.

Key Highlights

  • 36 hours of online self-paced learning
  • Real-life Case Studies
  • Assessments
  • Lifetime Access
  • 24 x 7 Expert Support
  • Certification
  • Community forum
  • Apache Spark & Scala Projects

What You'll Learn

  • Scala programming language
  • Spark installation process
  • Resilient Distributed Datasets (RDDs)
  • Spark SQL
  • Spark Streaming features
  • Spark ML programming
  • GraphX programming
     

Career Benefits

  • Spark is one of the top ten in-demand technical skills
  • Better career opportunities
  • Higher salary

Who Can Attend

  • Any individual aspiring for a career in Big Data
  • Analysts
  • Researchers
  • IT developers 
  • Project managers

Exam Formats

No exam included.

Course Delivery

This course is available in the following formats:

  • Self-Paced Learning (Duration: 36 Hrs)

Course Syllabus


Introduction to Big Data Hadoop and Spark

  • Learning Objectives: Understand Big Data and the components of the Hadoop ecosystem such as HDFS. You will learn about the Hadoop cluster architecture, get an introduction to Spark, and understand the difference between batch processing and real-time processing.
  • Topics:
  • What is Big Data?
  • Big Data Customer Scenarios
  • Limitations and Solutions of Existing Data Analytics Architecture with Uber Use Case
  • How Hadoop Solves the Big Data Problem
  • What is Hadoop?
  • Hadoop’s Key Characteristics
  • Hadoop Ecosystem and HDFS
  • Hadoop Core Components
  • Rack Awareness and Block Replication
  • YARN and its Advantage
  • Hadoop Cluster and its Architecture
  • Hadoop: Different Cluster Modes
  • Hadoop Terminal Commands
  • Big Data Analytics with Batch & Real-time Processing
  • Why is Spark Needed?
  • What is Spark?
  • How Spark Differs from Other Frameworks
  • Spark at Yahoo!

Introduction to Scala for Apache Spark

Learning Objectives: Learn the basics of Scala that are required for programming Spark applications. You will also learn about the basic constructs of Scala such as variable types, control structures, collections such as Array, ArrayBuffer, Map, Lists, and many more.

  • Topics:
  • What is Scala?
  • Why Scala for Spark?
  • Scala in Other Frameworks
  • Introduction to Scala REPL
  • Basic Scala Operations
  • Variable Types in Scala
  • Control Structures in Scala
  • Foreach Loop, Functions, and Procedures
  • Collections in Scala: Array, ArrayBuffer, Map, Tuples, Lists, and more
  • Hands-on:
  • Scala REPL Detailed Demo (see the sketch below)
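
To make the constructs above concrete, here is a minimal sketch of Scala variables, control structures, and collections, runnable in the Scala REPL; all names and values are illustrative only:

```scala
// Immutable vs. mutable variables
val greeting: String = "Hello, Spark"
var counter: Int = 0

// Control structure: for loop with a guard condition
for (i <- 1 to 5 if i % 2 == 1) println(s"odd: $i")

// Collections: Array, ArrayBuffer, Map, List, and a Tuple
import scala.collection.mutable.ArrayBuffer
val nums  = Array(1, 2, 3)
val buf   = ArrayBuffer("a", "b"); buf += "c"
val ages  = Map("alice" -> 30, "bob" -> 25)
val langs = List("Scala", "Java", "Python")
val pair  = ("spark", 2)

// foreach loop, and a simple function
langs.foreach(println)
def square(x: Int): Int = x * x
println(square(counter + 4))
```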

Functional Programming and OOPs Concepts in Scala

Learning Objectives: In this module, you will learn about object-oriented programming and functional programming techniques in Scala.

  • Topics:
  • Functional Programming
  • Higher Order Functions
  • Anonymous Functions
  • Classes in Scala
  • Getters and Setters
  • Custom Getters and Setters
  • Properties with only Getters
  • Auxiliary Constructor and Primary Constructor
  • Singletons
  • Extending a Class
  • Overriding Methods
  • Traits as Interfaces and Layered Traits
  • Hands-on (see the sketch below):
  • OOPs Concepts
  • Functional Programming
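
A short sketch of how these OOP and FP pieces fit together; the Employee and Payroll names are hypothetical, not taken from the course material:

```scala
// A class with a primary constructor, custom getter/setter, and an auxiliary constructor
class Employee(val name: String, private var _salary: Double) {
  def salary: Double = _salary                      // custom getter
  def salary_=(s: Double): Unit = {                 // custom setter with validation
    require(s >= 0); _salary = s
  }
  def this(name: String) = this(name, 0.0)          // auxiliary constructor
}

trait Auditable {                                   // trait used as an interface
  def audit(msg: String): Unit = println(s"AUDIT: $msg")
}

object Payroll extends Auditable {                  // singleton object mixing in a trait
  // Higher-order function: takes a function Double => Double as a parameter
  def applyRaise(staff: List[Employee], raise: Double => Double): Unit =
    staff.foreach { e => e.salary = raise(e.salary); audit(e.name) }
}

val staff = List(new Employee("Ada", 100.0), new Employee("Alan"))
Payroll.applyRaise(staff, s => s * 1.1)             // anonymous function argument
```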

Deep Dive into Apache Spark Framework

Learning Objectives: Understand Apache Spark and learn how to develop Spark applications. At the end, you will learn how to perform data ingestion using Sqoop.

  • Topics:
  • Spark’s Place in the Hadoop Ecosystem
  • Spark Components & Its Architecture
  • Spark Deployment Modes
  • Introduction to Spark Shell
  • Writing your first Spark Job Using SBT
  • Submitting Spark Job
  • Spark Web UI
  • Data Ingestion using Sqoop
  • Hands-on (see the sketch below):
  • Building and Running Spark Application
  • Spark Application Web UI
  • Configuring Spark Properties
  • Data ingestion using Sqoop
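
As a rough illustration of the "first Spark job" flow (not the course's exact exercise), here is a minimal application that could be built with SBT and run with spark-submit, assuming spark-core is declared as a dependency:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FirstSparkJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("FirstSparkJob")   // the name shown in the Spark Web UI
      .setMaster("local[*]")         // local deployment mode; omit when submitting to a cluster
    val sc = new SparkContext(conf)

    val data = sc.parallelize(1 to 1000)   // distribute a local collection as an RDD
    println(s"sum = ${data.sum()}")        // an action triggers the job; inspect it in the Web UI

    sc.stop()
  }
}
```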

Playing with Spark RDDs

Learning Objectives: Get an insight into Spark RDDs and related manipulations for implementing business logic (transformations, actions, and functions performed on RDDs).

  • Topics:
  • Challenges in Existing Computing Methods
  • Probable Solution & How RDD Solves the Problem
  • What is an RDD? Its Operations, Transformations & Actions
  • Data Loading and Saving Through RDDs
  • Key-Value Pair RDDs
  • Other Pair RDDs, Two Pair RDDs
  • RDD Lineage
  • RDD Persistence
  • WordCount Program Using RDD Concepts
  • RDD Partitioning & How It Helps Achieve Parallelization
  • Passing Functions to Spark
  • Hands-on (see the sketch below):
  • Loading data in RDDs
  • Saving data through RDDs
  • RDD Transformations
  • RDD Actions and Functions
  • RDD Partitions
  • WordCount through RDDs
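
A minimal WordCount sketch tying together the RDD operations listed above; the HDFS paths are assumptions, not paths used by the course:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("WordCount").setMaster("local[*]"))

val lines = sc.textFile("hdfs:///data/input.txt")   // data loading (hypothetical path)
val counts = lines
  .flatMap(_.split("\\s+"))                         // transformation: lines to words
  .map(word => (word, 1))                           // key-value pair RDD
  .reduceByKey(_ + _)                               // aggregate counts per word
  .cache()                                          // RDD persistence

counts.take(10).foreach(println)                    // action
counts.saveAsTextFile("hdfs:///data/wordcounts")    // saving data through RDDs
println(counts.toDebugString)                       // inspect the RDD lineage
```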

DataFrames and Spark SQL

Learning Objectives: In this module, you will learn about Spark SQL, which is used to process structured data with SQL queries. You will learn about DataFrames and Datasets in Spark SQL and the different kinds of SQL operations performed on them. You will also learn about Spark and Hive integration.

  • Topics:
  • Need for Spark SQL
  • What is Spark SQL?
  • Spark SQL Architecture
  • SQL Context in Spark SQL
  • User Defined Functions
  • DataFrames & Datasets
  • Interoperating with RDDs
  • JSON and Parquet File Formats
  • Loading Data through Different Sources
  • Spark – Hive Integration
  • Hands-on (see the sketch below):
  • Spark SQL: Creating DataFrames
  • Loading and Transforming Data through Different Sources
  • Stock Market Analysis
  • Spark-Hive Integration
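
A brief sketch of the DataFrame and Spark SQL workflow, assuming Spark 2.x; the file, view, and column names are illustrative:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkSQLDemo")
  .master("local[*]")
  .enableHiveSupport()           // Spark-Hive integration (requires a Hive-enabled build)
  .getOrCreate()

val stocks = spark.read.json("stocks.json")   // load a JSON source into a DataFrame
stocks.createOrReplaceTempView("stocks")      // register a view for SQL queries

val top = spark.sql(
  "SELECT symbol, AVG(close) AS avg_close FROM stocks GROUP BY symbol ORDER BY avg_close DESC")
top.show(5)

top.write.parquet("top_stocks.parquet")       // save the result in Parquet format
```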

Machine Learning using Spark MLlib

Learning Objectives: Learn why machine learning is needed, different machine learning techniques and algorithms, and Spark MLlib.

Topics:

  • Why Machine Learning?
  • What is Machine Learning?
  • Where is Machine Learning Used?
  • Use Case: Face Detection
  • Different Types of Machine Learning Techniques
  • Introduction to MLlib
  • Features of MLlib and MLlib Tools
  • Various ML algorithms supported by MLlib

Deep Dive into Spark MLlib

Learning Objectives: Implement various algorithms supported by MLlib such as Linear Regression, Decision Tree, Random Forest and many more.

Topics:

  • Supervised Learning: Linear Regression, Logistic Regression, Decision Tree, Random Forest
  • Unsupervised Learning: K-Means Clustering & How It Works with MLlib
  • Analysis on US Election Data using MLlib (K-Means)
  • Hands-on (see the sketch below):
  • Machine Learning with MLlib
  • K-Means Clustering
  • Linear Regression
  • Logistic Regression
  • Decision Tree
  • Random Forest
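
As one hedged example of the algorithms above, here is a K-Means sketch using the DataFrame-based MLlib API (org.apache.spark.ml); the tiny two-cluster dataset is synthetic:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("KMeansDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Synthetic 2-D points forming two obvious clusters
val df = Seq((1.0, 1.1), (1.2, 0.9), (9.0, 9.2), (8.8, 9.1)).toDF("x", "y")

// Assemble raw columns into the "features" vector column MLlib expects
val features = new VectorAssembler()
  .setInputCols(Array("x", "y"))
  .setOutputCol("features")
  .transform(df)

val model = new KMeans().setK(2).setSeed(42L).fit(features)
model.clusterCenters.foreach(println)
model.transform(features).show()   // each row gets a "prediction" cluster id
```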

Understanding Apache Kafka and Apache Flume

Learning Objectives: Understand Kafka and its architecture. Learn about Kafka clusters and how to configure different types of Kafka clusters. Get introduced to Apache Flume, its architecture, and how it integrates with Apache Kafka for event processing. Finally, learn how to ingest streaming data using Flume.

Topics:

  • Need for Kafka
  • What is Kafka?
  • Core Concepts of Kafka
  • Kafka Architecture
  • Where is Kafka Used?
  • Understanding the Components of Kafka Cluster
  • Configuring Kafka Cluster
  • Kafka Producer and Consumer Java API
  • Need for Apache Flume
  • What is Apache Flume?
  • Basic Flume Architecture
  • Flume Sources
  • Flume Sinks
  • Flume Channels
  • Flume Configuration
  • Integrating Apache Flume and Apache Kafka
  • Hands-on (see the sketch below):
  • Configuring Single Node Single Broker Cluster
  • Configuring Single Node Multi Broker Cluster
  • Producing and consuming messages
  • Flume Commands
  • Setting up Flume Agent
  • Streaming Twitter Data into HDFS
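
A minimal sketch of producing messages with the Kafka Java client API from Scala; the broker address and topic name are assumptions for a single-node, single-broker setup:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")   // the single broker
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("events", "key1", "hello kafka"))
producer.close()
```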

Apache Spark Streaming - Processing Multiple Batches

Learning Objectives: Work with Spark Streaming, which is used to build scalable, fault-tolerant streaming applications. Learn about DStreams and the various transformations performed on streaming data, and get to know commonly used streaming operators such as sliding window operators and stateful operators (see the sketch after the topic list).

Topics:

  • Drawbacks in Existing Computing Methods
  • Why is Streaming Necessary?
  • What is Spark Streaming?
  • Spark Streaming Features
  • Spark Streaming Workflow
  • How Uber Uses Streaming Data
  • Streaming Context & DStreams
  • Transformations on DStreams
  • Windowed Operators and Why They Are Useful
  • Important Windowed Operators
  • Slice, Window and ReduceByWindow Operators
  • Stateful Operators
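
A short sketch of the windowed operators above using the DStream API; the socket source, port, and durations are illustrative assumptions:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("WindowDemo").setMaster("local[2]")
val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second batch interval
ssc.checkpoint("checkpoint")                        // needed by stateful operators

val lines = ssc.socketTextStream("localhost", 9999) // hypothetical text source
val words = lines.flatMap(_.split(" ")).map((_, 1))

// Sliding window: count words over the last 30 seconds, sliding every 10 seconds
val windowed = words.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
windowed.print()

ssc.start()
ssc.awaitTermination()
```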

Apache Spark Streaming - Data Sources

Learning Objectives: In this module, you will learn about different streaming data sources such as Kafka and Flume. At the end of the module, you will be able to create a Spark Streaming application.

Topics:

  • Apache Spark Streaming: Data Sources
  • Streaming Data Source Overview
  • Apache Flume and Apache Kafka Data Sources
  • Example: Using a Kafka Direct Data Source
  • Perform Twitter Sentiment Analysis Using Spark Streaming
  • Hands-on (see the sketch below):
  • Different Streaming Data Sources
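
A hedged sketch of reading from a Kafka direct data source with the spark-streaming-kafka-0-10 integration; the broker, group id, and topic are assumptions:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val ssc = new StreamingContext(
  new SparkConf().setAppName("KafkaDirect").setMaster("local[2]"), Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "spark-demo"
)

// Direct stream: Spark tracks Kafka offsets itself rather than using a receiver
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

stream.map(_.value).print()   // print message values from the "events" topic

ssc.start()
ssc.awaitTermination()
```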

FAQs


What if I miss a class of Apache Spark training?

"You will never miss a lecture at Upskill Yourself! You can choose either of the two options:

> View the recorded session of the class available in your LMS.

> You can attend the missed session, in any other live batch."

What if I have queries after I complete this course?

Your access to the Support Team is for a lifetime and is available 24/7. The team will help you resolve queries during and after the course.

Can I attend a demo session before enrollment?

We limit the number of participants in a live session to maintain quality standards, so participation in a live class without enrollment is unfortunately not possible. However, you can go through the sample class recording; it will give you a clear insight into how the classes are conducted, the quality of the instructors, and the level of interaction in a class.

Who are the instructors of online Spark training?

All the instructors at Upskill Yourself are industry practitioners with a minimum of 10-12 years of relevant IT experience. They are subject matter experts trained by Upskill Yourself to provide an excellent learning experience to participants.

How to learn Scala For Spark?

Scala stands for "Scalable Language". Upskill Yourself's Spark and Scala training program is what you need if you are looking to master Spark with Scala. Our course starts from the basics and covers every module necessary. With our instructor-led sessions and a 24x7 support system, we make sure you achieve your learning objectives.

How to learn Apache Spark?

Our vast repository of guides, tutorials, and this full-fledged course will help you not only understand Spark but also master it. You can check out our blogs to get started with Spark and build foundational knowledge. Our tutorials will then help you take a deeper dive into the underlying concepts. After this, our Spark and Scala certification training will help you truly master the technology with instructor-led sessions and real-world hands-on projects.
