Taming Big Data with Apache Spark 3 and Python - Hands On!
- Offered by UDEMY
Taming Big Data with Apache Spark 3 and Python - Hands On! at UDEMY Overview
| Particulars | Details |
| --- | --- |
| Duration | 7 hours |
| Total fee | ₹599 |
| Mode of learning | Online |
| Difficulty level | Intermediate |
| Credential | Certificate |
Taming Big Data with Apache Spark 3 and Python - Hands On! at UDEMY Highlights
- Compatible with Mobile and TV
- Earn a Certificate on successful completion
- Get Full Lifetime Access
- Learn from Sundog Education by Frank Kane
Taming Big Data with Apache Spark 3 and Python - Hands On! at UDEMY Course details
- People with some software development background who want to learn the hottest technology in big data analysis will want to check this out. This course focuses on Spark from a software development standpoint; we introduce some machine learning and data mining concepts along the way, but that's not the focus. If you want to learn how to use Spark to carve up huge datasets and extract meaning from them, then this course is for you.
- If you've never written a computer program or a script before, this course isn't for you - yet. I suggest starting with a Python course first, if programming is new to you.
- If your software development job involves, or will involve, processing large amounts of data, you need to know about Spark.
- If you're training for a new career in data science or big data, Spark is an important part of it.
- Use DataFrames and Structured Streaming in Spark 3
- Frame big data analysis problems as Spark problems
- Use Amazon's Elastic MapReduce service to run your job on a cluster with Hadoop YARN
- Install and run Apache Spark on a desktop computer or on a cluster
- Use Spark's Resilient Distributed Datasets to process and analyze large data sets across many CPUs
- Implement iterative algorithms such as breadth-first search using Spark
- Use the MLLib machine learning library to answer common data mining questions
- Understand how Spark SQL lets you work with structured data
- Understand how Spark Streaming lets you process continuous streams of data in real time
- Tune and troubleshoot large jobs running on a cluster
- Share information between nodes on a Spark cluster using broadcast variables and accumulators
- Understand how the GraphX library helps with network analysis problems
New! Updated for Spark 3, with a hands-on structured streaming example. "Big data" analysis is a hot and highly valuable skill, and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think.

Learn and master the art of framing data analysis problems as Spark problems through over 15 hands-on examples, and then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

- Learn the concepts of Spark's Resilient Distributed Datasets
- Develop and run Spark jobs quickly using Python
- Translate complex analysis problems into iterative or multi-stage Spark scripts
- Scale up to larger data sets using Amazon's Elastic MapReduce service
- Understand how Hadoop YARN distributes Spark across computing clusters
- Learn about other Spark technologies, like Spark SQL, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes worth of information in the cloud in a matter of minutes. This course uses the familiar Python programming language; if you'd rather use Scala to get the best performance out of Spark, see my "Apache Spark with Scala - Hands On with Big Data" course instead.

We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process. We'll analyze a social graph of superheroes, learn who the "most popular" superhero is, and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together, both on your own system and in the cloud using Amazon's Elastic MapReduce service. 5 hours of video content is included, with over 15 real examples of increasing complexity you can build, run, and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

Wrangling big data with Apache Spark is an important skill in today's technical world. Enroll now!

"I studied 'Taming Big Data with Apache Spark and Python' with Frank Kane, and it helped me build a great platform for Big Data as a Service for my company. I recommend the course!" - Cleuton Sampaio De Melo Jr.
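As a concrete taste of what framing a problem as a Spark job looks like, here is a minimal PySpark sketch of a ratings histogram over the MovieLens data set, similar in spirit to the course's first hands-on example; the ml-100k/u.data path and the local master setting are assumptions for illustration, not the course's exact script.

```python
# Count how many of each 1-5 star rating appear in the MovieLens u.data file
# (tab-separated: userID, movieID, rating, timestamp). The file path is illustrative.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
sc = SparkContext(conf=conf)

lines = sc.textFile("ml-100k/u.data")
ratings = lines.map(lambda line: line.split()[2])   # extract the rating field
result = ratings.countByValue()                     # action: returns a plain Python dict

for rating, count in sorted(result.items()):
    print(rating, count)
```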
Taming Big Data with Apache Spark 3 and Python - Hands On! at UDEMY Curriculum
Getting Started with Spark
Introduction
How to Use This Course
Udemy 101: Getting the Most From This Course
[Activity] Getting Set Up: Installing Python, a JDK, Spark, and its Dependencies.
[Activity] Installing Spark
[Activity] Installing the MovieLens Movie Rating Dataset
[Activity] Run your first Spark program! Ratings histogram example.
Spark Basics and Simple Examples
What's new in Spark 3?
Introduction to Spark
The Resilient Distributed Dataset (RDD)
Ratings Histogram Walkthrough
Key/Value RDD's, and the Average Friends by Age Example
[Activity] Running the Average Friends by Age Example
Filtering RDD's, and the Minimum Temperature by Location Example
[Activity] Running the Minimum Temperature Example, and Modifying it for Maximums
[Activity] Running the Maximum Temperature by Location Example
[Activity] Counting Word Occurrences using flatmap()
[Activity] Improving the Word Count Script with Regular Expressions
[Activity] Sorting the Word Count Results
[Exercise] Find the Total Amount Spent by Customer
[Exercise] Check your Results, and Now Sort them by Total Amount Spent.
Check Your Sorted Implementation and Results Against Mine.
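For a sense of the pattern this section builds up to, here is a minimal sketch of a flatMap()-based word count with regular-expression cleanup and frequency sorting; the book.txt file name is an assumption, and the course's own scripts will differ in detail.

```python
# Count word occurrences with flatMap(), normalize words with a regular
# expression, then sort by frequency. The input file name is illustrative.
import re
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("WordCount")
sc = SparkContext(conf=conf)

lines = sc.textFile("book.txt")
words = lines.flatMap(lambda line: re.split(r"\W+", line.lower()))
counts = (words.filter(lambda w: w != "")
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b))

# Flip to (count, word) so sortByKey() orders results by frequency.
for count, word in counts.map(lambda wc: (wc[1], wc[0])).sortByKey(ascending=False).take(20):
    print(word, count)
```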
Advanced Examples of Spark Programs
[Activity] Find the Most Popular Movie
[Activity] Use Broadcast Variables to Display Movie Names Instead of ID Numbers
Find the Most Popular Superhero in a Social Graph
[Activity] Run the Script - Discover Who the Most Popular Superhero is!
Superhero Degrees of Separation: Introducing Breadth-First Search
Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark
[Activity] Superhero Degrees of Separation: Review the Code and Run it
Item-Based Collaborative Filtering in Spark, cache(), and persist()
[Activity] Running the Similar Movies Script using Spark's Cluster Manager
[Exercise] Improve the Quality of Similar Movies
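The broadcast-variable pattern covered in this section can be sketched roughly as follows; the ml-100k/u.item and u.data paths and the "top 10" cutoff are assumptions for illustration, not the course's exact code.

```python
# Use a broadcast variable to map movie IDs to titles while counting the most
# popular movies, instead of joining against a second RDD. Paths are illustrative.
from pyspark import SparkConf, SparkContext

def load_movie_names():
    names = {}
    with open("ml-100k/u.item", encoding="ISO-8859-1") as f:
        for line in f:
            fields = line.split("|")
            names[int(fields[0])] = fields[1]
    return names

conf = SparkConf().setMaster("local").setAppName("PopularMovies")
sc = SparkContext(conf=conf)

# Ship the small ID -> title lookup table to every executor once.
name_dict = sc.broadcast(load_movie_names())

lines = sc.textFile("ml-100k/u.data")
movie_counts = (lines.map(lambda l: (int(l.split()[1]), 1))
                     .reduceByKey(lambda a, b: a + b))

top10 = (movie_counts.map(lambda mc: (name_dict.value[mc[0]], mc[1]))
                     .sortBy(lambda pair: pair[1], ascending=False)
                     .take(10))
for title, count in top10:
    print(title, count)
```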
Running Spark on a Cluster
Introducing Elastic MapReduce
[Activity] Setting up your AWS / Elastic MapReduce Account and Setting Up PuTTY
Partitioning
Create Similar Movies from One Million Ratings - Part 1
[Activity] Create Similar Movies from One Million Ratings - Part 2
Create Similar Movies from One Million Ratings - Part 3
Troubleshooting Spark on a Cluster
More Troubleshooting, and Managing Dependencies
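The partitioning point made in this section can be illustrated with a rough sketch like the one below; the S3 path and the choice of 100 partitions are placeholders, not values taken from the course.

```python
# Partition a large key/value RDD before an expensive self-join so the work is
# spread evenly across executors on the cluster. Path and partition count are placeholders.
from pyspark import SparkConf, SparkContext

conf = SparkConf()          # on EMR, master and memory settings come from spark-submit
sc = SparkContext(conf=conf)

lines = sc.textFile("s3://your-bucket/ml-1m/ratings.dat")
ratings = (lines.map(lambda l: l.split("::"))
                .map(lambda f: (int(f[0]), (int(f[1]), float(f[2])))))

# partitionBy() before the self-join keeps any single executor from receiving
# all of the shuffled data at once.
partitioned = ratings.partitionBy(100)
joined = partitioned.join(partitioned)
print(joined.count())
```

Without an explicit partitioning step, a wide operation like this self-join can funnel too much data through too few tasks, which is the kind of failure the troubleshooting lectures deal with.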
SparkSQL, DataFrames, and DataSets
Introducing SparkSQL
Executing SQL commands and SQL-style functions on a DataFrame
Using DataFrames instead of RDD's
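A small sketch of the SparkSQL and DataFrame ideas in this section; the fakefriends.csv file and its column names are assumptions for illustration.

```python
# Run SQL-style queries and equivalent DataFrame operations over structured data.
# The CSV file and its columns (id, name, age, friends) are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("SparkSQLExample").getOrCreate()

people = (spark.read
               .option("header", "true")
               .option("inferSchema", "true")
               .csv("fakefriends.csv"))

# SQL on a temporary view...
people.createOrReplaceTempView("people")
spark.sql("SELECT * FROM people WHERE age BETWEEN 13 AND 19").show()

# ...or the equivalent DataFrame API calls.
(people.groupBy("age")
       .agg(F.round(F.avg("friends"), 2).alias("avg_friends"))
       .orderBy("age")
       .show())

spark.stop()
```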
Other Spark Technologies and Libraries
Introducing MLLib
[Activity] Using MLLib to Produce Movie Recommendations
Analyzing the ALS Recommendations Results
Using DataFrames with MLLib
[Activity] Spark SQL
Spark Streaming
[Activity] Structured Streaming in Python
GraphX
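One way the DataFrame-based MLLib recommendation workflow covered above might look, sketched under the assumption of a MovieLens-style ratings file; column names, hyperparameters, and the file path are illustrative rather than taken from the course.

```python
# Produce movie recommendations with ALS via the DataFrame-based MLlib API.
# Schema, hyperparameters, and file path are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, LongType
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("ALSExample").getOrCreate()

schema = StructType([
    StructField("userID", IntegerType(), True),
    StructField("movieID", IntegerType(), True),
    StructField("rating", IntegerType(), True),
    StructField("timestamp", LongType(), True),
])
ratings = spark.read.option("sep", "\t").schema(schema).csv("ml-100k/u.data")

als = ALS(maxIter=5, regParam=0.01,
          userCol="userID", itemCol="movieID", ratingCol="rating")
model = als.fit(ratings)

# Top 10 movie recommendations for every user.
model.recommendForAllUsers(10).show(truncate=False)

spark.stop()
```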
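And a minimal Structured Streaming sketch in the spirit of the course's Spark 3 update: a streaming word count over text files landing in a watched directory. The "logs" directory name is an assumption for illustration.

```python
# Count words from text files as they arrive in a watched directory, printing a
# running total to the console. The directory name is illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("StructuredStreamingExample").getOrCreate()

lines = spark.readStream.text("logs")        # streaming DataFrame with a single "value" column
words = lines.select(F.explode(F.split(lines.value, r"\s+")).alias("word"))
counts = words.groupBy("word").count()

query = (counts.writeStream
               .outputMode("complete")       # emit the full updated table each trigger
               .format("console")
               .start())
query.awaitTermination()
```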
You Made It! Where to Go from Here.
Learning More about Spark and Data Science
Bonus Lecture: Discounts on my other courses!
Bonus Lecture: More courses to explore!