IBM - Fundamentals of Scalable Data Science
- Offered by Coursera
Fundamentals of Scalable Data Science at Coursera Overview
Duration | 20 hours |
Start from | Start Now |
Total fee | Free |
Mode of learning | Online |
Difficulty level | Beginner |
Official Website | Explore Free Course |
Credential | Certificate |
Fundamentals of Scalable Data Science at Coursera Highlights
- 52% started a new career after completing these courses.
- 43% got a tangible career benefit from this course.
- Earn a shareable certificate upon completion.
Fundamentals of Scalable Data Science at Coursera Course details
- Apache Spark is the de-facto standard for large-scale data processing. This is the first course in a series of courses towards the IBM Advanced Data Science Specialization. We strongly believe that it is crucial for success to start by learning a scalable data science platform, since memory and CPU constraints are the most limiting factors when it comes to building advanced machine learning models.
- In this course we teach you the fundamentals of Apache Spark using Python and PySpark. We'll introduce Apache Spark in the first two weeks and learn how to apply it to basic exploratory and data pre-processing tasks in the last two weeks. Through these exercises you'll also be introduced to the most fundamental statistical measures and data visualization technologies (a minimal PySpark sketch of this kind of exploratory work is shown at the end of these course details).
- This gives you enough knowledge to take over the role of a data engineer in any modern environment, and it also gives you the basis for advancing your career towards data science.
- Please have a look at the full specialization curriculum:
- https://www.coursera.org/specializations/advanced-data-science-ibm
- If you choose to take this course and earn the Coursera course certificate, you will also earn an IBM digital badge. To find out more about IBM digital badges follow the link ibm.biz/badging.
- After completing this course, you will be able to:
- Describe how basic statistical measures are used to reveal patterns within the data
- Recognize data characteristics, patterns, trends, deviations or inconsistencies, and potential outliers
- Identify useful techniques for working with big data, such as dimension reduction and feature selection methods
- Use advanced tools and charting libraries to:
  - improve the efficiency of analysis of big data with partitioning and parallel analysis
  - visualize the data in a number of 2D and 3D formats (Box Plot, Run Chart, Scatter Plot, Pareto Chart, and Multidimensional Scaling)
- For successful completion of the course, the following prerequisites are recommended:
- Basic programming skills in Python
- Basic math
- Basic SQL (you can get it easily from https://www.coursera.org/learn/sql-data-science if needed)
- In order to complete this course, the following technologies will be used:
- (These technologies are introduced in the course as necessary so no previous knowledge is required.)
- Jupyter notebooks (brought to you by IBM Watson Studio for free)
- Apache Spark (brought to you by IBM Watson Studio for free)
- Python
- Some learners have reported that parts of the material in this course are too advanced. If you feel the same, please have a look at the following materials before starting this course; learners have reported that this really helps.
- Of course, you can also give this course a try first and then, if you need to, work through the following courses and materials. They are free...
- https://cognitiveclass.ai/learn/spark
- https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/f8982db1-5e55-46d6-a272-fd11b670be38/view?access_token=533a1925cd1c4c362aabe7b3336b3eae2a99e0dc923ec0775d891c31c5bbbc68
- This course takes four weeks, at 4-6 hours per week.
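As a rough illustration of the kind of work the course builds up to (not official course material), the following minimal PySpark sketch starts a SparkSession, loads a CSV file into a DataFrame, performs a simple pre-processing step, and computes a few basic exploratory statistics in parallel. The file name sensor_data.csv and the column temperature are assumptions made for the example.
```python
# Minimal, illustrative PySpark sketch (not taken from the course notebooks).
# The file name "sensor_data.csv" and the column "temperature" are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ExploratoryStats").getOrCreate()

# Load a CSV file into a DataFrame, letting Spark infer the schema.
df = spark.read.csv("sensor_data.csv", header=True, inferSchema=True)

# Simple pre-processing: drop rows with missing values in the column of interest.
df = df.dropna(subset=["temperature"])

# Basic exploratory statistics, computed in parallel across the partitions.
df.select(
    F.mean("temperature").alias("mean"),
    F.stddev("temperature").alias("stddev"),
    F.min("temperature").alias("min"),
    F.max("temperature").alias("max"),
).show()

spark.stop()
```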
Fundamentals of Scalable Data Science at Coursera Curriculum
Introduction to the course and grading environment
Course Overview and a warm welcome
Overview of technology used within the course
Intro to Apache Spark
Assignment and Exercise Environment Setup
IMPORTANT: How to submit your programming assignments
Challenges, terminology, methods and technology
Tools that support Big Data solutions
Data storage solutions
Parallel data processing strategies of Apache Spark
Programming language options on Apache Spark
Functional programming basics
Introduction to Cloudant
Resilient Distributed Datasets and DataFrames - Apache SparkSQL
OPTIONAL: Test Data Generator (data is provided for you already)
Apache Parquet (optional)
Create the data on your own (optional)
Data storage solutions and Apache Spark
Programming language options and functional programming
Apache SparkSQL and Cloudant
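The following hedged sketch illustrates the DataFrame and SparkSQL topics of this week: the same aggregation is expressed once with the DataFrame API and once with SQL over a temporary view. The in-memory sample rows and the view name washing are assumptions for illustration, not data from the course.
```python
# Illustrative sketch of DataFrames and SparkSQL (sample rows and names are assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkSQLIntro").getOrCreate()

# A small in-memory DataFrame; in the course the data would typically come
# from a store such as Cloudant or a Parquet file instead.
df = spark.createDataFrame(
    [("s1", 21.5), ("s2", 19.8), ("s1", 22.1)],
    ["sensor_id", "temperature"],
)

# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("washing")

# The same aggregation, expressed with the DataFrame API ...
df.groupBy("sensor_id").avg("temperature").show()

# ... and with SparkSQL.
spark.sql(
    "SELECT sensor_id, AVG(temperature) AS avg_temp FROM washing GROUP BY sensor_id"
).show()

spark.stop()
```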
Scaling Math for Statistics on Apache Spark
Overview of the week...
Averages
Standard deviation
Skewness
Kurtosis
Covariance, Covariance matrices, correlation
Multidimensional vector spaces
Exercise 2
Averages and standard deviation
Skewness and kurtosis
Covariance, correlation and multidimensional Vector Spaces
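All of the statistical measures listed for this week can be computed as Spark aggregations so that they scale with the data. The sketch below shows one possible way to do this with pyspark.sql.functions; the toy rows and the column names x and y are assumptions, not the course's data set.
```python
# Hedged sketch: the week's statistical measures as Spark aggregations.
# The toy data and the column names "x" and "y" are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ScalingStats").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0), (2.0, 3.5), (3.0, 5.1), (4.0, 7.9), (5.0, 11.2)],
    ["x", "y"],
)

# Mean, standard deviation, skewness, kurtosis, covariance and correlation,
# all computed in a single distributed aggregation.
df.select(
    F.mean("x").alias("mean_x"),
    F.stddev_pop("x").alias("stddev_x"),
    F.skewness("x").alias("skewness_x"),
    F.kurtosis("x").alias("kurtosis_x"),
    F.covar_pop("x", "y").alias("cov_xy"),
    F.corr("x", "y").alias("corr_xy"),
).show()

spark.stop()
```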
Data Visualization of Big Data
Overview of the week
Plotting with Apache Spark and Python's matplotlib
Dimensionality reduction
PCA
Exercise on Plotting
Exercise on PCA
Visualization and dimension reduction
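A short sketch of the two themes of this week, under the assumption of a small toy DataFrame: bringing the data to the driver for plotting with matplotlib, and dimensionality reduction with Spark ML's PCA. The column names, sample values and output file name are illustrative only.
```python
# Hedged sketch: plotting with matplotlib and dimensionality reduction with PCA.
# The toy data, column names and the output file name are assumptions.
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, PCA

spark = SparkSession.builder.appName("VizAndPCA").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0, 0.5), (2.0, 1.5, 1.0), (3.0, 3.5, 1.5), (4.0, 3.0, 2.0)],
    ["f1", "f2", "f3"],
)

# Plotting: bring a small (or sampled) DataFrame to the driver, then use matplotlib.
pdf = df.toPandas()
plt.scatter(pdf["f1"], pdf["f2"])
plt.xlabel("f1")
plt.ylabel("f2")
plt.savefig("scatter.png")

# Dimensionality reduction: assemble the columns into a feature vector,
# then fit a two-component PCA model and project the data onto it.
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
features = assembler.transform(df)

pca = PCA(k=2, inputCol="features", outputCol="pca_features")
model = pca.fit(features)
model.transform(features).select("pca_features").show(truncate=False)

spark.stop()
```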
Fundamentals of Scalable Data Science at Coursera Admission Process
Important Dates
Other courses offered by Coursera
Fundamentals of Scalable Data Science at Coursera Students Ratings & Reviews
- 4-52