
Data Science: Transformers for Natural Language Processing 

  • Offered by UDEMY

Data Science: Transformers for Natural Language Processing at UDEMY - Overview

ChatGPT, GPT-4, BERT, Deep Learning, Machine Learning, & NLP with Hugging Face, Attention in Python, TensorFlow, PyTorch

Duration

18 hours

Total fee

3,099

Mode of learning

Online

Credential

Certificate

Data Science: Transformers for Natural Language Processing at UDEMY - Highlights

  • 30-Day Money-Back Guarantee
  • Certificate of completion
  • Full lifetime access
  • Learn from 1 article

Data Science: Transformers for Natural Language Processing at UDEMY - Course details

Skills you will learn
What are the course deliverables?
  • Apply transformers to real-world tasks with just a few lines of code (see the sketch after this list)
  • Fine-tune transformers on your own datasets with transfer learning
  • Sentiment analysis, spam detection, text classification
  • NER (named entity recognition), parts-of-speech tagging
  • Build your own article spinner for SEO
  • Generate believable human-like text
  • Neural machine translation and text summarization
  • Question-answering (e.g. SQuAD)
  • Zero-shot classification
  • Understand self-attention and in-depth theory behind transformers
  • Implement transformers from scratch
  • Use transformers with both TensorFlow and PyTorch
  • Understand BERT, GPT, GPT-2, and GPT-3, and where to apply them
  • Understand encoder, decoder, and seq2seq architectures
  • Master the Hugging Face Python library
  • Understand important foundations for OpenAI ChatGPT, GPT-4, DALL-E, Midjourney, and Stable Diffusion
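
The deliverables above promise real-world results with just a few lines of code. As a rough illustration (not taken from the course materials), here is a minimal sketch using the Hugging Face transformers pipeline API; the model checkpoints are the library's defaults, which may change between versions, and the printed outputs are only indicative.

    # Minimal sketch of the "few lines of code" idea with Hugging Face pipelines.
    # Model checkpoints are the library defaults; outputs shown are illustrative.
    from transformers import pipeline

    # Sentiment analysis with a ready-made pre-trained model
    sentiment = pipeline("sentiment-analysis")
    print(sentiment("This course made attention finally click for me."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99}]

    # Zero-shot classification: label a document with categories it was never trained on
    classifier = pipeline("zero-shot-classification")
    print(classifier(
        "The central bank raised interest rates by 50 basis points.",
        candidate_labels=["economics", "sports", "technology"],
    ))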
More about this course
Ever wondered how AI technologies like OpenAI ChatGPT, GPT-4, DALL-E, Midjourney, and Stable Diffusion really work? In this course, you will learn the foundations of these groundbreaking applications.

Hello friends!

Welcome to Data Science: Transformers for Natural Language Processing.

Ever since Transformers arrived on the scene, deep learning hasn't been the same:

  • Machine learning is able to generate text essentially indistinguishable from that created by humans.
  • We've reached new state-of-the-art performance in many NLP tasks, such as machine translation, question-answering, entailment, named entity recognition, and more.
  • We've created multi-modal (text and image) models that can generate amazing art using only a text prompt.
  • We've solved a longstanding problem in molecular biology known as "protein structure prediction".

In this course, you will learn very practical skills for applying transformers and, if you want, the detailed theory behind how transformers and attention work. This is different from most other resources, which only cover the former.

The course is split into three major parts:

  • Using Transformers
  • Fine-Tuning Transformers
  • Transformers In-Depth

PART 1: Using Transformers

In this section, you will learn how to use transformers which were trained for you. This costs millions of dollars to do, so it's not something you want to try by yourself! We'll see how these prebuilt models can already be used for a wide array of tasks, including:

  • text classification (e.g. spam detection, sentiment analysis, document categorization)
  • named entity recognition
  • text summarization
  • machine translation
  • question-answering
  • generating (believable) text
  • masked language modeling (article spinning)
  • zero-shot classification

This is already very practical. If you need to do sentiment analysis, document categorization, entity recognition, translation, summarization, etc. on documents at your workplace or for your clients, you already have the most powerful state-of-the-art models at your fingertips with very few lines of code. One of the most amazing applications is "zero-shot classification", where you will observe that a pre-trained model can categorize your documents even without any training at all.

PART 2: Fine-Tuning Transformers

In this section, you will learn how to improve the performance of transformers on your own custom datasets. By using "transfer learning", you can leverage the millions of dollars of training that have already gone into making transformers work very well. You'll see that you can fine-tune a transformer with relatively little work (and little cost). We'll cover how to fine-tune transformers for the most practical tasks in the real world, like text classification (sentiment analysis, spam detection), entity recognition, and machine translation.
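
To make the transfer-learning workflow described above concrete, here is a hedged sketch of a typical Hugging Face fine-tuning loop: load a dataset, tokenize it, attach a classification head to a pre-trained checkpoint, and train. The dataset (GLUE SST-2) and checkpoint (distilbert-base-uncased) are illustrative assumptions, not necessarily what the course uses, and argument names can differ slightly across library versions.

    # Sketch of fine-tuning a pre-trained transformer for sentiment classification.
    # Dataset and checkpoint are illustrative choices, not the course's own.
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    checkpoint = "distilbert-base-uncased"
    raw_datasets = load_dataset("glue", "sst2")
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    def tokenize(batch):
        # Convert raw sentences into token IDs the model understands
        return tokenizer(batch["sentence"], truncation=True)

    tokenized = raw_datasets.map(tokenize, batched=True)

    # Start from pre-trained weights and attach a fresh 2-class classification head
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    training_args = TrainingArguments(
        output_dir="finetuned-sentiment",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["validation"],
        tokenizer=tokenizer,  # lets the Trainer pad batches dynamically
    )
    trainer.train()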
PART 3: Transformers In-Depth

In this section, you will learn how transformers really work. The previous sections are nice, but a little too nice. Libraries are OK for people who just want to get the job done, but they don't work if you want to do anything new or interesting. Let's be clear: this is very practical. How practical, you might ask? Well, this is where the big bucks are. Those who have a deep understanding of these models and can do things no one has ever done before are in a position to command higher salaries and prestigious titles. Machine learning is a competitive field, and a deep understanding of how things work can be the edge you need to come out on top.

We'll look at the inner workings of encoders, decoders, encoder-decoders, BERT, GPT, GPT-2, GPT-3, GPT-3.5, ChatGPT, and GPT-4 (for the latter, we are limited to what OpenAI has revealed). We'll also look at how to implement transformers from scratch. As the great Richard Feynman once said, "What I cannot create, I do not understand."

SUGGESTED PREREQUISITES:

  • Decent Python coding skills
  • Deep learning with CNNs and RNNs useful but not required
  • Deep learning with Seq2Seq models useful but not required
  • For the in-depth section: understanding the theory behind CNNs, RNNs, and seq2seq is very useful

UNIQUE FEATURES:

  • Every line of code explained in detail - email me any time if you disagree
  • No wasted time "typing" on the keyboard like other courses - let's be honest, nobody can really write code worth learning about in just 20 minutes from scratch
  • Not afraid of university-level math - get important details about algorithms that other courses leave out

Thank you for reading, and I hope to see you soon!
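
As a small taste of the in-depth and from-scratch material described above, here is a minimal NumPy sketch of scaled dot-product attention, the core operation behind self-attention. It is a simplified illustration only; the course itself works in TensorFlow and PyTorch.

    # Scaled dot-product attention, sketched in plain NumPy for clarity.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V, mask=None):
        # Q, K, V: (seq_len, d_k) arrays of queries, keys, and values
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)            # how well each query matches each key
        if mask is not None:
            scores = np.where(mask, scores, -1e9)  # e.g. a causal mask blocks future positions
        # Softmax over the key dimension turns scores into attention weights
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        return weights @ V                         # each output is a weighted average of values

    # Tiny usage example with random inputs
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)

Stacking this operation across multiple heads and layers, together with positional encodings, is essentially what the encoder and decoder architectures covered in the theory section build up.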

Data Science: Transformers for Natural Language Processing at UDEMY - Curriculum

Welcome

Introduction

Outline

Getting Setup

Get Your Hands Dirty, Practical Coding Experience, Data Links

How to use Github & Extra Coding Tips (Optional)

Where to get the code, notebooks, and data

Are You Beginner, Intermediate, or Advanced? All are OK!

How to Succeed in This Course

Temporary 403 Errors

Beginner's Corner

Beginner's Corner Section Introduction

From RNNs to Attention and Transformers - Intuition

Sentiment Analysis

Sentiment Analysis in Python

Text Generation

Text Generation in Python

Masked Language Modeling (Article Spinner)

Masked Language Modeling (Article Spinner) in Python

Named Entity Recognition (NER)

Named Entity Recognition (NER) in Python

Text Summarization

Text Summarization in Python

Neural Machine Translation

Neural Machine Translation in Python

Question Answering

Question Answering in Python

Zero-Shot Classification

Zero-Shot Classification in Python

Beginner's Corner Section Summary

Suggestion Box

Fine-Tuning (Intermediate)

Fine-Tuning Section Introduction

Text Preprocessing and Tokenization Review

Models and Tokenizers

Models and Tokenizers in Python

Transfer Learning & Fine-Tuning (pt 1)

Transfer Learning & Fine-Tuning (pt 2)

Transfer Learning & Fine-Tuning (pt 3)

Fine-Tuning Sentiment Analysis and the GLUE Benchmark

Fine-Tuning Sentiment Analysis in Python

Fine-Tuning Transformers with Custom Dataset

Hugging Face AutoConfig

Fine-Tuning with Multiple Inputs (Textual Entailment)

Fine-Tuning Transformers with Multiple Inputs in Python

Fine-Tuning Section Summary

Named Entity Recognition (NER) and POS Tagging (Intermediate)

Token Classification Section Introduction

Data & Tokenizer (Code Preparation)

Data & Tokenizer (Code)

Target Alignment (Code Preparation)

Create Tokenized Dataset (Code Preparation)

Target Alignment (Code)

Data Collator (Code Preparation)

Data Collator (Code)

Metrics (Code Preparation)

Metrics (Code)

Model and Trainer (Code Preparation)

Model and Trainer (Code)

POS Tagging & Custom Datasets (Exercise Prompt)

POS Tagging & Custom Datasets (Solution)

Token Classification Section Summary

Seq2Seq and Neural Machine Translation (Intermediate)

Translation Section Introduction

Data & Tokenizer (Code Preparation)

Things Move Fast

Data & Tokenizer (Code)

Aside: Seq2Seq Basics (Optional)

Model Inputs (Code Preparation)

Model Inputs (Code)

Translation Metrics (BLEU Score & BERT Score) (Code Preparation)

Translation Metrics (BLEU Score & BERT Score) (Code)

Train & Evaluate (Code Preparation)

Train & Evaluate (Code)

Translation Section Summary

Question-Answering (Advanced)

Question-Answering Section Introduction

Exploring the Dataset (SQuAD)

Exploring the Dataset (SQuAD) in Python

Using the Tokenizer

Using the Tokenizer in Python

Aligning the Targets

Aligning the Targets in Python

Applying the Tokenizer

Applying the Tokenizer in Python

Question-Answering Metrics

Question-Answering Metrics in Python

From Logits to Answers

From Logits to Answers in Python

Computing Metrics

Computing Metrics in Python

Train and Evaluate

Train and Evaluate in Python

Question-Answering Section Summary

Transformers and Attention Theory (Advanced)

Theory Section Introduction

Basic Self-Attention

Self-Attention & Scaled Dot-Product Attention

Attention Efficiency

Attention Mask

Multi-Head Attention

Transformer Block

Positional Encodings

Encoder Architecture

Decoder Architecture

Encoder-Decoder Architecture

BERT

GPT

GPT-2

GPT-3

ChatGPT

GPT-4

Theory Section Summary

Implement Transformers From Scratch (Advanced)

Implementation Section Introduction

Encoder Implementation Plan & Outline

How to Implement Multihead Attention From Scratch

How to Implement the Transformer Block From Scratch

How to Implement Positional Encoding From Scratch

How to Implement Transformer Encoder From Scratch

Train and Evaluate Encoder From Scratch

How to Implement Causal Self-Attention From Scratch

How to Implement a Transformer Decoder (GPT) From Scratch

How to Train a Causal Language Model From Scratch

Implement a Seq2Seq Transformer From Scratch for Language Translation (pt 1)

Implement a Seq2Seq Transformer From Scratch for Language Translation (pt 2)

Implement a Seq2Seq Transformer From Scratch for Language Translation (pt 3)

Implementation Section Summary

Extras

Data Links

Setting Up Your Environment FAQ

Anaconda Environment Setup

How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow

Extra Help With Python Coding for Beginners FAQ

How to Code by Yourself (part 1)

How to Code by Yourself (part 2)

Proof that using Jupyter Notebook is the same as not using it

Effective Learning Strategies for Machine Learning FAQ

How to Succeed in this Course (Long Version)

Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced?

Machine Learning and AI Prerequisite Roadmap (pt 1)

Machine Learning and AI Prerequisite Roadmap (pt 2)

Appendix / FAQ Finale

What is the Appendix?

BONUS


Data Science: Transformers for Natural Language Processing at UDEMY - Faculty details

Lazy Programmer Team
Designation: Artificial Intelligence and Machine Learning Engineer
Lazy Programmer Inc.
Designation: Artificial Intelligence and Machine Learning Engineer
