DeepLearning.AI - Generative AI with Large Language Models
- Offered by Coursera
Generative AI with Large Language Models at Coursera Overview
- Duration: 16 hours
- Start from: Start Now
- Total fee: Free
- Mode of learning: Online
- Difficulty level: Intermediate
- Credential: Certificate
Generative AI with Large Language Models at Coursera Highlights
- Flexible deadlines: Reset deadlines in accordance with your schedule.
- Shareable Certificate: Earn a certificate upon completion.
- 100% online: Start instantly and learn on your own schedule.
- Intermediate Level: This is an intermediate course, so you should have some experience coding in Python to get the most out of it.
- Approx. 16 hours to complete
- Language: English (subtitles: English)
Generative AI with Large Language Models at Coursera Course details
- In Generative AI with Large Language Models (LLMs), you will learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.
- By taking this course, you'll learn to:
  - Deeply understand generative AI, describing the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection to performance evaluation and deployment
  - Describe in detail the transformer architecture that powers LLMs, how they are trained, and how fine-tuning enables LLMs to be adapted to a variety of specific use cases
  - Use empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements
  - Apply state-of-the-art training, tuning, inference, tools, and deployment methods to maximize the performance of models within the specific constraints of your project
  - Discuss the challenges and opportunities that generative AI creates for businesses after hearing stories from industry researchers and practitioners
- Developers who have a good foundational understanding of how LLMs work, as well as the best practices behind training and deploying them, will be able to make good decisions for their companies and more quickly build working prototypes. This course will support learners in building practical intuition about how to best utilize this exciting new technology.
- This is an intermediate course, so you should have some experience coding in Python to get the most out of it. You should also be familiar with the basics of machine learning, such as supervised and unsupervised learning, loss functions, and splitting data into training, validation, and test sets. If you have taken the Machine Learning Specialization or Deep Learning Specialization from DeepLearning.AI, you will be ready to take this course and dive deeper into the fundamentals of generative AI.
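- To give a feel for the Python level the course assumes, below is a minimal sketch of prompting a small open LLM and adjusting its generative configuration (max new tokens, temperature, top-p sampling), topics that appear in the Week 1 curriculum. The Hugging Face transformers library and the flan-t5-base model used here are illustrative assumptions, not the course's own lab materials.

```python
# Minimal sketch (not from the course labs): prompt a small open LLM and set
# its generative configuration. The library and model choice are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # small instruction-tuned model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = (
    "Summarize the following dialogue:\n"
    "A: The build failed again.\n"
    "B: Check the logs and rerun the tests."
)
inputs = tokenizer(prompt, return_tensors="pt")

# Generative-configuration parameters of the kind covered in Week 1:
# maximum new tokens, sampling on/off, temperature, and top-p (nucleus) sampling.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```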
Generative AI with Large Language Models at Coursera Curriculum
Week 1
Course Introduction
Introduction - Week 1
Generative AI & LLMs
LLM use cases and tasks
Text generation before transformers
Transformers architecture
Generating text with transformers
Prompting and prompt engineering
Generative configuration
Generative AI project lifecycle
Introduction to AWS labs
Lab 1 walkthrough
Pre-training large language models
Computational challenges of training LLMs
Optional video: Efficient multi-GPU compute strategies
Scaling laws and compute-optimal models
Pre-training for domain adaptation
Contributor Acknowledgments
Transformers: Attention is all you need
Domain-specific training: BloombergGPT
Week 1 resources
Lecture Notes Week 1
Week 2
Introduction - Week 2
Instruction fine-tuning
Fine-tuning on a single task
Multi-task instruction fine-tuning
Model evaluation
Benchmarks
Parameter efficient fine-tuning (PEFT)
PEFT techniques 1: LoRA
PEFT techniques 2: Soft prompts
Lab 2 walkthrough
Scaling instruct models
Week 2 Resources
Lecture Notes Week 2
Week 3
Introduction - Week 3
Aligning models with human values
Reinforcement learning from human feedback (RLHF)
RLHF: Obtaining feedback from humans
RLHF: Reward model
RLHF: Fine-tuning with reinforcement learning
Optional video: Proximal policy optimization
RLHF: Reward hacking
Scaling human feedback
Lab 3 walkthrough
Model optimizations for deployment
Generative AI Project Lifecycle Cheat Sheet
Using the LLM in applications
Interacting with external applications
Helping LLMs reason and plan with chain-of-thought
Program-aided language models (PAL)
ReAct: Combining reasoning and action
LLM application architectures
Optional video: AWS SageMaker JumpStart
Responsible AI
Course conclusion
KL divergence
ReAct: Reasoning and action
Week 3 resources
Lecture Notes Week 3
Acknowledgments
(Optional) Opportunity to Mentor Other Learners