Generative Pre-trained Transformers (GPT)
- Offered by Coursera
Generative Pre-trained Transformers (GPT) at Coursera Overview
| Detail | Value |
| --- | --- |
| Duration | 12 hours |
| Start from | Start Now |
| Total fee | Free |
| Mode of learning | Online |
| Difficulty level | Intermediate |
| Official Website | Explore Free Course |
| Credential | Certificate |
Generative Pre-trained Transformers (GPT) at Coursera Highlights
- Earn a certificate from University of Glasgow
- Add to your LinkedIn profile
Generative Pre-trained Transformers (GPT) at Coursera Course details
- What you'll learn
- You will learn the foundations, implementations, applications, and risks of GPT.
- Large language models such as GPT-3.5, which powers ChatGPT, are changing how humans interact with computers and how computers process text. This course introduces the fundamental ideas of natural language processing and language modelling that underpin these models. We will explore the basics of how language models work and the specifics of how newer neural approaches are built, then examine the key innovations that have enabled Transformer-based large language models to dominate a wide range of language tasks. Finally, we will consider the challenges of applying these models in practice, including the ethical problems involved in their construction and use. Through hands-on labs, we will study the building blocks of Transformers and apply them to generate new text. These Python exercises step you through applying a smaller language model and understanding how it can be evaluated and used on various problems. Regular practice quizzes reinforce the material and prepare you for the graded assessments.
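The course supplies its own lab notebooks; purely as an illustration of the kind of exercise described above, a minimal sketch that loads a small GPT-2 model and generates text might look like the following. The Hugging Face `transformers` library and the `gpt2` checkpoint are assumptions here, not stated course dependencies.

```python
# Illustrative sketch only, not course material: load a small GPT-2 model
# with the Hugging Face `transformers` library and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # ~124M-parameter model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are changing"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; do_sample=True gives varied outputs.
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```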
Generative Pre-trained Transformers (GPT) at Coursera Curriculum
Language Modeling
Course and Instructor Introductions
What is a Language Model?
N-gram Language Models
Evaluating Language Models
Generating Text
Summary of Module
Optional Reading
Lesson Quiz
Lesson Quiz
Lesson Quiz
Module 1 Quiz
Building Your Own N-Gram Language Model
Calculating Perplexity from an N-Gram Language Model
Building a Simple Text Generator
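The three labs above use the course's own notebooks; the sketch below is not that code, just a minimal self-contained illustration of the same ideas: a bigram (n = 2) language model with add-one smoothing, perplexity on held-out text, and simple sampling-based generation.

```python
# Minimal sketch (not course material): bigram language model with
# add-one smoothing, perplexity, and a simple sampling-based generator.
import math
import random
from collections import Counter

train = "the cat sat on the mat and the cat ate the fish".split()
vocab = sorted(set(train))

unigram_counts = Counter(train)
bigram_counts = Counter(zip(train, train[1:]))

def prob(nxt, prev):
    # P(nxt | prev) with add-one (Laplace) smoothing
    return (bigram_counts[(prev, nxt)] + 1) / (unigram_counts[prev] + len(vocab))

def perplexity(words):
    # exp of the average negative log-probability per predicted word
    logp = sum(math.log(prob(w2, w1)) for w1, w2 in zip(words, words[1:]))
    return math.exp(-logp / (len(words) - 1))

def generate(start, n_words=8):
    words = [start]
    for _ in range(n_words):
        weights = [prob(w, words[-1]) for w in vocab]
        words.append(random.choices(vocab, weights=weights)[0])
    return " ".join(words)

print(perplexity("the cat sat on the mat".split()))
print(generate("the"))
```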
Transformers and GPT
Introduction to Week
Representing text with numerical vectors
Words, tokens or sub-tokens
Neural language models
The Transformer architecture
How are Transformers trained?
GPT and BERT
Using GPT
The ethical challenges of large language models
Wrap-up of the module
On the Dangers of Stochastic Parrots
Lesson Quiz
Lesson Quiz
Lesson Quiz
Module 2 Quiz
Tokenization
Language Modelling with Transformers
Calculating perplexity
Language Generation with Transformers
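Again as a rough sketch rather than the actual lab code, the snippet below illustrates two ideas covered by this module's labs: sub-word tokenization, and computing a Transformer language model's perplexity as the exponential of its average cross-entropy loss. The library and model choice are assumptions, not taken from the course.

```python
# Illustrative sketch (not course material): sub-word tokenization and
# perplexity with a small GPT-2 model via Hugging Face `transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Less frequent words are split into sub-tokens by the BPE tokenizer.
print(tokenizer.tokenize("Tokenization underpins Transformers"))

# Perplexity = exp(mean cross-entropy loss over the token sequence).
enc = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss
print(torch.exp(loss).item())
```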
Applications and Implications
Introduction to the Module
Hallucinations in LLMs
Chatbots, RLHF, and ChatGPT
Academic conduct
Creativity and copyright
Regulation and risks
Course Recap
Optional: ChatGPT Lawyer details
Required: OpenAI blog post about ChatGPT and RLHF
Required: Research about AI detectors and non-native speakers
Optional: Ouyang et al. (2022) paper about InstructGPT
Optional: News stories about the humans in RLHF
Optional: Russell Group and University of Glasgow Principles on AI in Education
Required: Scientific American article on AI risks
Optional: AI-generated e-books and journalism
Optional: Generative AI, copyright, and Hollywood strikes
Optional: Open letters
Optional: Information on draft legislation
Optional: One Hundred Year Study on Artificial Intelligence
Lesson Quiz
Lesson Quiz
Lesson Quiz
Module 3 Quiz
Basic Chatbot
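The final lab builds a basic chatbot. As a hedged sketch of the general pattern, not the course's own notebook, a minimal prompt-loop chatbot around a small generative model could look like this. Note that small base models such as GPT-2 are not instruction-tuned, so replies will be rough; producing useful chat behaviour requires the RLHF-style tuning discussed in this module.

```python
# Illustrative sketch (not course material): a minimal prompt-loop "chatbot"
# around a small generative model, showing only the basic pattern of
# accumulating conversation history and generating each reply.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = ""
while True:
    user = input("You: ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    history += f"User: {user}\nBot:"
    # generated_text includes the prompt, so slice it off below.
    reply = generator(history, max_new_tokens=40, do_sample=True,
                      pad_token_id=50256)[0]["generated_text"]
    bot_line = reply[len(history):].split("\n")[0].strip()
    print("Bot:", bot_line)
    history += f" {bot_line}\n"
```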