Course Syllabus and Timelines
By the end of this course, you will:
Be proficient in developing production-ready LLM-based applications from day 0.
Have a clear understanding of LLM architectures and pipelines.
Be able to apply prompt engineering when using generative AI tools such as ChatGPT.
Create an open-source project built on a real-time data stream or static data.
Once the problem statements and the hands-on development module are released on 20 June 2024, project submissions will remain open until 9 July 2024, 11:59 pm CEST.
Throughout the bootcamp, you'll see some modules or links labeled as bonus resources. These are not compulsory for building a project by the end of the bootcamp or for attempting the quizzes.
Nonetheless, they are relevant resources that can deepen your understanding, although they might require additional prerequisites. Depending on your starting point and the pace at which you're progressing through the bootcamp, you can explore these bonus materials now or set them aside for later.
1 – Basics of LLMs
What generative AI is and how it differs
Understanding LLMs
Advantages and Common Industry Applications
Bonus section: Google Gemini and Multimodal LLMs
-- Released
2 – Word Vectors
What are word vectors and word-vector relationships? (a short sketch follows this module)
Role of context
Transforming vectors in LLM responses
Overview of Transformers Architecture
Bonus Resource: Transformers Architecture, Self-attention, Multi-head attention, and Vision Transformers
Bonus Resource: Talk on Future of LLMs by the Co-Creator of ChatGPT, Łukasz Kaiser
-- Release date: 13 June, Thursday
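For a quick feel of what this module covers, here is a minimal sketch of comparing word vectors with cosine similarity. The three-dimensional vectors below are made-up toy values, not the output of any real embedding model (real embeddings typically have hundreds of dimensions).

```python
# Illustrative only: cosine similarity between toy word vectors.
# The vectors are invented for demonstration; real embeddings come
# from a trained model and are much higher-dimensional.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Measure how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

king  = np.array([0.80, 0.65, 0.10])   # hypothetical embedding
queen = np.array([0.78, 0.70, 0.12])   # hypothetical embedding
apple = np.array([0.10, 0.20, 0.90])   # hypothetical embedding

print(cosine_similarity(king, queen))  # high score: related words
print(cosine_similarity(king, apple))  # low score: unrelated words
```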
3 – Prompt Engineering
Introduction and in-context learning
Best practices to follow: Few-Shot Prompting and more (a brief example follows this module)
Token Limits
Prompt Engineering Exercise (Ungraded)
-- Release date: 18 June, Tuesday (Revised)
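As a small preview of the module, below is a sketch of how a few-shot prompt can be assembled as plain text. The classification task and example reviews are hypothetical; the resulting string could be sent to any chat-completion API.

```python
# Illustrative only: building a few-shot prompt as a plain string.
# The labeled examples show the model the expected input/output format.
few_shot_examples = [
    ("I loved the battery life.", "positive"),
    ("The screen cracked within a week.", "negative"),
]

def build_prompt(review: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # The final review is left unanswered for the model to complete.
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

print(build_prompt("Setup was quick and painless."))
```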
Refresher Module
Overview of learnings so far, sent to your registered email address.
Release of bootcamp keynote session(s).
-- Will be sent via email to registered email IDs on 20 June.
4 – RAG and LLM Architecture
Introduction to RAG
LLM Architecture Used by Enterprises
RAG vs Fine-Tuning and Prompt Engineering
Key Benefits of RAG for Realtime Applications
Bonus: Similarity Search for Efficient Information Retrieval (illustrated in the sketch after this module)
Bonus: Use of LSH + kNN and Incremental Indexing
Bonus: Forgetting in LLMs and Stream Data Processing (archived live interactions)
-- Release date: 20 June, Thursday (revised)
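As an illustrative preview of the RAG pattern covered in this module, the sketch below retrieves the documents most similar to a question and prepends them to the prompt. The embed function is a deliberately crude stand-in; a real application would use a proper embedding model and a vector index (for example, the similarity-search and LSH + kNN techniques listed above).

```python
# Minimal sketch of the RAG idea: retrieve the most relevant documents,
# then prepend them to the user's question before calling an LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Crude stand-in for a real embedding model: hash words into a small vector.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

documents = [
    "Pathway processes streaming data and keeps indexes up to date.",
    "RAG augments prompts with documents retrieved at query time.",
    "Fine-tuning changes model weights using additional training data.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How is RAG different from fine-tuning?"
context = "\n".join(retrieve(question))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM
```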
5 – Hands-on Development of Realtime LLM Applications
Installing Dependencies and Pre-requisites
Building a Dropbox RAG App using open-source tools
Building a Realtime Discounted Products Fetcher for Amazon Users
Building RAG applications with local models
Leveraging Pathway with LlamaIndex/Langchain (Bonus)
Problem Statements for Projects
Project Submission
-- Release date: 22 June, Saturday
6 – Project Development
Release of Problem Statements for the Projects
Window for Sharing/Reviewing Project Ideas via Discord Channel
Online Office Hours
Project Submission
Project Feedback (after the submissions deadline)
-- Release date: 22 June – 9 July