Primer to RAG: Pre-trained and Fine-Tuned LLMs
Welcome back to our module on LLM Architecture and RAG!
Up next is a series of learning resources created by Anup Surendran that sets the stage for your journey ahead. This video serves as a primer, acquainting you with key concepts such as pre-training, RLHF (Reinforcement Learning from Human Feedback), fine-tuning, and in-context learning.
Note: In some cases, a video is split into segments across modules to enhance your learning journey. If you see the same video in multiple modules with different timestamps, that's intentional. It's not a bug, it's a feature.
These aren't just buzzwords; they're your toolkit for unlocking the full potential of Large Language Models. Understanding these terms will be crucial as they lay the groundwork for our upcoming module, which delves into 'In-Context Learning.' So, stay tuned!
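To make the fine-tuning vs. in-context learning distinction concrete before the next module, here is a minimal sketch of in-context learning: rather than updating the model's weights (as fine-tuning does), we place a few labeled examples directly in the prompt and let the model infer the pattern. The sentiment-classification task, example reviews, and `build_prompt` helper below are illustrative assumptions, not taken from the video.

```python
# A minimal sketch of in-context (few-shot) learning.
# Hypothetical example pairs; any labeled task would work the same way.
few_shot_examples = [
    ("The movie was fantastic!", "positive"),
    ("I regret buying this.", "negative"),
]

def build_prompt(examples, query):
    """Assemble a few-shot prompt from (text, label) pairs.

    The model sees the pattern in the examples and is expected to
    continue it for the final, unlabeled query.
    """
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_prompt(few_shot_examples, "Absolutely loved it.")
print(prompt)
```

The resulting string would be sent as-is to a pre-trained LLM; no gradient updates are involved, which is exactly what separates in-context learning from fine-tuning.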