What is LoRA? Low-Rank Adaptation for finetuning LLMs EXPLAINED
AI Coffee Break with Letitia
Sep 18, 2023
42,285 views
LoRA explained (and a bit about precision and quantization)
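As background for the video's topic: LoRA (Low-Rank Adaptation) freezes the pretrained weight matrix W and learns only a low-rank update ΔW = B·A, scaled by alpha/r, which cuts the number of trainable parameters dramatically. A minimal pure-Python sketch of this idea (the function names and the example sizes are illustrative, not taken from the video):

```python
# Sketch of the LoRA idea (illustrative, not the video's own code):
# instead of updating a frozen weight matrix W (d x k), train two small
# matrices B (d x r) and A (r x k) with r << min(d, k), so the effective
# weight becomes W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply for nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A without modifying the frozen W."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * ba for w, ba in zip(w_row, ba_row)]
            for w_row, ba_row in zip(W, BA)]

def trainable_params(d, k, r):
    """LoRA trains r*(d + k) parameters instead of d*k for a full update."""
    return r * (d + k)

# Example: a 1024 x 1024 layer adapted with rank r = 8.
d = k = 1024
full = d * k                      # 1,048,576 parameters for a full update
lora = trainable_params(d, k, 8)  # 16,384 parameters with LoRA
print(full, lora, full // lora)   # → 1048576 16384 64
```

With rank 8 on a 1024×1024 layer, LoRA trains 64× fewer parameters than full finetuning, which is the efficiency argument the video's title points at.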