Low-Rank Adaptation - LoRA explained
AI Bites
Dec 14, 2023