Low-rank Adaption of Large Language Models: Explaining the Key Concepts Behind LoRA
Chris Alexiuk
Apr 30, 2023
106,913 views