ReFT: Representation Finetuning for Language Models | AI Paper Explained
AI Papers Academy
Apr 10, 2024
2,933 views