The ALPACA Code explained: Self-instruct fine-tuning of LLMs
Discover AI
Apr 10, 2023
7,261 views