Fine-Tuning

The process of further training a pre-trained model on a specific dataset to adapt it for a particular task or domain.

Fine-tuning is a transfer learning technique where a pre-trained model is further trained on a smaller, task-specific dataset. This adapts the model's knowledge to a particular domain, style, or task without training from scratch.

For LLMs, fine-tuning can teach the model to follow specific output formats, adopt a particular writing style, or gain expertise in a niche domain. Common approaches include full fine-tuning (updating all parameters), LoRA (Low-Rank Adaptation, which freezes the base weights and trains small added low-rank matrices), and QLoRA (LoRA applied to a quantized base model for memory efficiency).
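The core idea behind LoRA can be sketched in a few lines: keep the pre-trained weight matrix frozen and learn only a small low-rank update added to it. This is a minimal NumPy illustration (not a training loop); the layer size, rank, and scaling values are made up for the example.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA rank, chosen for illustration.
d_out, d_in, r = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, so the update starts at 0

alpha = 16  # scaling hyperparameter; the effective update is (alpha / r) * B @ A

def forward(x):
    # Output of the adapted layer: frozen weights plus the low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA trains instead
print(f"full fine-tuning: {full_params} params; LoRA: {lora_params} params")
```

Because B starts at zero, the adapted layer initially behaves exactly like the pre-trained one, and training only A and B here touches 8,192 parameters instead of 262,144.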

Fine-tuning is appropriate when you need consistent behavior that prompt engineering alone can't achieve, when you have high-quality training examples, or when you need to reduce token usage by encoding instructions into the model's weights.

The process typically involves preparing a dataset of input-output pairs, choosing a base model, configuring hyperparameters (learning rate, epochs, batch size), and evaluating the fine-tuned model against a held-out test set.
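The dataset-preparation step above can be sketched as follows: build input-output pairs, shuffle them, split off a held-out test set, and write the training set as JSONL (one JSON object per line), a format many fine-tuning tools accept. The example pairs, field names, and filename here are illustrative assumptions, not a specific provider's schema.

```python
import json
import random

# Made-up input-output pairs standing in for a real task-specific dataset.
examples = [
    {"input": f"Translate to French: sentence {i}", "output": f"Phrase {i} en français"}
    for i in range(10)
]

random.seed(42)           # deterministic shuffle for reproducibility
random.shuffle(examples)

split = int(0.8 * len(examples))            # 80/20 train/test split
train, test = examples[:split], examples[split:]

# JSONL: one training example per line (filename is hypothetical).
with open("train.jsonl", "w") as f:
    for ex in train:
        f.write(json.dumps(ex) + "\n")

print(f"{len(train)} training examples, {len(test)} held-out test examples")
```

The held-out `test` split is what the fine-tuned model is later evaluated against; keeping it out of training is what makes that evaluation meaningful.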
