
Fine-tuning

The process of taking a pre-trained AI model and training it further on specific data to adapt its behavior to particular tasks.


Why Fine-tune?

Base AI models are generalists. They know a lot about everything but aren't specialized for your specific needs.

Fine-tuning teaches a model your:

  • Writing style
  • Domain terminology
  • Specific formats
  • Company knowledge

After fine-tuning, the model responds more naturally for your use case without extensive prompting.

Fine-tuning vs Prompting

Prompting: Give instructions each time. "Write in a casual tone. Use these terms. Format like this..."

Fine-tuning: Train the model once. It just knows your preferences automatically.

Fine-tuning is more work upfront but saves effort (and tokens) long-term if you're using the model repeatedly.
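The tradeoff above can be sketched in code. This is an illustrative sketch, not any particular vendor's API: it just builds chat-style message lists both ways to show where the instructions live and why the fine-tuned call sends fewer tokens per request.

```python
# Illustrative sketch: the same request expressed two ways.
# With prompting, style instructions travel with every call; with a
# fine-tuned model, the preferences are baked into the weights.

STYLE_INSTRUCTIONS = (
    "Write in a casual tone. Use our product terminology. "
    "Format answers as short bullet lists."
)

def prompted_request(user_message: str) -> list[dict]:
    """Base model: repeat the style instructions on every call."""
    return [
        {"role": "system", "content": STYLE_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]

def finetuned_request(user_message: str) -> list[dict]:
    """Fine-tuned model: the style lives in the weights, not the prompt."""
    return [{"role": "user", "content": user_message}]

q = "Summarize last week's release notes."
# Same intent, but the fine-tuned version carries no standing instructions,
# so every repeated call is cheaper.
assert len(str(finetuned_request(q))) < len(str(prompted_request(q)))
```

The per-call saving is small, but it compounds: if you make thousands of similar requests, the instructions you no longer resend add up.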

When Fine-tuning Makes Sense

  • You have consistent, specific requirements
  • You're making many similar requests
  • Prompting alone isn't getting the quality you need
  • You have training data that represents what you want

When Fine-tuning Doesn't Make Sense

  • Your needs vary significantly between tasks
  • You don't have good training examples
  • You're just experimenting
  • A better base model might solve your problem

The RAG Alternative

For many use cases, RAG (retrieval-augmented generation) works better than fine-tuning. Instead of changing the model, you feed it relevant documents at query time. This is often easier and more flexible than fine-tuning.
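The RAG idea fits in a few lines. This sketch uses naive keyword overlap as the retriever purely for illustration (production systems use vector embeddings, but the shape is the same): score documents against the query, then prepend the best match to the prompt while leaving the model untouched.

```python
import re

# Toy document store; in practice this would be your company docs.
DOCS = [
    "Refund policy: customers may request a refund within 30 days.",
    "Shipping: orders ship within 2 business days.",
    "Warranty: hardware is covered for one year from purchase.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Naive relevance: count of shared words."""
    return len(words(query) & words(doc))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble retrieved context + question; the model itself is unchanged."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

Because the documents are fetched at query time, updating the model's knowledge is just updating the document store, with no retraining.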
