Core Concepts

AI Agent

An AI system that can take actions autonomously to achieve goals, rather than just responding to prompts.


What Makes an Agent Different

A regular AI chatbot waits for you to ask something, responds, and stops. An AI agent can plan, take actions, observe results, and adjust its approach without you guiding every step.

Think of the difference between a calculator (you push buttons, it computes) and a human assistant (you give a goal, they figure out how to achieve it).
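The plan, act, observe, adjust cycle described above can be sketched as a simple loop. This is a minimal illustration, not any particular product's implementation; the `plan` callback, the `tools` dictionary, and the toy counting example are all hypothetical.

```python
def run_agent(goal, tools, plan, max_steps=10):
    """Plan -> act -> observe loop: the core of an agent, versus a
    chatbot that answers a single prompt and stops."""
    history = []
    for _ in range(max_steps):
        step = plan(goal, history)            # decide the next action, or stop
        if step is None:                      # agent judges the goal achieved
            break
        name, args = step
        result = tools[name](*args)           # act
        history.append((name, args, result))  # observe; informs the next plan
    return history

# Toy example: "plan" keeps invoking a tool until enough steps have run.
def toy_plan(goal, history):
    return ("tick", ()) if len(history) < goal else None

tools = {"tick": lambda: "done"}
```

The key design point is the feedback edge: each action's result is appended to `history`, which the planner reads on the next iteration, so the agent can adjust course instead of executing a fixed script.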

Examples of AI Agents

Coding agents like Cursor's Composer or Claude Code can read your codebase, identify what needs changing, make edits across multiple files, run tests, and fix errors they find.

Research agents can search the web, read papers, synthesize information, and produce reports without you directing each search.

Automation agents can monitor systems, detect issues, and take corrective actions autonomously.

The Trust Problem

More autonomy means more potential for things to go wrong. An agent that can edit files can also delete them. An agent that can send emails can spam your contacts.

This is why agent reviews matter more than chatbot reviews. You need to know if an agent behaves safely and predictably before giving it access to your systems.

How to Evaluate Agents

When reviewing AI agents, consider:

  • What actions can it take?
  • What guardrails exist?
  • How does it handle errors?
  • Can you audit what it did?
  • How easily can you undo its actions?
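Several items on the checklist above (allowed actions, guardrails, error handling, auditability) can be made concrete with a small wrapper around tool execution. This is a hedged sketch under assumed names (`GuardedExecutor`, `audit_log` are illustrative, not a real product's API).

```python
class GuardedExecutor:
    """Wraps an agent's tool calls with an allowlist and an audit trail."""

    def __init__(self, allowed_actions):
        self.allowed = set(allowed_actions)  # "What actions can it take?"
        self.audit_log = []                  # "Can you audit what it did?"

    def execute(self, name, fn, *args):
        if name not in self.allowed:         # guardrail: refuse unlisted actions
            self.audit_log.append((name, args, "blocked"))
            return None
        try:
            result = fn(*args)
            self.audit_log.append((name, args, "ok"))
            return result
        except Exception as exc:             # "How does it handle errors?"
            self.audit_log.append((name, args, f"error: {exc}"))
            return None
```

For example, an executor created with `GuardedExecutor({"read_file"})` would run read actions but log and refuse a `delete_file` call, and every attempt, allowed or not, lands in `audit_log` for later review. Undo is the one checklist item this sketch omits: reversibility generally has to be designed into each action itself.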
