Modern LLM APIs separate prompts into roles: system, user, and assistant. Use them right and your outputs become dramatically more reliable.
What goes where
- system: persona, rules, output format, safety boundaries. Set once per conversation.
- user: the actual task. The dynamic input.
- assistant: previous turns, or example outputs for few-shot.
A common mistake is dumping everything into a single user message. Models are trained to weight the system prompt more heavily, so instructions placed there are followed more consistently.
Code
// Right
const messages = [
  { role: "system", content: "You are a JSON-only API. Always respond with valid JSON matching the schema." },
  { role: "user", content: "Extract the email from: Contact us at sales@acme.io anytime." }
];

// Wrong (works, but less reliable)
const messages = [
  { role: "user", content: "You are a JSON-only API. Always respond with valid JSON. Extract the email from: Contact us at sales@acme.io anytime." }
];