advanced · 3h · 2 lessons
Security & Prompt Injection
If your product takes untrusted input and feeds it to an LLM, you have a security problem. Here's the mental model and the defenses.
By the end of this course you will be able to:
- Identify which features in your app are vulnerable to prompt injection
- Apply defense-in-depth: input filtering, output filtering, sandboxed tools
- Choose between guardrails libraries (NeMo, LlamaGuard, OpenAI moderation) for your stack