Automating Checks with Hooks
1. Automating Checks with Hooks
Skills gave you consistent workflows. But there's still one manual step left: running tests after each change. Hooks remove it.

2. The Manual Problem
Hooks trigger automatically when Claude acts, turning a habit you might forget into something that just happens. Every refactoring session follows the same pattern: change something, run pytest, change again, run pytest again. It's repetitive by design, and that's the problem. Repetitive manual steps get skipped. Hooks exist for exactly this reason.

3. What Are Hooks?
Before looking at hook types, it helps to understand what a tool is in Claude Code. A tool is any action Claude takes: reading a file, writing a file, running a command. Every time Claude does something, it uses a tool. Hooks are shell commands that run automatically at specific points around those tool calls, configured in .claude/settings.json. Think of them as "if this, then that" rules: if Claude writes a file, then run tests. Use the /hooks slash command to see which hooks are active.

4. The Project So Far
The test file from the previous video is already in place. Now it gets wired up: settings.json will tell Claude to run those tests automatically after every write operation.

5. Hook Types & Workflow
Every tool call Claude makes follows the same path. PreToolUse fires first: your chance to inspect or block before anything happens. Then the tool executes. Then PostToolUse fires: your chance to act on what just happened, like running tests or lint checks. The diagram shows this sequence. A request comes in, PreToolUse fires, the tool executes, PostToolUse fires, and the response goes back. Every tool call is a potential automation point.

6. Configuring a Hook
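As a sketch, a PostToolUse hook that runs pytest after every Write or Edit looks something like this (the field names follow Claude Code's hook settings schema; the bare pytest invocation is illustrative, and your project may need flags or a working-directory prefix):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "pytest"
          }
        ]
      }
    ]
  }
}
```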
This is the hook configuration in settings.json. The PostToolUse trigger fires after any Write or Edit operation. The matcher catches both, and the command runs pytest in the project directory automatically. That's the entire setup: one JSON block, and tests run on every file change.

7. Capturing Hook Output
But there's one problem: by default, you won't see hook execution results unless you capture them. The script conftest.py handles that. It's a pytest plugin that intercepts the test results when the session completes, logging them to a file and raising a system notification. Without it, the hook runs in the background and you'd have no way of knowing whether tests passed or failed.

8. Hook Output Log
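The capture plugin described above can be as small as one pytest hook function. Here is a minimal sketch, assuming a plain-text log file; the log path and message format are my illustration, not the course's actual script, and the platform-specific system notification is omitted:

```python
# conftest.py -- sketch of a pytest plugin that records each session's
# outcome, so a hook run in the background still leaves a visible trace.
# The log path and message format are assumptions for illustration.
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("hook-test-results.log")  # hypothetical location

def pytest_sessionfinish(session, exitstatus):
    """pytest calls this once after the whole test session completes."""
    status = "PASSED" if exitstatus == 0 else f"FAILED (exit code {exitstatus})"
    with LOG_FILE.open("a") as log:
        log.write(f"{datetime.now():%Y-%m-%d %H:%M:%S}  tests {status}\n")
```

Because pytest discovers conftest.py automatically, dropping this file in the project root is enough; no extra configuration in the hook command itself is needed.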
Here's the log file after a session. Every time Claude wrote a file, pytest ran automatically and the result was captured. A failure appears mid-session — the hook caught it immediately, before the next change was made. No manual step required.

9. When to Use Hooks
The flowchart walks you through the decision. Is the task repetitive? If not, skip the hook. If yes, could you forget it? If not, hold off. If yes, that's your hook candidate. Testing after edits checks both boxes. That's why it's the first hook most developers add.

10. Chapter 2 Recap
Three tools for customizing Claude Code. Agents for specialized tasks, skills for repeatable workflows, hooks for automatic enforcement. Each one builds on the last, and together they let you shape Claude Code around how your team actually works.

11. Let's Practice!
Hooks are the final piece: automation that runs whether you remember or not. In the exercises, you'll configure one yourself. Let's practice.