SEO Workflows & Prompt Library:
Ready-to-Use Templates for Daily Work
This tutorial moves from theory to practice. You'll build a shared prompt library — a collection of battle-tested, reusable prompts for the most common technical SEO tasks. By the end, your whole team will be working faster, producing more consistent output, and spending less time writing the same prompts from scratch every day. In this tutorial, you will:
- Understand the value of a shared, standardised prompt library
- Use 6 production-ready prompts for core SEO workflows
- Learn how to feed large data (Screaming Frog exports, etc.) into Claude effectively
- Adapt templates for your specific clients and methodologies
- Know how to iterate within a conversation to refine Claude's output
1. What is a Prompt Library and Why Does it Matter?
A prompt library is a curated collection of proven, reusable prompts stored somewhere your whole team can access — a Claude Project, a shared Notion doc, a Google Doc, or a team wiki. It's the Claude equivalent of having agency templates for reports and briefs.
Without one, every team member invents their own prompts from scratch. The results are inconsistent, the quality varies, and time is wasted. With one, a junior analyst can produce output indistinguishable from a senior's — because they're using the same carefully crafted prompt.
What makes a good library prompt?
It has a clear, single purpose
One prompt per task. "Analyse title tags AND write meta descriptions AND check schema" is three prompts, not one.
It uses placeholders for variable content
Mark the parts that change each time with [BRACKETS] so users know exactly what to swap out.
It specifies the output format explicitly
Table, numbered list, JSON, client paragraph — never leave this to chance.
It has been tested and refined
The first draft of a prompt is rarely the best. Run it 5–10 times, note where it goes wrong, and update it.
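The placeholder convention above can also be enforced programmatically. Here's a minimal sketch of a helper (the function name and prompt text are illustrative, not from the tutorial) that fills [BRACKETED] placeholders and refuses to produce a prompt with any left unfilled:

```python
import re

def fill_placeholders(template: str, values: dict) -> str:
    """Replace [BRACKETED] placeholders in a library prompt with real values.

    Raises ValueError if any placeholder is left unfilled, so a half-edited
    prompt never gets sent to Claude. (Illustrative helper, not part of the
    tutorial's prompts.)
    """
    filled = template
    for name, value in values.items():
        filled = filled.replace(f"[{name}]", value)
    leftover = re.findall(r"\[[A-Z ]+\]", filled)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return filled

prompt = fill_placeholders(
    "You are an SEO specialist for a [INDUSTRY] client targeting [TARGET AUDIENCE].",
    {"INDUSTRY": "travel", "TARGET AUDIENCE": "budget backpackers"},
)
```

A hard failure on leftover placeholders is deliberate: a prompt sent with a literal "[INDUSTRY]" in it is the most common way template reuse goes wrong.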
Your library index — the six prompts below make up the library built in this tutorial.
2. Prompt #1 — Title Tag Reviewer
Use this when you have a batch of title tags to review — from a crawl, a client spreadsheet, or a content audit. It evaluates each one against SEO best practice and produces rewritten alternatives.
Getting data from Screaming Frog: Open the Page Titles tab and click Export. Open the export in a spreadsheet, copy the "Address" and "Title 1" columns, and paste them into the prompt. You don't need to clean the data first — Claude handles messy input well.
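If you'd rather skip the spreadsheet step, the two columns can be pulled straight from the export with a few lines of stdlib Python. This is a sketch assuming Screaming Frog's default column names ("Address", "Title 1"); adjust if your export differs:

```python
import csv
import io

def extract_title_rows(export_csv: str):
    """Pull just the Address and Title 1 columns from a Screaming Frog
    page-titles export, ready to paste into the prompt.

    Assumes the default Screaming Frog column headers.
    """
    reader = csv.DictReader(io.StringIO(export_csv))
    return [(row["Address"], row["Title 1"]) for row in reader]

# A tiny hypothetical export for illustration
sample = """Address,Status Code,Title 1,Title 1 Length
https://example.com/,200,Home,4
https://example.com/blog,200,Blog | Example,14
"""

rows = extract_title_rows(sample)
# Tab-separated pairs paste cleanly into a Claude message
paste_block = "\n".join(f"{url}\t{title}" for url, title in rows)
```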
3. Prompt #2 — Screaming Frog Export Prioritiser
After a crawl, you have a mountain of data. This prompt takes a pasted summary or list of issues from a Screaming Frog export and turns it into a prioritised action plan, saving the 30–60 minutes it can take to manually triage findings.
Data size: Claude has a context limit. If your crawl export has thousands of rows, don't paste the whole thing. Instead, paste: (a) the Overview tab summary statistics, (b) the Issues tab list, or (c) a filtered export of the most severe issues only. You can always run the prompt multiple times for different issue categories.
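Option (c) — filtering to the most severe issues — is easy to automate before pasting. A minimal sketch, assuming High/Medium/Low severity labels in your export (the function name and row cap are illustrative):

```python
def filter_by_severity(issues, max_rows=50):
    """Keep only the most severe crawl issues so the paste stays well
    inside Claude's context window.

    Assumes each issue dict has a 'severity' of High/Medium/Low;
    unknown labels sort last. The max_rows cap is arbitrary.
    """
    rank = {"High": 0, "Medium": 1, "Low": 2}
    severe = sorted(issues, key=lambda i: rank.get(i["severity"], 3))
    return severe[:max_rows]

# Hypothetical issue list for illustration
issues = [
    {"issue": "Missing H1", "severity": "Low"},
    {"issue": "5xx server errors", "severity": "High"},
    {"issue": "Duplicate titles", "severity": "Medium"},
]
top = filter_by_severity(issues, max_rows=2)
```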
4. Prompt #3 — Schema Markup Reviewer
Schema errors are easy to miss visually but have a clear impact on rich result eligibility. This prompt validates structured data against Google's guidelines and suggests improvements — faster and more reliably than manually scanning JSON-LD.
Where to get the JSON-LD: In Chrome DevTools, go to Elements and search for application/ld+json. Or use View Source (Ctrl+U) and Ctrl+F to find it. Copy everything between the <script> tags.
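If you're pulling JSON-LD from many pages, extracting those script blocks can be scripted instead of done by hand. A sketch using Python's stdlib HTML parser (the class name is mine; the `application/ld+json` type attribute is the standard one):

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the parsed contents of every
    <script type="application/ld+json"> block on a page."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

# Hypothetical page source for illustration
html_page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "headline": "Hello"}
</script>
</head><body></body></html>"""

parser = JSONLDExtractor()
parser.feed(html_page)
```

Each entry in `parser.blocks` is ready to paste into the Schema Markup Reviewer prompt, and `json.loads` will raise immediately on malformed JSON-LD, which is itself a finding worth reporting.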
5. Prompt #4 — Crawl Error Explainer
One of the highest-value uses of Claude in agency work is translating technical findings into client-friendly language. This prompt takes a raw crawl error — the kind that means nothing to a marketing director — and turns it into a clear, professional explanation with context and next steps.
6. Prompt #5 — Content Gap Analyser
This prompt takes two keyword clusters — typically your client's current ranking topics versus a competitor's — and identifies gaps, overlaps, and quick-win opportunities. It's designed for strategy sessions and quarterly reviews.
Where to get keyword sets: Export ranking keywords from Google Search Console (Performance → Queries), Ahrefs, or Semrush. For competitor keywords, use Ahrefs' "Organic Keywords" report for their domain. Paste the keyword column only — you don't need the metrics.
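The gap/overlap split the prompt produces is, at its core, set arithmetic, which you can also run locally as a sanity check on the exported keyword columns. A sketch (function name and normalisation rules are illustrative):

```python
def gap_analysis(client_keywords, competitor_keywords):
    """Classify two keyword lists into gaps, overlaps, and client-only terms.

    Normalises case and whitespace so 'SEO Audit' and 'seo audit' match.
    Illustrative helper, not part of the tutorial's prompts.
    """
    def norm(kws):
        return {k.strip().lower() for k in kws if k.strip()}

    client, competitor = norm(client_keywords), norm(competitor_keywords)
    return {
        # Competitor ranks, client doesn't: candidate quick wins
        "gaps": sorted(competitor - client),
        "overlap": sorted(client & competitor),
        "client_only": sorted(client - competitor),
    }

result = gap_analysis(
    ["seo audit", "technical seo", "site speed"],
    ["SEO Audit", "schema markup", "site speed"],
)
```

Claude adds the judgement layer on top: which gaps are realistic targets, and which overlaps are worth defending.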
7. Prompt #6 — Search Console Performance Interpreter
Google Search Console data is rich but time-consuming to analyse manually. This prompt takes a pasted export and surfaces the most important insights — declining queries, cannibalisation signals, and CTR optimisation opportunities.
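One of the CTR checks the prompt performs can be sketched in plain Python: flag queries with many impressions but a weak click-through rate, the classic title/meta rewrite candidates. The thresholds below are illustrative, not official benchmarks:

```python
def ctr_opportunities(rows, min_impressions=1000, max_ctr=0.02):
    """Flag queries with high impressions but low CTR.

    Thresholds (1,000 impressions, 2% CTR) are arbitrary examples;
    tune them per client. Assumes GSC-style rows with 'query',
    'clicks', and 'impressions' keys.
    """
    flagged = []
    for row in rows:
        ctr = row["clicks"] / row["impressions"] if row["impressions"] else 0.0
        if row["impressions"] >= min_impressions and ctr < max_ctr:
            flagged.append({**row, "ctr": round(ctr, 4)})
    # Biggest audiences first
    return sorted(flagged, key=lambda r: r["impressions"], reverse=True)

# Hypothetical GSC export rows for illustration
gsc_rows = [
    {"query": "seo audit checklist", "clicks": 12, "impressions": 4000},
    {"query": "what is seo", "clicks": 300, "impressions": 5000},
    {"query": "niche term", "clicks": 1, "impressions": 40},
]
opportunities = ctr_opportunities(gsc_rows)
```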
8. Iterating Within a Conversation
One of Claude's most powerful features is that it remembers everything within a conversation. After running any of the prompts above, you can follow up with refinement requests without repeating the original data. This is called iterative prompting.
| After running… | You can follow up with… |
|---|---|
| Title Tag Reviewer | "Rewrite the bottom 5 titles again, this time making them more question-based to target featured snippets." |
| Screaming Frog Prioritiser | "Focus only on the Critical items. Write a developer brief for each one with specific acceptance criteria." |
| Schema Reviewer | "Now add FAQ schema to the corrected version using these Q&As: [paste Q&As]" |
| Content Gap Analyser | "For the top 3 quick wins, write a one-paragraph content brief outline." |
| GSC Interpreter | "Turn the Recommended Focus Areas into a slide-ready bullet point summary for a client presentation." |
Conversation context tip: Claude remembers all the data you pasted earlier in the conversation. You never need to paste it again. Just refer to it naturally: "From the data I shared earlier…" or "Looking at the title tags above…"
9. Practice Exercises
Use Prompt #1 with real data from one of your clients:
- Export 10–20 title tags from Screaming Frog or pull them manually
- Fill in the [INDUSTRY] and [TARGET AUDIENCE] placeholders
- Run the prompt and review the output table
- Follow up: "Rewrite the three lowest-scoring titles again, this time emphasising [specific value prop]"
- Note: does the output quality change if you remove the role definition at the start?
Identify a repetitive task in your own workflow that isn't covered above, and build a prompt for it:
- Pick a task you do at least once a week (e.g. hreflang audits, redirect mapping, outreach email drafts)
- Write a first-draft prompt using the RRCF structure from Tutorial 1
- Test it 3 times with different inputs and note where it fails or produces weak output
- Refine it based on what you observe — add constraints, better format instructions, or examples
- Share it with a colleague and ask them to test it — can they use it without asking you questions?
Set up a shared "Prompt Library" Claude Project for your team:
- Create a new Project in Claude called "Agency Prompt Library"
- In the Project Instructions, paste all 6 prompts from this tutorial as a reference document
- Add a note at the top explaining what each prompt is for and when to use it
- Invite your team members to the Project
- Run a 15-minute team session where everyone tests one prompt they haven't used before
10. Summary
Key takeaway: These 6 prompts should save your team several hours per week collectively. The real multiplier comes when the whole team is using the same refined templates — consistent quality, faster delivery, and a shared standard for what "good output" looks like.