Claude Training Program · Week 2 · Efficiency Track

SEO Workflows & Prompt Library:
Ready-to-Use Templates for Daily Work

📖 Tutorial 2 of 8 · ⏱ 60–75 minutes · 👥 All team members · 🎯 Prerequisite: Tutorial 1

This tutorial moves from theory to practice. You'll build a shared prompt library — a collection of battle-tested, reusable prompts for the most common technical SEO tasks. By the end, your whole team will be working faster, producing more consistent output, and spending less time writing the same prompts from scratch every day.

Learning Objectives
  • Understand the value of a shared, standardised prompt library
  • Use 6 production-ready prompts for core SEO workflows
  • Learn how to feed large data (Screaming Frog exports, etc.) into Claude effectively
  • Adapt templates for your specific clients and methodologies
  • Know how to iterate within a conversation to refine Claude's output

1. What is a Prompt Library and Why Does it Matter?

A prompt library is a curated collection of proven, reusable prompts stored somewhere your whole team can access — a Claude Project, a shared Notion doc, a Google Doc, or a team wiki. It's the Claude equivalent of having agency templates for reports and briefs.

Without one, every team member invents their own prompts from scratch. The results are inconsistent, the quality varies, and time is wasted. With one, a junior analyst can produce output indistinguishable from a senior's — because they're using the same carefully crafted prompt.

What makes a good library prompt?

It has a clear, single purpose

One prompt per task. "Analyse title tags AND write meta descriptions AND check schema" is three prompts, not one.

It uses placeholders for variable content

Mark the parts that change each time with [BRACKETS] so users know exactly what to swap out (see the sketch after this list).

It specifies the output format explicitly

Table, numbered list, JSON, client paragraph — never leave this to chance.

It has been tested and refined

The first draft of a prompt is rarely the best. Run it 5–10 times, note where it goes wrong, and update it.
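If your team ever fills the [BRACKETS] placeholders programmatically rather than by hand, plain string replacement is enough. A minimal Python sketch — the shortened prompt and the fill helper are illustrative, not part of any Claude tooling:

```python
PROMPT = (
    "You are a senior technical SEO specialist. Review the following "
    "title tags for a [INDUSTRY] website targeting [TARGET AUDIENCE]."
)

def fill(template, values):
    # Swap each [PLACEHOLDER] for its supplied value.
    for key, value in values.items():
        template = template.replace(f"[{key}]", value)
    return template

print(fill(PROMPT, {"INDUSTRY": "ecommerce", "TARGET AUDIENCE": "first-time buyers"}))
```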

Your library index — prompts in this tutorial

  #1 · Title Tag Reviewer: batch analysis with rewrites
  #2 · Screaming Frog Prioritiser: turn crawl exports into action plans
  #3 · Schema Markup Reviewer: validate and improve structured data
  #4 · Crawl Error Explainer: client-ready issue descriptions
  #5 · Content Gap Analyser: keyword cluster comparison
  #6 · Search Console Interpreter: performance data analysis

2. Prompt #1 — Title Tag Reviewer

Use this when you have a batch of title tags to review — from a crawl, a client spreadsheet, or a content audit. It evaluates each one against SEO best practice and produces rewritten alternatives.

Prompt #1 · Title Tag Reviewer · On-page SEO

You are a senior technical SEO specialist. Review the following title tags for a [INDUSTRY] website targeting [TARGET AUDIENCE].

For each title tag, evaluate:
- Keyword placement (target keyword should appear early)
- Character length (aim for 50–60 chars; flag if over 60 or under 30)
- Uniqueness and descriptiveness (avoid generic tags)
- Click-worthiness and alignment with search intent
- Keyword stuffing (flag any instances)

// Paste title tags below, one per line, in this format:
// [Page name] | [Title tag]

[PASTE TITLE TAGS HERE]

Respond with a markdown table using these columns:
| Page | Current Title | Length | Score /10 | Issues | Rewritten Title |

After the table, add a "Key Patterns" section (3–5 bullet points) summarising the most common issues across all pages.

Getting data from Screaming Frog: open the Page Titles tab and click Export. Open the file in a spreadsheet, copy the "Address" and "Title 1" columns, and paste them into the prompt. You don't need to clean the data first — Claude handles messy input well.
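If you'd rather prepare the paste with a script than a spreadsheet, here is a minimal pandas sketch. The filename is whatever you saved the export as; "Address" and "Title 1" match the standard Screaming Frog column names, but check yours:

```python
import pandas as pd

# Load the Screaming Frog title export (filename is an assumption).
df = pd.read_csv("title_export.csv")

# Keep only the two columns the prompt needs: page URL and title tag.
pairs = df[["Address", "Title 1"]].dropna()

# Print in the "[Page name] | [Title tag]" format Prompt #1 expects,
# ready to copy-paste into Claude.
for _, row in pairs.iterrows():
    print(f"{row['Address']} | {row['Title 1']}")
```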

3. Prompt #2 — Screaming Frog Export Prioritiser

After a crawl, you have a mountain of data. This prompt takes a pasted summary or list of issues from a Screaming Frog export and turns it into a prioritised action plan, saving the 30–60 minutes it can take to manually triage findings.

Data size: Claude has a context limit. If your crawl export has thousands of rows, don't paste the whole thing. Instead, paste: (a) the Overview tab summary statistics, (b) the Issues tab list, or (c) a filtered export of the most severe issues only. You can always run the prompt multiple times for different issue categories.
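Option (c) is easy to script. A sketch, assuming a CSV export with an "Issue Priority" column — adjust the filename and column name to match your Screaming Frog version, as both are assumptions here:

```python
import pandas as pd

# Load the full issues export.
issues = pd.read_csv("issues_overview.csv")

# Keep only the most severe rows so the paste stays well inside
# Claude's context limit.
severe = issues[issues["Issue Priority"].isin(["Critical", "High"])]

severe.to_csv("issues_for_claude.csv", index=False)
print(f"Kept {len(severe)} of {len(issues)} issues")
```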

Prompt #2 · Screaming Frog Export Prioritiser · Technical Audit

You are a senior technical SEO specialist preparing an audit report for a client.

CLIENT CONTEXT
- Website: [DOMAIN]
- Industry: [INDUSTRY]
- Site size: [NUMBER] pages crawled
- Primary goal: [e.g. improve crawlability / fix indexation issues / pre-launch QA]

Below is a summary of issues from a Screaming Frog crawl. Prioritise these issues into a structured action plan.

[PASTE SCREAMING FROG ISSUES / OVERVIEW DATA HERE]

For each issue, assess:
1. Likely SEO impact (High / Medium / Low)
2. Estimated implementation effort (Quick win / Standard / Complex)
3. Whether it affects crawling, indexing, or ranking

Respond with:
- A prioritised table: | Priority | Issue | # Affected URLs | SEO Impact | Effort | Action Required |
- Sort by: Critical first, then High, Medium, Low
- After the table, write a 2–3 sentence "Executive Summary" suitable for a non-technical client
- Do not explain what each issue type is — keep recommendations concise and action-focused

4. Prompt #3 — Schema Markup Reviewer

Schema errors are easy to miss visually but have a clear impact on rich result eligibility. This prompt validates structured data against Google's guidelines and suggests improvements — faster and more reliably than manually scanning JSON-LD.

Prompt #3 · Schema Markup Reviewer · Structured Data

You are a structured data specialist with deep knowledge of Schema.org vocabulary and Google's rich result guidelines.

Review the following JSON-LD schema markup. The page is a [PAGE TYPE: e.g. Product / Article / FAQ / LocalBusiness / Recipe] page.

[PASTE JSON-LD SCHEMA HERE]

Evaluate the schema against:
1. Required properties for this schema type per Google's documentation
2. Recommended properties that would improve rich result eligibility
3. Common errors: missing @context, incorrect property names, invalid data types, nesting errors
4. Any properties that are present but incorrectly formatted

Respond with:
- Section 1 — "Errors": Any issues that would cause Google to reject the schema for rich results
- Section 2 — "Warnings": Issues that won't break it but reduce effectiveness
- Section 3 — "Missing Recommended Properties": What to add to maximise rich result eligibility
- Section 4 — "Corrected Schema": A clean, corrected version of the full JSON-LD

Format Sections 1–3 as numbered lists. Format Section 4 as valid JSON.

Where to get the JSON-LD: In Chrome DevTools, go to Elements and search for application/ld+json. Or use View Source (Ctrl+U) and Ctrl+F to find it. Copy everything between the <script> tags.
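If you'd rather pull the markup with a script than through DevTools, this sketch fetches a page and prints every JSON-LD block it finds. It assumes requests and BeautifulSoup are installed; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/product-page"  # placeholder: the page to audit

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Every JSON-LD block on the page sits in a script tag of this type.
for block in soup.find_all("script", type="application/ld+json"):
    print(block.string)  # paste this output into Prompt #3
```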

5. Prompt #4 — Crawl Error Explainer

One of the highest-value uses of Claude in agency work is translating technical findings into client-friendly language. This prompt takes a raw crawl error — the kind that means nothing to a marketing director — and turns it into a clear, professional explanation with context and next steps.

Prompt #4 · Crawl Error to Client Explanation · Client Comms

You are a senior account manager at a technical SEO agency. Your client is [CLIENT NAME], a [INDUSTRY] company. Your contact is a [ROLE: e.g. Marketing Director / Head of Digital] who is not technical but understands business impact.

Convert the following technical SEO issue into a client-ready explanation:

ISSUE: [TECHNICAL ISSUE, e.g. "312 pages returning 404 status codes, including 47 pages with external backlinks"]

Write a short explanation (150–200 words) that:
- Explains what the issue is in plain English (no jargon)
- Explains why it matters to their business (traffic, rankings, user experience)
- States clearly what needs to happen to fix it
- Gives a rough sense of urgency (e.g. "this should be resolved within 2 weeks")

Do NOT use phrases like: "crawl budget", "HTTP status code", "canonical", "hreflang", "robots.txt" — unless you immediately explain them in plain English.

End with a one-line "What we need from you:" if the fix requires client action, or "No action needed from you — we'll handle this." if it's handled by our team.

6. Prompt #5 — Content Gap Analyser

This prompt takes two keyword clusters — typically your client's current ranking topics versus a competitor's — and identifies gaps, overlaps, and quick-win opportunities. It's designed for strategy sessions and quarterly reviews.

Prompt #5 · Content Gap Analyser · Content Strategy

You are a senior SEO content strategist. Analyse the following two keyword sets for a [INDUSTRY] website.

SET A — Our client's current ranking keywords:
[PASTE KEYWORD LIST A — one per line, or comma-separated]

SET B — Competitor's ranking keywords (or target keyword universe):
[PASTE KEYWORD LIST B]

Perform a content gap analysis and respond with four sections:
1. OVERLAP — Keywords appearing in both sets (these are contested topics; note any where client ranks significantly lower)
2. CLIENT GAPS — Keywords in Set B but not Set A (prioritise by likely search volume / commercial value based on your knowledge)
3. CLIENT STRENGTHS — Keywords in Set A but not Set B (where our client has an advantage)
4. QUICK WINS — From the gaps, identify 5–8 keywords where creating or improving content would likely have the fastest impact, and briefly explain why

Format sections 1–3 as concise bullet lists. Format section 4 as a table:
| Keyword | Why it's a quick win | Suggested content action |

Where to get keyword sets: Export ranking keywords from Google Search Console (Performance → Queries), Ahrefs, or Semrush. For competitor keywords, use Ahrefs' "Organic Keywords" report for their domain. Paste the keyword column only — you don't need the metrics.
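The overlap and gap arithmetic in sections 1–3 is plain set logic, so you can sanity-check Claude's output (or pre-compute the sets when the lists are very large) in a few lines of Python. A sketch, assuming two text files with one keyword per line; the filenames are illustrative:

```python
def load_keywords(path):
    with open(path) as f:
        # Normalise case and whitespace so "SEO Audit" and "seo audit" match.
        return {line.strip().lower() for line in f if line.strip()}

client = load_keywords("client_keywords.txt")
competitor = load_keywords("competitor_keywords.txt")

print("Overlap:", sorted(client & competitor))           # contested topics
print("Client gaps:", sorted(competitor - client))        # Set B minus Set A
print("Client strengths:", sorted(client - competitor))   # Set A minus Set B
```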

7. Prompt #6 — Search Console Performance Interpreter

Google Search Console data is rich but time-consuming to analyse manually. This prompt takes a pasted export and surfaces the most important insights — declining queries, cannibalisation signals, and CTR optimisation opportunities.

Prompt #6 · Search Console Performance Interpreter · Data Analysis

You are a senior SEO analyst. Analyse the following Google Search Console performance data for [DOMAIN].

The data covers: [DATE RANGE, e.g. "last 3 months vs previous 3 months"]
Industry: [INDUSTRY]

// Export from GSC: Performance → Queries → Export. Paste the CSV data below.
// Include columns: Query, Clicks, Impressions, CTR, Position, (and comparison columns if available)

[PASTE GSC DATA HERE]

Analyse this data and identify:
1. TOP DECLINING QUERIES — Queries with significant click or impression drops. Flag the 5–10 most concerning with possible causes.
2. CTR OPPORTUNITIES — Queries with high impressions but low CTR (below 3% for positions 1–5, below 1% for positions 6–10). These are title tag / meta description optimisation candidates.
3. CANNIBALISATION SIGNALS — Multiple queries where intent overlap suggests two pages may be competing against each other.
4. QUICK WINS — Queries ranking in positions 8–15 with decent impressions that could reach page 1 with targeted optimisation.

Format each section as a table where appropriate. End with a "Recommended Focus Areas" paragraph (3–4 sentences) summarising the single most impactful area to address first.
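The CTR thresholds in section 2 are mechanical enough to pre-screen yourself when the export is too large to paste. A pandas sketch, assuming the standard GSC Queries export columns ("Top queries", "Clicks", "Impressions", "CTR", "Position") and an arbitrary 500-impression floor:

```python
import pandas as pd

# Standard GSC Queries export; CTR arrives as a string like "2.4%".
gsc = pd.read_csv("Queries.csv")
gsc["CTR"] = gsc["CTR"].str.rstrip("%").astype(float)

# Thresholds from the prompt: CTR below 3% at positions 1-5,
# below 1% at positions 6-10. The 500-impression floor is an assumption.
low_ctr = (
    ((gsc["Position"] <= 5) & (gsc["CTR"] < 3.0))
    | (gsc["Position"].between(6, 10) & (gsc["CTR"] < 1.0))
)
opportunities = gsc[low_ctr & (gsc["Impressions"] >= 500)]

print(opportunities.sort_values("Impressions", ascending=False))
```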

8. Iterating Within a Conversation

One of Claude's most powerful features is that it remembers everything within a conversation (up to its context limit). After running any of the prompts above, you can follow up with refinement requests without repeating the original data. This is called iterative prompting.

After running a given prompt, you can follow up with, for example:

  • Title Tag Reviewer → "Rewrite the bottom 5 titles again, this time making them more question-based to target featured snippets."
  • Screaming Frog Prioritiser → "Focus only on the Critical items. Write a developer brief for each one with specific acceptance criteria."
  • Schema Reviewer → "Now add FAQ schema to the corrected version using these Q&As: [paste Q&As]"
  • Content Gap Analyser → "For the top 3 quick wins, write a one-paragraph content brief outline."
  • GSC Interpreter → "Turn the Recommended Focus Areas into a slide-ready bullet point summary for a client presentation."

Conversation context tip: Claude remembers all the data you pasted earlier in the conversation. You never need to paste it again. Just refer to it naturally: "From the data I shared earlier…" or "Looking at the title tags above…"
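The same principle applies if anyone on your team runs these prompts through the Anthropic API rather than the app: the conversation is just a growing messages list, so the pasted data is sent once and every follow-up rides on top of it. A minimal sketch — the model name and prompt variable are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder: Prompt #1 text plus your pasted title tag data.
TITLE_TAG_PROMPT = "...prompt and data go here..."

messages = [{"role": "user", "content": TITLE_TAG_PROMPT}]

first = client.messages.create(
    model="claude-sonnet-4-5",  # substitute whatever model your team uses
    max_tokens=2048,
    messages=messages,
)

# Append the reply, then the follow-up. The original data travels with
# the message history, so it is sent once and never pasted again.
messages.append({"role": "assistant", "content": first.content[0].text})
messages.append({
    "role": "user",
    "content": "Rewrite the bottom 5 titles again, more question-based.",
})

second = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=2048,
    messages=messages,
)
print(second.content[0].text)
```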

9. Practice Exercises

✏️ Exercise 1 — Run the Title Tag Reviewer

Use Prompt #1 with real data from one of your clients:

  1. Export 10–20 title tags from Screaming Frog or pull them manually
  2. Fill in the [INDUSTRY] and [TARGET AUDIENCE] placeholders
  3. Run the prompt and review the output table
  4. Follow up: "Rewrite the three lowest-scoring titles again, this time emphasising [specific value prop]"
  5. Note: does the output quality change if you remove the role definition at the start?

✏️ Exercise 2 — Build one custom library prompt

Identify a repetitive task in your own workflow that isn't covered above, and build a prompt for it:

  1. Pick a task you do at least once a week (e.g. hreflang audits, redirect mapping, outreach email drafts)
  2. Write a first-draft prompt using the RRCF structure from Tutorial 1
  3. Test it 3 times with different inputs and note where it fails or produces weak output
  4. Refine it based on what you observe — add constraints, better format instructions, or examples
  5. Share it with a colleague and ask them to test it — can they use it without asking you questions?

✏️ Exercise 3 — Store the library in a Project

Set up a shared "Prompt Library" Claude Project for your team:

  1. Create a new Project in Claude called "Agency Prompt Library"
  2. In the Project Instructions, paste all 6 prompts from this tutorial as a reference document
  3. Add a note at the top explaining what each prompt is for and when to use it
  4. Invite your team members to the Project
  5. Run a 15-minute team session where everyone tests one prompt they haven't used before

10. Summary

📚 6 Ready Prompts: title tags, crawl audits, schema, client comms, content gaps, GSC analysis
🔁 Iterate, Don't Repeat: follow up within the same conversation; Claude remembers your data
🤝 Share the Library: store prompts in a shared Project so the whole team benefits

Key takeaway: These 6 prompts should save your team several hours per week collectively. The real multiplier comes when the whole team is using the same refined templates — consistent quality, faster delivery, and a shared standard for what "good output" looks like.