Building an Autonomous Content Factory with n8n and Claude 3.5 Sonnet
A comprehensive guide to scaling your content operations using n8n workflows and the reasoning capabilities of Claude 3.5 Sonnet, from research to automated publishing.
What You’ll Build
In this guide you will construct a fully autonomous content production pipeline that takes a topic from raw research all the way to a published article — without manual intervention. The system combines n8n as the orchestration backbone and Claude 3.5 Sonnet as the reasoning engine.
By the end, your pipeline will:
- Monitor RSS feeds, Google News, and trending-topic APIs for fresh material.
- Research and extract key facts from collected sources.
- Draft long-form articles using a custom style guide.
- Self-critique each draft for logical flow, readability, and AI-sounding phrases.
- Gate every piece through automated quality checks.
- Publish finished articles to Ghost, WordPress, or any headless CMS.
- Run on a schedule with error handling and parallel processing.
The entire workflow lives inside a single n8n instance and can be exported, version-controlled, and shared with your team.
Prerequisites
Before you begin, make sure you have the following ready:
| Requirement | Details | Where to Get It |
|---|---|---|
| n8n (v1.30+) | Self-hosted via Docker or n8n Cloud | n8n.io |
| Claude API key | Anthropic API access with claude-3-5-sonnet model | console.anthropic.com |
| Docker & Docker Compose | For running n8n locally | docker.com |
| CMS account | Ghost, WordPress, or any CMS with a REST API | Your preferred platform |
| RSS feed URLs | 3-5 feeds in your niche for the research layer | Feedly, Inoreader, or direct URLs |
| Node.js 18+ (optional) | Only if you plan to write custom n8n nodes | nodejs.org |
Architecture Overview
The content factory operates in three distinct layers. Each layer is isolated so you can swap components without breaking the rest of the pipeline.
┌─────────────────────────────────────────────────────────┐
│ LAYER 1 — RESEARCH │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌───────────────────┐ │
│ │ RSS Feed │ │ Google News │ │ Trending Topic API│ │
│ │ Reader │ │ Scraper │ │ (Reddit / X) │ │
│ └────┬─────┘ └──────┬───────┘ └────────┬──────────┘ │
│ └───────────┬────┴───────────────────┘ │
│ ┌────▼────┐ │
│ │ Merge & │ │
│ │De-dupe │ │
│ └────┬────┘ │
├───────────────────┼─────────────────────────────────────┤
│ │ LAYER 2 — AI WRITING CHAIN │
│ ┌────▼──────────┐ │
│ │ Fact Extraction│ (Claude call #1) │
│ └────┬──────────┘ │
│ ┌────▼──────────┐ │
│ │ Draft Writing │ (Claude call #2) │
│ └────┬──────────┘ │
│ ┌────▼──────────┐ │
│ │ Self-Critique │ (Claude call #3) │
│ └────┬──────────┘ │
├───────────────────┼─────────────────────────────────────┤
│ │ LAYER 3 — PUBLISH & DISTRIBUTE │
│ ┌────▼──────┐ │
│ │Quality Gate│ │
│ └────┬──────┘ │
│ ┌─────────┼─────────┐ │
│ ┌────▼───┐ ┌───▼────┐ ┌─▼──────┐ │
│ │ Ghost │ │WordPress│ │Headless│ │
│ │ CMS │ │ REST │ │ CMS │ │
│ └────────┘ └────────┘ └────────┘ │
└─────────────────────────────────────────────────────────┘
Step 1: Set Up n8n with Claude Integration
1.1 Launch n8n with Docker Compose
Create a docker-compose.yml file in your project directory:
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=your-secure-password
      - N8N_ENCRYPTION_KEY=your-random-encryption-key
      - GENERIC_TIMEZONE=America/New_York
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
Start the container:
docker compose up -d
Open http://localhost:5678 and log in with the credentials you set.
1.2 Add Claude API Credentials
- In n8n, go to Settings > Credentials > Add Credential.
- Select HTTP Header Auth.
- Set the header name to x-api-key and paste your Anthropic API key as the value.
- Name this credential Claude API and save.
Alternatively, if you are using the n8n Anthropic community node, install it via Settings > Community Nodes > Install and search for n8n-nodes-anthropic. Then create a credential of type Anthropic API directly.
1.3 Verify the Connection
Add an HTTP Request node with these settings to confirm the key works:
| Setting | Value |
|---|---|
| Method | POST |
| URL | https://api.anthropic.com/v1/messages |
| Authentication | Claude API (header auth) |
| Header | anthropic-version: 2023-06-01 |
| Header | content-type: application/json |
| Body (JSON) | See below |
{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 64,
"messages": [
{ "role": "user", "content": "Reply with OK if you can read this." }
]
}
Execute the node. If you receive a 200 response containing “OK”, the integration is ready.
Expected Result: n8n is running, the Claude API credential is saved, and a test call returns a successful response.
Step 2: Build the Research Layer
The research layer collects fresh material that Claude will later synthesize into articles.
2.1 RSS Feed Collection
Add an RSS Feed Read node:
- Feed URLs: Enter 3-5 RSS feeds relevant to your niche, one per line.
- Item Limit: Set to 10 per feed to avoid overwhelming the pipeline.
2.2 Google News via SerpAPI (Optional)
If you want broader coverage, add an HTTP Request node that calls the SerpAPI Google News endpoint:
{
"method": "GET",
"url": "https://serpapi.com/search.json",
"qs": {
"engine": "google_news",
"q": "{{ $json.topic }}",
"api_key": "YOUR_SERPAPI_KEY"
}
}
2.3 Trending Topics from Reddit
Add another HTTP Request node:
{
"method": "GET",
"url": "https://www.reddit.com/r/{{ $json.subreddit }}/top.json?t=day&limit=5",
"headers": {
"User-Agent": "n8n-content-factory/1.0"
}
}
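Reddit's listing response nests posts under data.children[].data, so it will not merge cleanly with RSS items as-is. Below is a sketch of a normalization step you could run in a Code node right after the Reddit request; the target field mapping and the helper name are assumptions, so adjust them to whatever shape your merge step expects.

```javascript
// Flattens Reddit's listing format into the { title, description, pubDate, link }
// shape the RSS items use. In an n8n Code node you would call it as:
//   return normalizeReddit($input.first().json);
function normalizeReddit(listing) {
  return (listing.data?.children ?? []).map(child => ({
    json: {
      title: child.data.title,
      description: child.data.selftext || '',
      pubDate: new Date(child.data.created_utc * 1000).toISOString(), // Unix seconds -> ISO
      link: 'https://www.reddit.com' + child.data.permalink,
    },
  }));
}

// Minimal mock of Reddit's response shape, for illustration only:
const sample = {
  data: {
    children: [
      {
        data: {
          title: 'Example post',
          selftext: 'Body',
          created_utc: 1700000000,
          permalink: '/r/test/comments/abc/example/',
        },
      },
    ],
  },
};
const items = normalizeReddit(sample);
```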
2.4 Merge and De-duplicate
Use a Merge node (mode: Append) to combine all sources. Then add a Code node to de-duplicate by normalized title:
const seen = new Set();
const unique = [];
for (const item of $input.all()) {
  const title = item.json.title?.toLowerCase().trim();
  if (title && !seen.has(title)) {
    seen.add(title);
    unique.push(item);
  }
}
return unique;
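The Set-based check above only catches exact duplicates after normalization. When the same story arrives under slightly different headlines, a word-overlap comparison catches more. Here is a rough sketch; the 0.6 Jaccard threshold is a starting guess to tune against your feeds, not a tested value.

```javascript
// Word-overlap (Jaccard) similarity between two titles, 0..1.
function jaccard(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  const intersection = [...wordsA].filter(w => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Keeps the first of any pair of titles whose similarity meets the threshold.
function dedupe(titles, threshold = 0.6) {
  const kept = [];
  for (const title of titles) {
    if (!kept.some(k => jaccard(k, title) >= threshold)) kept.push(title);
  }
  return kept;
}

const result = dedupe([
  'Claude 3.5 Sonnet released by Anthropic',
  'Anthropic releases Claude 3.5 Sonnet', // near-duplicate, dropped
  'n8n ships version 1.40',
]);
```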
2.5 Topic Scoring
Add a Code node that ranks topics by freshness so only the top candidates proceed:
const scored = $input.all().map(item => {
  const hoursOld = (Date.now() - new Date(item.json.pubDate).getTime()) / 3600000;
  const freshnessScore = Math.max(0, 100 - hoursOld * 2);
  return {
    ...item,
    json: {
      ...item.json,
      score: freshnessScore
    }
  };
});
scored.sort((a, b) => b.json.score - a.json.score);
return scored.slice(0, 3);
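If you want relevance in the mix alongside freshness, one option is blending in a keyword-hit term. The keyword list and the 60/40 weights below are placeholders for illustration, not recommendations:

```javascript
// Placeholder niche keywords -- swap in terms from your own vertical.
const NICHE_KEYWORDS = ['automation', 'ai', 'workflow', 'n8n'];

function scoreTopic(item, now = Date.now()) {
  const hoursOld = (now - new Date(item.pubDate).getTime()) / 3600000;
  const freshness = Math.max(0, 100 - hoursOld * 2);
  const text = `${item.title} ${item.description ?? ''}`.toLowerCase();
  const hits = NICHE_KEYWORDS.filter(k => text.includes(k)).length;
  const relevance = (hits / NICHE_KEYWORDS.length) * 100;
  return 0.6 * freshness + 0.4 * relevance; // weights are a guess; tune them
}

const score = scoreTopic({
  title: 'New n8n AI workflow tricks',
  description: 'automation tips',
  pubDate: new Date().toISOString(),
});
```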
Expected Result: The workflow collects items from multiple sources, removes duplicates, scores them, and outputs the top 3 topics as structured JSON objects ready for the writing chain.
Step 3: Implement the AI Writing Chain
This is the core of the factory. Each topic passes through three sequential Claude calls, each with a dedicated purpose.
3.1 Claude Call #1 — Fact Extraction
Add an HTTP Request node named Fact Extraction:
{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": "You are a research analyst. Given the following source material, extract:\n1. 5-7 key facts with data points or statistics.\n2. Any named entities (people, companies, products).\n3. A one-sentence summary of the core news angle.\n\nSource material:\nTitle: {{ $json.title }}\nDescription: {{ $json.description }}\nContent: {{ $json.content }}\n\nRespond in JSON format with keys: facts (array), entities (array), angle (string)."
}
]
}
Parse the response with a Code node:
// The HTTP Request node returns parsed JSON, so read the object directly
// rather than JSON.parse-ing a body string.
const response = $input.first().json;
const content = response.content[0].text;
const parsed = JSON.parse(content); // the prompt asked Claude to respond in JSON
return [{ json: { ...response, research: parsed } }];
3.2 Claude Call #2 — Draft Writing
Add a second HTTP Request node named Draft Writer:
{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 4096,
"messages": [
{
"role": "user",
"content": "You are a senior content writer. Write a 600-800 word blog post using the research below.\n\nStyle guide:\n- Tone: Conversational but authoritative.\n- Structure: Hook → Context → 3 key points → Takeaway.\n- Use short paragraphs (2-3 sentences max).\n- Include at least 2 subheadings.\n- Do not use filler phrases like 'in today's world' or 'it's worth noting'.\n\nResearch:\nAngle: {{ $json.research.angle }}\nFacts: {{ JSON.stringify($json.research.facts) }}\nEntities: {{ JSON.stringify($json.research.entities) }}\n\nReturn the article in Markdown format."
}
]
}
3.3 Claude Call #3 — Self-Critique and Revision
This pass catches AI-sounding patterns and tightens the prose:
{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 4096,
"messages": [
{
"role": "user",
"content": "You are a ruthless editor. Review the following draft and:\n1. Remove any phrases that sound like AI-generated filler.\n2. Ensure every claim references a specific fact from the research.\n3. Tighten sentences — cut any sentence that does not add new information.\n4. Fix any logical gaps between paragraphs.\n5. Verify the hook grabs attention within the first 15 words.\n\nDraft:\n{{ $json.draft }}\n\nOriginal research facts:\n{{ JSON.stringify($json.research.facts) }}\n\nReturn the revised article in Markdown. After the article, add a JSON block with keys: changes_made (array of strings), readability_estimate (string: 'easy', 'medium', 'hard'), confidence_score (number 1-10)."
}
]
}
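Because this call returns Markdown followed by a JSON block, the downstream Code node has to split the two. Here is a sketch of one way to do it, assuming the JSON object comes last in the response; if your articles routinely contain braces themselves, it is safer to ask Claude for a distinctive delimiter instead.

```javascript
// Splits the critique response into the revised article and the edit metadata.
// Assumes Claude appends the JSON object last, possibly inside a ```json fence.
function splitCritique(text) {
  // Greedy match from the first '{' to the last '}' keeps nested braces inside.
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) return { article: text.trim(), meta: null };
  let meta = null;
  try { meta = JSON.parse(match[0]); } catch (e) { /* leave meta null */ }
  const article = text
    .slice(0, match.index)
    .replace(/```json\s*$/m, '') // drop an opening fence left behind
    .trim();
  return { article, meta };
}

// Example with a mock response:
const sample = '# Revised Title\n\nBody text.\n\n```json\n' +
  '{"changes_made": ["cut filler"], "readability_estimate": "easy", "confidence_score": 8}\n```';
const { article, meta } = splitCritique(sample);
```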
3.4 Wire the Chain in n8n
Connect these three HTTP Request nodes sequentially. Between each pair, insert a Code node that extracts the content[0].text from the Claude response and merges it into the running JSON payload. The data shape at the end of the chain should look like:
{
"title": "Original topic title",
"research": { "facts": [...], "entities": [...], "angle": "..." },
"draft": "First draft markdown...",
"final_article": "Revised markdown...",
"edit_meta": {
"changes_made": ["Removed 3 filler phrases", "Added stat to paragraph 2"],
"readability_estimate": "easy",
"confidence_score": 8
}
}
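The glue Code node described above can be sketched as a small pure function. In n8n you would feed it the Claude response from `$input.first().json` and the running payload, with the field name set to whichever key the next step expects (`draft` after call #2, `final_article` after call #3):

```javascript
// Merges the text of a Claude response into the running payload under the
// given key. Throws early if the response shape is not what we expect, so
// failures surface in n8n's execution log instead of downstream.
function mergeClaudeText(previousJson, claudeResponse, fieldName) {
  const text = claudeResponse.content?.[0]?.text;
  if (typeof text !== 'string') {
    throw new Error('Unexpected Claude response shape: ' + JSON.stringify(claudeResponse));
  }
  return { ...previousJson, [fieldName]: text };
}

// Example with mock data:
const merged = mergeClaudeText(
  { title: 'Topic', research: { angle: '...' } },
  { content: [{ type: 'text', text: 'First draft markdown...' }] },
  'draft'
);
```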
Expected Result: Each topic enters the chain as raw source data and exits as a polished Markdown article with edit metadata. You can inspect every intermediate step in n8n’s execution log.
Step 4: Add Quality Gates
Before anything gets published, every article must pass automated checks.
4.1 Readability Scoring
Add a Code node that calculates a simple Flesch-Kincaid approximation:
const text = $input.first().json.final_article;
const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0);
const words = text.split(/\s+/).filter(w => w.length > 0);
const syllables = words.reduce((count, word) => {
  // Parentheses around the fallback matter: `count + n || 1` would evaluate
  // as `(count + n) || 1` and corrupt the running total.
  const groups = word.toLowerCase()
    .replace(/(?:[^laeiouy]es|ed|[^laeiouy]e)$/, '')
    .match(/[aeiouy]{1,2}/g);
  return count + (groups ? groups.length : 1);
}, 0);
const avgWordsPerSentence = words.length / sentences.length;
const avgSyllablesPerWord = syllables / words.length;
const fleschScore = 206.835 - (1.015 * avgWordsPerSentence) - (84.6 * avgSyllablesPerWord);
return [{
  json: {
    ...$input.first().json,
    quality: {
      flesch_score: Math.round(fleschScore * 10) / 10,
      word_count: words.length,
      sentence_count: sentences.length,
      pass: fleschScore >= 50 && words.length >= 400
    }
  }
}];
4.2 Plagiarism Check (Conceptual)
For production systems, add an HTTP Request to a plagiarism API such as Copyscape or Originality.ai:
{
"method": "POST",
"url": "https://api.originality.ai/api/v1/scan/ai",
"headers": {
"Authorization": "Bearer YOUR_ORIGINALITY_KEY"
},
"body": {
"content": "{{ $json.final_article }}"
}
}
4.3 Gate Decision
Add an IF node after the quality checks:
- Condition: {{ $json.quality.pass }} equals true AND (if using the plagiarism check) originality score > 0.7.
- True branch: proceeds to publishing.
- False branch: sends a Slack or email notification for manual review.
For the failure branch, add an HTTP Request node that posts to a Slack webhook:
{
"method": "POST",
"url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
"body": {
"text": "Content factory: Article '{{ $json.title }}' failed quality gate. Flesch score: {{ $json.quality.flesch_score }}, Word count: {{ $json.quality.word_count }}. Please review manually."
}
}
Expected Result: Articles that score above the readability threshold and pass plagiarism checks proceed to publishing. Articles that fail are flagged for human review via Slack notification.
Step 5: Multi-Channel Publishing
5.1 Publish to Ghost CMS
Add an HTTP Request node:
{
"method": "POST",
"url": "https://your-ghost-site.com/ghost/api/admin/posts/",
"headers": {
"Authorization": "Ghost {{ $json.ghost_admin_token }}",
"Content-Type": "application/json"
},
"body": {
"posts": [
{
"title": "{{ $json.title }}",
"markdown": "{{ $json.final_article }}",
"status": "draft",
"tags": [{ "name": "auto-generated" }]
}
]
}
}
Note: Ghost Admin API tokens require JWT generation. You can handle this in a preceding Code node using the jsonwebtoken approach or use the dedicated Ghost Admin node if available in your n8n version.
5.2 Publish to WordPress
{
"method": "POST",
"url": "https://your-wordpress-site.com/wp-json/wp/v2/posts",
"headers": {
"Authorization": "Basic {{ Buffer.from('username:app-password').toString('base64') }}"
},
"body": {
"title": "{{ $json.title }}",
"content": "{{ $json.final_article }}",
"status": "draft"
}
}
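One caveat with the header above: Node's `Buffer` may not be available inside n8n expressions, depending on your version and settings. A safer pattern is to precompute the header in a preceding Code node and reference it as `{{ $json.authHeader }}`. The username and application password below are placeholders:

```javascript
// Precomputes the WordPress Basic auth header so the HTTP Request node
// only needs a plain expression, not Buffer access.
function basicAuthHeader(user, appPassword) {
  return 'Basic ' + Buffer.from(`${user}:${appPassword}`).toString('base64');
}

// WordPress application passwords contain spaces -- keep them as issued.
const authHeader = basicAuthHeader('username', 'abcd efgh ijkl mnop');
// In a Code node: return [{ json: { ...$input.first().json, authHeader } }];
```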
5.3 Publish to a Headless CMS (Strapi Example)
{
"method": "POST",
"url": "https://your-strapi.com/api/articles",
"headers": {
"Authorization": "Bearer YOUR_STRAPI_TOKEN",
"Content-Type": "application/json"
},
"body": {
"data": {
"title": "{{ $json.title }}",
"body": "{{ $json.final_article }}",
"publishedAt": null
}
}
}
All three examples publish as draft by default. This gives you a final human review checkpoint before going live, which is recommended even with quality gates in place.
Expected Result: Finished articles are pushed as drafts to your CMS of choice. You can see them in your CMS dashboard ready for a final review and one-click publish.
Step 6: Schedule and Scale
6.1 Add a Cron Trigger
Replace the manual trigger at the start of the workflow with a Schedule Trigger node:
| Setting | Value |
|---|---|
| Trigger Mode | Cron |
| Cron Expression | 0 6 * * 1-5 |
| Timezone | Your local timezone |
This runs the pipeline every weekday at 6:00 AM.
6.2 Parallel Processing
To process multiple topics per run, add a Split In Batches node (labeled Loop Over Items in newer n8n versions) after the topic scoring step:
- Batch Size: 3 (matches your top-3 selection).
- Connect the batch output to the AI Writing Chain.
Note that Split In Batches loops through items sequentially within a single execution; it does not spawn parallel threads. For true parallelism, move the writing chain into a sub-workflow and call it with an Execute Workflow node with waiting for completion disabled, or run n8n in queue mode with multiple workers.
6.3 Error Handling Workflow
Create a separate workflow named Content Factory - Error Handler and link it as the Error Workflow in your main workflow settings.
In the error handler, add:
- A Code node that extracts the error message and the failing node name.
- An HTTP Request node to send a Slack alert:
{
"method": "POST",
"url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
"body": {
"text": "Content Factory ERROR in node '{{ $json.failedNode }}': {{ $json.errorMessage }}"
}
}
- An IF node that checks if the error is a Claude API rate limit (status 429). If yes, add a Wait node (5 minutes) and then use the Execute Workflow node to retry the main workflow.
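The wait duration does not have to be a fixed 5 minutes. A small helper in the error handler's Code node can honor the retry-after header when the API sent one and fall back to exponential backoff otherwise. The field names here (`statusCode`, `headers`, `attempt`) are assumptions about what your error trigger passes through; adjust them to the actual error payload:

```javascript
// Decides how many seconds the Wait node should pause before retrying.
function retryDelaySeconds(error, attempt = 1) {
  const retryAfter = Number(error.headers?.['retry-after']);
  if (error.statusCode === 429 && Number.isFinite(retryAfter) && retryAfter > 0) {
    return retryAfter; // the API told us exactly how long to wait
  }
  // Exponential backoff capped at 5 minutes: 30s, 60s, 120s, 240s, 300s...
  return Math.min(300, 30 * 2 ** (attempt - 1));
}

const delay = retryDelaySeconds({ statusCode: 429, headers: { 'retry-after': '42' } });
```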
6.4 Execution Monitoring
Enable execution logging in n8n settings:
# Add to docker-compose.yml environment
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
This lets you review every run’s input/output data for debugging and performance tuning.
Expected Result: The pipeline runs automatically on your chosen schedule, processes topics in parallel, and sends alerts for any failures. Rate-limit errors trigger an automatic retry after a cooldown period.
Cost Analysis
Here is an approximate cost breakdown for running this pipeline at moderate scale:
| Component | Monthly Cost | Notes |
|---|---|---|
| n8n (self-hosted) | $5-15 | VPS hosting (e.g., Hetzner, DigitalOcean) |
| n8n Cloud (alternative) | $24-50 | Managed hosting, no Docker needed |
| Claude API (3 calls/article) | ~$0.15/article | ~3K input + 5K output tokens per chain |
| Claude API (100 articles/month) | ~$15 | Sonnet pricing at current rates |
| SerpAPI (optional) | $0-50 | Free tier covers 100 searches/month |
| Originality.ai (optional) | $15-30 | Pay-per-scan plagiarism detection |
| CMS hosting | $0-29 | Ghost Pro or self-hosted WordPress |
| Total (self-hosted, no extras) | ~$20-30 | n8n VPS + Claude API only |
| Total (full stack) | ~$60-125 | All optional services included |
At 100 articles per month, the per-article cost ranges from $0.20 (minimal setup) to $1.25 (full stack with plagiarism checking). Compare this to a freelance writer at $50-200 per article.
FAQ
Q: Can I use Claude 3 Opus instead of Sonnet for higher quality?
A: Yes. Replace the model string with claude-3-opus-20240229. Expect roughly 5x higher API costs but potentially better output on complex topics. Many teams use Opus for the self-critique step only and Sonnet for the other two calls as a cost-quality balance.
Q: What happens if the Claude API is down?
A: The error handling workflow (Step 6.3) catches failures and retries after a cooldown. For extended outages, articles queue in n8n and process when the API returns. You can also add a fallback model by adding an IF node that checks the response status and routes to an alternative API.
Q: How do I customize the writing style?
A: Modify the writer prompt in Step 3.2. You can include sample paragraphs from your existing content, specify vocabulary preferences, or define a brand voice document that gets injected into every prompt. Store the style guide as an n8n variable for easy updates.
Q: Can I add human review before publishing?
A: Absolutely. Change the publishing step to create drafts (already the default in our examples) and add a Slack notification with an approval button. You can use n8n’s webhook node to listen for the approval and trigger the final publish.
Q: How do I handle rate limits on the Claude API?
A: The Anthropic API returns a 429 status code with a retry-after header. The error handler in Step 6.3 already accounts for this. For higher throughput, request a rate limit increase through the Anthropic console.
Q: Is this workflow exportable?
A: Yes. In n8n, go to Workflow > Export to download the full workflow as a JSON file. You can import it into any other n8n instance, share it with your team, or version-control it in Git.
Next Steps
Now that your content factory is operational, consider these enhancements:
- Add image generation: Insert a DALL-E or Stable Diffusion API call after the writing chain to auto-generate featured images for each article.
- Build a feedback loop: Track published article performance (page views, time on page) and feed that data back into the topic scoring algorithm to improve topic selection over time.
- Create content variants: Add a fourth Claude call that rewrites the article as a LinkedIn post, a tweet thread, and an email newsletter snippet for multi-channel distribution.
- Implement A/B headline testing: Generate 3 headline variants with Claude and use your CMS or email tool to test which performs best.
- Add translation: Append a Claude call that translates the finished article into other languages for international audiences.
- Set up a dashboard: Use n8n’s webhook node to push execution metrics to a Grafana dashboard for real-time monitoring of your content pipeline.
The modular architecture means each enhancement is a new node or sub-workflow — you never have to rewrite the core pipeline.
Related Manuals
Building a Multi-modal AI Agent on n8n (Voice, PDF, and Vision)
A comprehensive guide to constructing versatile n8n agents that can process voice notes via Deepgram, analyze PDFs with RAG, and 'see' images using the latest LLMs.
[Side-Hustle Guide] Building a Zero-Cost Automated Cash-Generating Site with n8n & DeepSeek
Stop manual posting. Learn how to use n8n and DeepSeek to create a 24/7 autonomous website that earns passive income for you.