AI Agentic Workflow for B2B Lead Generation on LinkedIn
A deep dive into building an autonomous AI agent to identify, research, and engage high-value B2B prospects on LinkedIn using Claude and n8n.
What You’ll Build
In this guide, you will build a fully automated B2B lead generation pipeline that runs on LinkedIn. Instead of manually searching for prospects, copying profile data, and crafting individual messages, your AI agent will handle the entire workflow autonomously.
By the end of this tutorial, you will have:
- Automated prospect discovery that pulls LinkedIn profiles matching your Ideal Customer Profile (ICP)
- AI-powered company research using Claude to analyze each prospect’s background, recent activity, and company context
- Personalized outreach messages generated by Claude that reference specific details about each prospect
- CRM synchronization that pushes qualified leads and interaction data to your sales pipeline
- Analytics tracking to measure response rates and continuously optimize your messaging
This system typically achieves 3-5x higher response rates compared to generic outreach because every message demonstrates genuine research into the prospect’s situation.
Prerequisites
Before starting, make sure you have the following tools and accounts ready:
| Tool | Purpose | Required |
|---|---|---|
| n8n (self-hosted or cloud) | Workflow automation engine | Yes |
| Claude API Key | AI-powered research and message generation | Yes |
| LinkedIn Account | Prospect discovery and outreach | Yes |
| LinkedIn Sales Navigator | Advanced search filters and lead lists | Recommended |
| HubSpot or Airtable | CRM for tracking leads and interactions | Optional |
| Proxycurl API (or similar) | LinkedIn profile data extraction | Recommended |
Make sure your n8n instance is running and accessible. If you are self-hosting, confirm you can reach the n8n editor at your configured URL. You will also need your Claude API key from the Anthropic Console at console.anthropic.com.
Architecture Overview
The system follows a linear pipeline where each stage enriches the data before passing it downstream:
LinkedIn Search → Profile Extraction → Claude AI (Research & Scoring)
→ Personalized Message Generation → Outreach Queue → CRM Sync
Data flow in detail:
- A scheduled trigger fires the workflow daily (or on your preferred cadence)
- LinkedIn search results are fetched based on your ICP filters
- Each profile is enriched with company data, recent posts, and job details
- Claude analyzes the enriched data to score relevance and extract conversation hooks
- Claude generates a personalized connection request or InMail for each qualified prospect
- Messages enter a rate-limited queue to stay within LinkedIn’s daily limits
- Lead data and message history sync to your CRM
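Before building it node by node, it can help to see the whole flow as code. Below is a minimal sketch in plain JavaScript with each stage stubbed out; every function name and return value here is illustrative only, standing in for the n8n nodes built in the steps that follow:

```javascript
// Illustrative end-to-end sketch; each stage is a stub standing in
// for the n8n nodes built in the steps below.
const searchLinkedIn = async (filters) => [
  { url: 'https://linkedin.com/in/example', ...filters } // discovery
];
const enrichProfile = async (p) => ({ ...p, company: 'ExampleCo' });   // enrichment
const researchWithClaude = async (p) => ({ ...p, relevanceScore: 8 }); // research & scoring
const generateMessage = async (p) => ({ ...p, message: 'Hi!' });       // personalization

async function runPipeline(icpFilters) {
  const profiles = await searchLinkedIn(icpFilters);
  const enriched = await Promise.all(profiles.map(enrichProfile));
  const scored = await Promise.all(enriched.map(researchWithClaude));
  const qualified = scored.filter(p => p.relevanceScore >= 7); // same threshold as Step 2
  return Promise.all(qualified.map(generateMessage)); // queued and synced in Steps 4-5
}
```

Each arrow in the diagram above corresponds to one of these stages; the rest of the guide implements them as concrete n8n nodes.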
Step 1: Set Up LinkedIn Data Access
The foundation of this pipeline is reliable access to LinkedIn profile data. There are two primary approaches.
Option A: LinkedIn Sales Navigator + Proxycurl API
Sales Navigator provides the most powerful search filters. Combine it with Proxycurl (or a similar enrichment API) to extract structured profile data.
In n8n, create a new workflow and add a Schedule Trigger node:
{
"nodes": [
{
"name": "Daily Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"parameters": {
"rule": {
"interval": [
{
"field": "hours",
"hoursInterval": 24,
"triggerAtHour": 9
}
]
}
},
"position": [250, 300]
}
]
}
Next, add an HTTP Request node to query the Proxycurl API. This node fetches structured data for LinkedIn profile URLs you have collected from Sales Navigator:
{
"name": "Fetch LinkedIn Profile",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"method": "GET",
"url": "https://nubela.co/proxycurl/api/v2/linkedin",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"sendQuery": true,
"queryParameters": {
"parameters": [
{
"name": "linkedin_profile_url",
"value": "={{ $json.profileUrl }}"
},
{
"name": "use_cache",
"value": "if-recent"
},
{
"name": "skills",
"value": "include"
},
{
"name": "inferred_salary",
"value": "include"
}
]
},
"options": {
"timeout": 30000
}
}
}
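Enrichment APIs occasionally return sparse profiles, so it is worth filtering out records that lack the fields the research prompt needs before spending Claude tokens on them. A small Code node along these lines would work (a sketch; adjust the field names to whatever your enrichment API actually returns):

```javascript
// Skip profiles missing the minimum fields needed for research,
// so no API tokens are spent on empty records.
function hasResearchEssentials(profile) {
  return Boolean(
    profile &&
    profile.full_name &&
    profile.occupation &&
    (profile.summary || (profile.experiences || []).length > 0)
  );
}
```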
Option B: LinkedIn Search via n8n HTTP Request
If you do not have Sales Navigator, you can use LinkedIn’s public search combined with an enrichment service. Add a Code node in n8n to define your ICP search parameters:
// Define your Ideal Customer Profile search criteria
const searchParams = {
keywords: "CTO OR VP Engineering",
location: "San Francisco Bay Area",
industry: "Software Development",
companySize: "51-200",
connectionDegree: "2nd"
};
// Build the search URL
const searchQuery = encodeURIComponent(
`${searchParams.keywords} ${searchParams.location}`
);
return [
{
json: {
searchQuery,
filters: searchParams,
maxResults: 25,
timestamp: new Date().toISOString()
}
}
];
Store the collected profile URLs in an n8n Spreadsheet File node or a Google Sheet for batch processing.
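To avoid contacting the same person twice across runs, deduplicate the incoming URLs against the ones you have already processed. A sketch of the logic for a Code node (here `processedUrls` is a placeholder for data you would load from your sheet or CRM):

```javascript
// Filter out profile URLs that were already processed, plus any
// duplicates within the same batch. Trailing slashes and letter
// case are normalized so near-identical URLs match.
function dedupeProfiles(incoming, processedUrls) {
  const seen = new Set(processedUrls.map(u => u.toLowerCase().replace(/\/$/, '')));
  return incoming.filter(url => {
    const key = url.toLowerCase().replace(/\/$/, '');
    if (seen.has(key)) return false;
    seen.add(key); // also drops duplicates within the same batch
    return true;
  });
}
```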
Expected Result: You now have a mechanism to feed LinkedIn profile URLs into your pipeline on a scheduled basis. Each run should produce 15-25 new prospect profiles ready for research.
Step 2: Build the Prospect Research Pipeline
With profile data flowing in, the next stage uses Claude to perform deep research on each prospect. This is where the system differentiates itself from basic automation.
Add a Code node to prepare the research prompt for Claude. This node structures the raw LinkedIn data into a format that Claude can analyze effectively:
const profile = $input.first().json;
const researchPrompt = `You are a B2B sales research analyst. Analyze this LinkedIn prospect and provide actionable insights for a personalized outreach message.
## Prospect Data
- Name: ${profile.full_name}
- Title: ${profile.occupation}
- Company: ${profile.company || 'Unknown'}
- Location: ${profile.city}, ${profile.state}
- Summary: ${profile.summary || 'Not available'}
- Experience: ${JSON.stringify(profile.experiences?.slice(0, 3) || [])}
- Recent Activity: ${JSON.stringify(profile.activities?.slice(0, 5) || [])}
- Skills: ${(profile.skills || []).join(', ')}
## Your Analysis Should Include:
1. **Company Stage**: What growth stage is their company in? (startup, scaling, enterprise)
2. **Likely Pain Points**: Based on their role and company, what challenges do they probably face?
3. **Conversation Hooks**: What specific details from their profile could start a genuine conversation?
4. **Relevance Score**: Rate 1-10 how well this prospect matches a B2B SaaS buyer profile
5. **Recommended Approach**: Should we use a direct pitch, thought leadership share, or mutual connection intro?
Respond with a single JSON object using exactly these keys: company_stage, likely_pain_points, conversation_hooks, relevance_score (a number from 1-10), recommended_approach. Output only the JSON, with no other text.`;
return [
{
json: {
prompt: researchPrompt,
profileData: profile,
prospectName: profile.full_name
}
}
];
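The downstream parsing and filtering nodes assume Claude returns an object shaped roughly like this (the values are illustrative):

```json
{
  "company_stage": "scaling",
  "likely_pain_points": ["engineering hiring velocity", "cloud cost visibility"],
  "conversation_hooks": ["recent post about a Kubernetes migration"],
  "relevance_score": 8,
  "recommended_approach": "thought leadership share"
}
```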
Now add an HTTP Request node to call the Claude API. Create an HTTP Header Auth credential whose header name is x-api-key and whose value is your Anthropic API key; n8n attaches it automatically, so the node itself only needs the version and content-type headers:
{
"name": "Claude Research Analysis",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"method": "POST",
"url": "https://api.anthropic.com/v1/messages",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "anthropic-version",
"value": "2023-06-01"
},
{
"name": "content-type",
"value": "application/json"
}
]
},
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'claude-sonnet-4-20250514', max_tokens: 1024, messages: [{ role: 'user', content: $json.prompt }] }) }}",
"options": {
"timeout": 60000
}
}
}
Add a Code node after the Claude response to parse the analysis and flag qualified prospects, then connect an IF node that only lets items with qualified set to true continue:
// Parse Claude's JSON response, stripping any markdown code fences
const raw = $input.first().json.content[0].text;
const response = JSON.parse(
raw.replace(/^```(?:json)?\s*/, '').replace(/\s*```$/, '')
);
// The Claude API response does not carry the original profile through,
// so pull it from the prompt-preparation Code node (adjust the node
// name to match your workflow)
const profileData = $('Prepare Research Prompt').first().json.profileData;
// Only proceed with prospects scoring 7 or higher
return [
{
json: {
...response,
profileData,
qualified: response.relevance_score >= 7
}
}
];
Expected Result: Each prospect now has a structured research dossier including pain points, conversation hooks, and a relevance score. Only prospects scoring 7 or above move to the message generation stage. Expect roughly 40-60% of profiles to qualify.
Step 3: AI-Powered Message Personalization
This step is the core differentiator. Claude generates a unique, personalized message for each qualified prospect using the research from Step 2.
Add another Code node to build the message generation prompt:
const research = $input.first().json;
const profile = research.profileData;
const messagePrompt = `You are an expert B2B sales copywriter. Write a LinkedIn connection request message for this prospect.
## Prospect Research
- Name: ${profile.full_name}
- Title: ${profile.occupation}
- Company: ${profile.company}
- Pain Points: ${JSON.stringify(research.likely_pain_points)}
- Conversation Hooks: ${JSON.stringify(research.conversation_hooks)}
- Recommended Approach: ${research.recommended_approach}
## Rules for the Message:
1. Maximum 300 characters (LinkedIn connection request limit)
2. Open with something specific about THEM, not about you or your product
3. Reference a concrete detail from their profile or company activity
4. End with a soft question, not a hard pitch
5. Sound human and conversational, never salesy or templated
6. Do NOT use phrases like "I noticed" or "I came across your profile"
7. Do NOT mention AI or automation
## Good Example:
"The serverless migration approach your team shared at re:Invent was sharp, especially the cost modeling piece. We've been tackling similar problems with mid-market SaaS teams. Worth comparing notes?"
## Bad Example:
"Hi [Name], I noticed your impressive profile and would love to connect. Our solution helps companies like yours increase revenue by 40%."
Write exactly ONE message. Return only the message text, no explanation.`;
return [
{
json: {
prompt: messagePrompt,
prospectName: profile.full_name,
profileUrl: profile.linkedin_url,
research: research
}
}
];
Send this prompt to Claude using the same HTTP Request pattern from Step 2. Then add a Code node to validate the output:
const message = $input.first().json.content[0].text.trim();
// The Claude response does not carry prospectName through, so read it
// from the prompt-building Code node (adjust the name to match yours)
const prospectName = $('Build Message Prompt').first().json.prospectName;
// Validate message length for LinkedIn connection request
if (message.length > 300) {
// Truncate gracefully at the last complete sentence within limit
const truncated = message.substring(0, 297);
const lastPeriod = truncated.lastIndexOf('.');
const lastQuestion = truncated.lastIndexOf('?');
const cutPoint = Math.max(lastPeriod, lastQuestion);
return [
{
json: {
prospectName,
message: cutPoint > 200 ? message.substring(0, cutPoint + 1) : message.substring(0, 297) + '...',
originalLength: message.length,
wasTruncated: true
}
}
];
}
return [
{
json: {
prospectName,
message,
originalLength: message.length,
wasTruncated: false
}
}
];
Expected Result: Each qualified prospect now has a unique, research-backed connection request message that fits within LinkedIn’s 300-character limit. Messages should reference specific details about the prospect rather than generic value propositions.
Step 4: Automated Outreach Workflow
Now connect all the pieces into a single n8n workflow with proper rate limiting and safety measures. LinkedIn enforces daily connection request limits, so respecting these is critical to avoid account restrictions.
Add a Code node to implement rate limiting:
// LinkedIn rate limiting configuration
const DAILY_CONNECTION_LIMIT = 20; // Stay well under LinkedIn's threshold
const MIN_DELAY_SECONDS = 45; // Minimum delay between actions
const MAX_DELAY_SECONDS = 180; // Maximum delay for human-like pacing
// Get today's send count from workflow static data
// (note: $getWorkflowStaticData persists only when the workflow is
// active; it is not saved during manual test executions)
const staticData = $getWorkflowStaticData('global');
const today = new Date().toISOString().split('T')[0];
if (staticData.lastRunDate !== today) {
staticData.lastRunDate = today;
staticData.dailySendCount = 0;
}
if (staticData.dailySendCount >= DAILY_CONNECTION_LIMIT) {
return []; // Stop processing, daily limit reached
}
// Calculate a random delay for human-like behavior
const delay = Math.floor(
Math.random() * (MAX_DELAY_SECONDS - MIN_DELAY_SECONDS) + MIN_DELAY_SECONDS
) * 1000;
staticData.dailySendCount += 1;
return [
{
json: {
...$input.first().json,
sendDelay: delay,
dailyCount: staticData.dailySendCount,
dailyLimit: DAILY_CONNECTION_LIMIT
}
}
];
Add a Wait node after the rate limiter to enforce the delay. The Wait node counts in seconds rather than milliseconds, so convert the value from the previous node:
{
"name": "Human-Like Delay",
"type": "n8n-nodes-base.wait",
"parameters": {
"resume": "timeInterval",
"amount": "={{ $json.sendDelay / 1000 }}",
"unit": "seconds"
}
}
Then add the actual outreach execution. If you are using a LinkedIn automation tool with an API, configure the HTTP Request:
{
"name": "Send Connection Request",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"method": "POST",
"url": "https://your-linkedin-automation-api.com/connect",
"sendBody": true,
"bodyParameters": {
"jsonBody": "={{ JSON.stringify({ profileUrl: $json.profileUrl, message: $json.message }) }}"
},
"options": {
"timeout": 30000
}
}
}
Add an Error Trigger node to handle failures gracefully:
// Log failed outreach attempts for review
const errorData = {
prospectName: $json.prospectName || 'Unknown',
profileUrl: $json.profileUrl || 'Unknown',
errorMessage: $json.error?.message || 'Unknown error',
timestamp: new Date().toISOString(),
action: 'connection_request_failed'
};
// Store in a dead letter queue for manual review
return [{ json: errorData }];
Safety measures to implement:
- Daily caps: Never exceed 20 connection requests per day (the code above enforces this)
- Weekend pausing: Add a schedule check to skip Saturdays and Sundays
- Duplicate detection: Check your CRM before sending to avoid contacting someone twice
- Blacklist support: Maintain a list of domains or people to never contact
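The weekend pause and blacklist checks can live in one more Code node placed before the rate limiter. A minimal sketch, assuming a `companyDomain` field on each item and a hand-maintained `BLACKLISTED_DOMAINS` list (both placeholders; in practice load the list from a sheet or CRM table):

```javascript
// Skip outreach on weekends and drop blacklisted prospects.
// BLACKLISTED_DOMAINS is a placeholder; load it from a sheet or
// CRM table in a real workflow.
const BLACKLISTED_DOMAINS = ['competitor.com', 'partner.com'];

function shouldContact(profile, now = new Date()) {
  const day = now.getUTCDay(); // 0 = Sunday, 6 = Saturday
  if (day === 0 || day === 6) return false;
  const domain = (profile.companyDomain || '').toLowerCase();
  return !BLACKLISTED_DOMAINS.includes(domain);
}
```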
Expected Result: Your workflow now executes outreach at a human-like pace with built-in rate limiting. The daily cap of 20 requests keeps you safely within LinkedIn’s guidelines while still generating meaningful pipeline volume.
Step 5: CRM Integration
Every qualified prospect and outreach attempt should sync to your CRM. This section covers integration with both HubSpot and Airtable.
HubSpot Integration
Use n8n’s built-in HubSpot node to create or update contacts:
{
"name": "Create HubSpot Contact",
"type": "n8n-nodes-base.hubspot",
"parameters": {
"resource": "contact",
"operation": "create",
"email": "={{ $json.profileData.email || '' }}",
"additionalFields": {
"firstName": "={{ $json.profileData.first_name }}",
"lastName": "={{ $json.profileData.last_name }}",
"company": "={{ $json.profileData.company }}",
"jobTitle": "={{ $json.profileData.occupation }}",
"city": "={{ $json.profileData.city }}",
"lifecycleStage": "lead",
"leadStatus": "NEW",
"customProperties": [
{
"name": "linkedin_url",
"value": "={{ $json.profileData.linkedin_url }}"
},
{
"name": "ai_relevance_score",
"value": "={{ $json.research.relevance_score }}"
},
{
"name": "outreach_message",
"value": "={{ $json.message }}"
},
{
"name": "ai_pain_points",
"value": "={{ JSON.stringify($json.research.likely_pain_points) }}"
}
]
}
}
}
Airtable Integration
If you prefer Airtable as a lightweight CRM, use the Airtable node:
{
"name": "Log to Airtable",
"type": "n8n-nodes-base.airtable",
"parameters": {
"operation": "create",
"application": "your_airtable_base_id",
"table": "Leads",
"fields": {
"Name": "={{ $json.profileData.full_name }}",
"Title": "={{ $json.profileData.occupation }}",
"Company": "={{ $json.profileData.company }}",
"LinkedIn URL": "={{ $json.profileData.linkedin_url }}",
"Relevance Score": "={{ $json.research.relevance_score }}",
"Pain Points": "={{ JSON.stringify($json.research.likely_pain_points) }}",
"Outreach Message": "={{ $json.message }}",
"Status": "Contacted",
"Date": "={{ new Date().toISOString().split('T')[0] }}"
}
}
}
Create a second workflow that checks for LinkedIn responses and updates the CRM status. Use a Schedule Trigger that runs every 4 hours to check for new messages and update the lead status from “Contacted” to “Replied” or “Accepted Connection.”
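The status update in that second workflow can be a simple mapping in a Code node. A sketch, assuming the reply-checking step emits events with an `eventType` field (a made-up name for illustration; adapt it to whatever your reply source actually produces):

```javascript
// Map LinkedIn events to the CRM status values used in this guide.
// `eventType` is an assumed field name produced by the reply-checking
// mechanism; unknown events leave the lead as "Contacted".
function mapEventToStatus(eventType) {
  switch (eventType) {
    case 'connection_accepted': return 'Accepted Connection';
    case 'message_received':    return 'Replied';
    case 'meeting_scheduled':   return 'Meeting Booked';
    default:                    return 'Contacted';
  }
}
```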
Expected Result: Your CRM now contains every qualified prospect with their AI research dossier, the personalized message sent, and the current status. Sales reps can pick up warm leads without duplicating research.
Step 6: Analytics & Optimization
Tracking performance is essential for improving your outreach over time. Build a feedback loop that uses response data to refine Claude’s messaging.
Add a Code node that runs weekly to compile metrics:
// Weekly analytics aggregation
const leads = $input.all().map(item => item.json);
const metrics = {
period: `${new Date(Date.now() - 7 * 86400000).toISOString().split('T')[0]} to ${new Date().toISOString().split('T')[0]}`,
totalProspected: leads.length,
totalContacted: leads.filter(l => l.status === 'Contacted').length,
connectionsAccepted: leads.filter(l => l.status === 'Accepted').length,
repliesReceived: leads.filter(l => l.status === 'Replied').length,
meetingsBooked: leads.filter(l => l.status === 'Meeting Booked').length,
acceptanceRate: 0,
replyRate: 0,
meetingRate: 0
};
const contacted = metrics.totalContacted || 1;
metrics.acceptanceRate = ((metrics.connectionsAccepted / contacted) * 100).toFixed(1) + '%';
metrics.replyRate = ((metrics.repliesReceived / contacted) * 100).toFixed(1) + '%';
metrics.meetingRate = ((metrics.meetingsBooked / contacted) * 100).toFixed(1) + '%';
// Identify top-performing message patterns
const repliedLeads = leads.filter(l => l.status === 'Replied');
const topMessages = repliedLeads.map(l => ({
message: l.outreachMessage,
prospectTitle: l.title,
industry: l.industry
}));
return [
{
json: {
metrics,
topPerformingMessages: topMessages.slice(0, 5),
timestamp: new Date().toISOString()
}
}
];
Feed these analytics back to Claude to improve future messages. Add an optimization prompt:
const analytics = $input.first().json;
const optimizationPrompt = `You are a B2B outreach optimization analyst. Based on the following performance data, suggest improvements to our LinkedIn outreach strategy.
## Weekly Performance
- Contacted: ${analytics.metrics.totalContacted}
- Acceptance Rate: ${analytics.metrics.acceptanceRate}
- Reply Rate: ${analytics.metrics.replyRate}
- Meeting Rate: ${analytics.metrics.meetingRate}
## Top Performing Messages (that got replies):
${JSON.stringify(analytics.topPerformingMessages, null, 2)}
## Provide:
1. What patterns do you see in successful messages?
2. Three specific suggestions to improve reply rates
3. Any ICP adjustments based on who is responding
4. An updated message template incorporating these learnings
Return as JSON with keys: patterns, suggestions, icp_adjustments, updated_template`;
return [{ json: { prompt: optimizationPrompt } }];
This creates an AI feedback loop: your outreach data informs Claude’s future message generation, which should improve response rates week over week.
Results & Metrics
After running this system for a typical 30-day period, you can expect metrics in these ranges:
| Metric | Manual Outreach | AI-Powered Pipeline |
|---|---|---|
| Prospects researched per day | 5-10 | 50-100 |
| Time per prospect | 15-20 minutes | ~30 seconds |
| Connection acceptance rate | 15-25% | 35-50% |
| Reply rate | 3-8% | 15-25% |
| Meetings booked per week | 1-2 | 5-10 |
| Total time investment | 3-4 hours/day | 30 min setup + monitoring |
The key performance driver is personalization quality. Because Claude analyzes each prospect individually and generates unique messages, recipients perceive these as genuine human outreach rather than automation.
FAQ
Q: Will LinkedIn detect and ban my account for using automation?
A: The rate limiting in Step 4 is designed to mimic human behavior. Keeping daily connection requests under 20 and adding random delays between actions significantly reduces detection risk. Even so, monitor your account for warning notices from LinkedIn and reduce volume immediately if any appear.
Q: How much does running this pipeline cost?
A: The primary costs are the Claude API (approximately $0.02-0.05 per prospect for research and message generation), the Proxycurl API (about $0.01 per profile lookup), and your n8n hosting. For 100 prospects per week, expect roughly $10-15 in API costs.
Q: Can I use this with a free LinkedIn account?
A: Yes, but with limitations. Free accounts have stricter connection request limits and fewer search filters. Sales Navigator is strongly recommended for serious B2B lead generation because it provides advanced search filters, lead lists, and higher monthly InMail credits.
Q: How do I handle prospects who reply negatively?
A: Add a sentiment analysis step using Claude when processing replies. Negative responses should automatically update the CRM status to “Not Interested” and add the prospect to your blacklist to prevent future contact.
Q: What if Claude generates a message that references incorrect information?
A: Build a human review queue for the first 50-100 messages before enabling full automation. This lets you verify accuracy and refine your prompts. After validation, spot-check 10% of messages weekly.
Next Steps
Once your pipeline is running smoothly, consider these enhancements:
- Multi-channel outreach: Extend the workflow to send follow-up emails via a tool like Instantly or Lemlist if the LinkedIn connection is not accepted within 7 days.
- Intent signal monitoring: Add a node that monitors prospect companies for buying signals such as job postings, funding rounds, or technology stack changes using APIs like BuiltWith or Crunchbase.
- Conversation management: Build a second workflow that handles follow-up messages after a connection is accepted. Use Claude to generate contextual follow-ups based on the original research and any new activity from the prospect.
- Team scaling: Create separate ICP configurations for different sales reps or territories, all feeding into the same CRM with proper lead assignment rules.
- A/B testing framework: Run two different Claude prompt templates simultaneously and route prospects randomly to each. After 100 sends per variant, compare acceptance and reply rates to identify the stronger approach.
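For the A/B test, variant assignment can be a deterministic hash on the profile URL so that a given prospect always lands in the same bucket even across re-runs. A sketch:

```javascript
// Deterministically assign each prospect to prompt variant A or B
// by hashing the profile URL, so re-runs keep the same assignment.
function assignVariant(profileUrl) {
  let hash = 0;
  for (const ch of profileUrl) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % 2 === 0 ? 'A' : 'B';
}
```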
The combination of n8n’s workflow automation and Claude’s research and writing capabilities creates a lead generation system that scales without sacrificing the personalization that drives responses. Start with a small daily volume, validate your results, and increase gradually as you refine the system.