
Build a competitor analyzer that keeps your CRM honest

Colin Gillingham · 4 min read
gtm-automation · hubspot · ai-agents · sales-automation · competitive-intel

This post is part of the GTM Automation Playbook — a 13-part series on building AI-powered GTM agents with HubSpot.


Competitive intel is the fastest-decaying data in your CRM. A competitor changes pricing or lays off 20% of engineering, and your reps don't find out until a prospect tells them on a call.

Most teams treat competitor research like a quarterly project. Product marketing builds a battlecard, circulates it as a PDF, and it starts going stale the week after it's published. The information never makes it into the CRM where reps actually work. It sits in a Google Drive folder nobody bookmarks.

I wanted something different: a system that monitors competitor signals continuously and writes structured summaries directly to HubSpot company records. Here's how I built it with n8n and the Claude API.

Set up HubSpot properties for competitor data

Before building the workflow, create custom properties on your company records. Use the Properties API (POST /crm/v3/properties/companies) or go to Settings > Properties > Company Properties in HubSpot.

I create four properties in the companyinformation group: competitor_summary (string/textarea) for the AI-generated analysis, competitor_signals (string/textarea) for raw signal data, competitor_last_analyzed (date/date) for tracking freshness, and competitor_watch_priority (enumeration/select) with options like high, medium, low.
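The four definitions above can be sketched as the request bodies you would POST to the Properties API. This is a minimal sketch: the labels are illustrative choices, and `companyinformation` is HubSpot's default company property group as described above.

```javascript
// Build the four property definitions for POST /crm/v3/properties/companies.
function competitorProperties() {
  const base = { groupName: "companyinformation" };
  return [
    { ...base, name: "competitor_summary", label: "Competitor Summary",
      type: "string", fieldType: "textarea" },
    { ...base, name: "competitor_signals", label: "Competitor Signals",
      type: "string", fieldType: "textarea" },
    { ...base, name: "competitor_last_analyzed", label: "Competitor Last Analyzed",
      type: "date", fieldType: "date" },
    { ...base, name: "competitor_watch_priority", label: "Competitor Watch Priority",
      type: "enumeration", fieldType: "select",
      options: [
        { label: "High", value: "high" },
        { label: "Medium", value: "medium" },
        { label: "Low", value: "low" },
      ] },
  ];
}

// Each definition would then be POSTed with a private app token, e.g.:
// await fetch("https://api.hubapi.com/crm/v3/properties/companies", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
//   body: JSON.stringify(definition),
// });
```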

The textarea fields matter. Both field types store the same underlying string, but single-line text collapses everything onto one line in the UI; textarea preserves line breaks, which you need for multi-paragraph structured analysis.

The five-node workflow

The n8n workflow runs on a Schedule Trigger set to weekly. Every Monday at 6 AM, it pulls fresh competitor data, runs it through Claude, and updates HubSpot.

Node 1: Schedule Trigger. Weekly cadence. I set mine to Mondays because reps prep for the week ahead and competitor context is most useful before pipeline reviews.

Node 2: HubSpot node. Pull all companies where competitor_watch_priority is not empty. This gives you the list of competitors to monitor. Use the "Get Many" operation with a filter on that property.
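Under the hood, that filter maps to HubSpot's search endpoint (POST /crm/v3/objects/companies/search) with the `HAS_PROPERTY` operator. A sketch of the request body, assuming the property names from earlier (the `properties` list is an illustrative choice):

```javascript
// Search body: all companies where competitor_watch_priority has any value.
function watchlistSearchBody() {
  return {
    filterGroups: [{
      filters: [{
        propertyName: "competitor_watch_priority",
        operator: "HAS_PROPERTY", // "property is known" — any non-empty value
      }],
    }],
    properties: ["name", "domain", "competitor_summary", "competitor_watch_priority"],
    limit: 100,
  };
}
```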

Node 3: HTTP Request nodes. This is where signal collection happens. For each competitor, I run parallel HTTP requests against four source types: G2 reviews (the G2 API, or a scraping service like ScrapingBee pointed at the company's G2 profile), job postings (LinkedIn's job search page, or an aggregator API like Adzuna filtered by company name), the competitor's pricing page or changelog (a direct HTTP GET), and press coverage (Google News RSS filtered by company name).

Not every source will return data every week. That's fine. The Code node downstream handles missing inputs gracefully.
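That graceful handling can be sketched as a small merge function. The source labels here are illustrative, not n8n node names; the point is that an absent source becomes a labeled placeholder instead of crashing the run.

```javascript
// Merge raw signal payloads into one labeled text block for the prompt.
// Missing or failed sources become "(no data this week)" sections.
function mergeSignals(sources) {
  const labels = ["g2_reviews", "job_postings", "pricing_page", "news"];
  return labels
    .map((label) => {
      const raw = sources[label];
      if (!raw) return `## ${label}\n(no data this week)`;
      const text = typeof raw === "string" ? raw : JSON.stringify(raw);
      return `## ${label}\n${text}`;
    })
    .join("\n\n");
}
```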

Node 4: Code node + Claude API call. Merge all signal data in a Code node, then send it to the Claude API (POST /v1/messages) with a system prompt designed to extract actionable insight. The difference between useful output and noise comes down to the prompt. Don't ask Claude to summarize. Ask it to answer specific questions.

You are a competitive intelligence analyst. Given the raw signals below,
answer these questions for each competitor:

1. What changed since last analysis? (new hires, pricing shifts, product launches, layoffs)
2. What does this mean for our positioning? (one sentence, specific)
3. What objection might a prospect raise based on this intel?
4. What's the single most important thing a rep should know this week?

Raw signals:
{{ $json.merged_signals }}

Previous analysis:
{{ $json.competitor_summary }}

The "previous analysis" field is what makes this compound over time. Claude compares new signals against the last summary and calls out what actually changed. Without it, you get the same generic overview every week.
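Putting the prompt and the compounding context together, the Code node's request body for POST /v1/messages might look like this. The model name and `max_tokens` are illustrative choices, not prescriptions from the workflow.

```javascript
// Build the Messages API request body: system prompt, merged signals,
// and the previous summary so Claude can diff against last week.
function claudeRequestBody(mergedSignals, previousSummary, systemPrompt) {
  return {
    model: "claude-sonnet-4-5", // illustrative model choice
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{
      role: "user",
      content: `Raw signals:\n${mergedSignals}\n\nPrevious analysis:\n${previousSummary || "(none)"}`,
    }],
  };
}

// POST to https://api.anthropic.com/v1/messages with headers
// "x-api-key: <key>" and "anthropic-version: 2023-06-01".
```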

Node 5: HubSpot node. Update the company record with the new analysis. Map Claude's output to competitor_summary, the raw signals to competitor_signals, and set competitor_last_analyzed to the current date using PATCH /crm/v3/objects/companies/{companyId}.
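The update payload is straightforward; a sketch, assuming HubSpot's v3 API accepts date properties as ISO `YYYY-MM-DD` strings:

```javascript
// PATCH body for /crm/v3/objects/companies/{companyId}.
function companyUpdateBody(analysis, rawSignals, today = new Date()) {
  return {
    properties: {
      competitor_summary: analysis,
      competitor_signals: rawSignals,
      competitor_last_analyzed: today.toISOString().slice(0, 10), // YYYY-MM-DD
    },
  };
}
```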

When to trigger re-analysis outside the schedule

The weekly cadence covers the baseline. But some events should trigger an immediate refresh. I set up two additional n8n workflows for this.

First: a HubSpot workflow trigger that fires when a deal moves to "negotiation" stage. If the associated company has competitors tagged, the workflow kicks off a fresh analysis so the rep has current intel before the pricing conversation.

Second: a webhook trigger connected to call recording software. When a transcript mentions a competitor name, it fires the analysis workflow for that specific competitor. This one catches signals your reps hear before any public source reports them.
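The transcript check is simple string matching against your watchlist. A minimal sketch of that first step (the function name and inputs are hypothetical; a real version would also handle nicknames and misspellings):

```javascript
// Given a call transcript and the list of tracked competitor names,
// return which competitors were mentioned and should be re-analyzed.
function competitorsMentioned(transcript, watchlist) {
  const text = transcript.toLowerCase();
  return watchlist.filter((name) => text.includes(name.toLowerCase()));
}
```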

What makes this different from buying a tool

Products like Klue and Crayon do this well at enterprise scale. They cost $20K-$40K per year. If you're a team of 5-15 reps, that math doesn't work. This n8n workflow costs roughly $0.03 per competitor per analysis run in Claude API usage, plus whatever your signal sources charge. For 20 competitors analyzed weekly, that's about $2.40 per month.

The other advantage is that the intel lives in HubSpot, not in a separate app. Reps see competitor context on the same company record where they log calls and track deals. No tab-switching, no second login.

Competitive intelligence only matters if it reaches the person having the conversation. Everything else is research for research's sake.
