How to Do Competitive Research in One Hour (With AI)

Competitive research fails in two opposite ways: too thin (three blog posts and a vibe) or too fake (a table of features the model invented because it "sounds right"). One hour is enough for a usable internal brief -- if you define questions first, constrain sources, and treat AI as a synthesizer and formatter, not an oracle.

Competitive research with AI means using a model to organize, compare, and summarize the notes you collected from real sources -- not asking it to "tell me everything about Competitor X." The human does the reading; the model does the structure.

This workflow pairs with AI research assistant for ongoing monitoring, how to prep a sales call in 15 minutes when you need to use the output live, and how to build a company knowledge base for AI so competitive notes land in a single place sales and marketing trust. AI content creation is the next step when research becomes publishable copy -- under the same no-invented-facts rule.

What "one hour" is for (and what it is not)

| In scope for 60 minutes | Out of scope without more time or primary sources |
| --- | --- |
| Positioning and narrative on their site | Private pricing for every enterprise tier |
| Public pricing page, packaging, FAQs | Accurate feature parity at API-field level |
| Review themes (G2, Capterra -- take with salt) | Legal review of comparative claims |
| Integration lists and partnership pages | Churn and revenue internal to competitor |
| Hiring signals (roles, seniority) | "They are failing" -- unless public evidence |

Your deliverable is decision support -- "What should we believe enough to act on this week?" -- not an encyclopedia.

Minute 0 to 5: frame the decision

Answer in writing:

  1. Who will use this output? (Sales, product, marketing, leadership)

  2. What decision does this research inform? (pricing page copy, enterprise talk track, roadmap bet)

  3. Which competitors? (Max 3 in one hour -- a fourth competitor halves depth.)

  4. What would change our strategy if true? (for example, "They ship native Salesforce bi-directional sync")

If you skip this, you will end up with something interesting and useless. For how models behave when you do not ground them, ChatGPT vs. Claude is a useful comparison of tool strengths -- neither replaces clicking through the competitor's site yourself.
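
If you want the frame to be reusable by the synthesis step later, here is a minimal sketch of it as a structure. The field names are illustrative, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class ResearchFrame:
    """The four framing answers, written down before any browsing."""
    audience: str                  # who will use this output
    decision: str                  # what decision the research informs
    competitors: list[str]         # hard cap: three per hour
    strategy_triggers: list[str]   # "what would change our strategy if true"

    def __post_init__(self):
        if len(self.competitors) > 3:
            raise ValueError("Max 3 competitors in one hour -- a fourth halves depth")

frame = ResearchFrame(
    audience="Sales",
    decision="Enterprise talk track",
    competitors=["Comp A", "Comp B"],
    strategy_triggers=["They ship native Salesforce bi-directional sync"],
)
```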

Minute 5 to 25: pull primary-ish sources (human-led)

For each competitor, open in this order:

| Source type | What you extract | Skepticism level |
| --- | --- | --- |
| Homepage + product pages | Category, ICP hints, hero claims | High marketing polish |
| Pricing or plans | Packaging names, limits, annual vs monthly | Mid -- check footnotes |
| Docs or dev portal | Real capabilities, APIs, auth models | Higher signal for technical claims |
| Changelog or release blog | Velocity, direction | Good for "what they emphasize" |
| Trust or security | Certifications they claim (verify badges link out) | Do not repeat as your legal position |
| Reviews | Recurring praise and pain | Selection bias; sample sizes are small |

Paste short excerpts or URLs into notes -- do not rely on memory. AI will work from your clips, not from imagination, in the next phase.
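
A minimal sketch of what a source-bound clip can look like; the shape and field names are illustrative, and any note tool works as long as every excerpt carries its URL and a skepticism level:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Clip:
    """One pasted excerpt from a primary-ish source."""
    competitor: str
    source_type: str   # "pricing", "docs", "changelog", "trust", "reviews"
    url: str
    excerpt: str       # short verbatim quote, not a paraphrase
    skepticism: str    # "high", "mid", "low" per the table above
    captured: date = field(default_factory=date.today)

notes = [
    Clip("Comp A", "pricing", "https://example.com/pricing",
         "Pro plan: $49/user/month, billed annually", skepticism="mid"),
]
```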

Minute 25 to 40: AI synthesis with a hard anti-hallucination rule

Use a single structured prompt:

  • Input: your bullets + URLs + pasted excerpts.

  • Instruction: "Do not add facts not present in my notes. If unknown, write UNKNOWN."

  • Output schema:

    • One-paragraph positioning (their words paraphrased)

    • ICP signals (with citation marker to your note A, B, or C)

    • Packaging summary (only from pricing page content you provided)

    • Strengths and weaknesses (clearly labeled hypothesis vs evidence-based)

    • Landmines for sales (claims we should avoid unless verified)

If the model outputs a precise stat you did not provide, delete it or mark UNVERIFIED -- do not ship it to the field.
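
A minimal sketch of that prompt assembly plus a coarse post-check, assuming the `Clip` shape from the note-capture sketch above. The number check is a heuristic for surfacing UNVERIFIED candidates, not a substitute for reading the output:

```python
import re

ANTI_HALLUCINATION = "Do not add facts not present in my notes. If unknown, write UNKNOWN."

SCHEMA = """Output sections:
1. One-paragraph positioning (their words paraphrased)
2. ICP signals (cite note IDs, e.g. [0], [1])
3. Packaging summary (only from pricing content provided)
4. Strengths and weaknesses (label each HYPOTHESIS or EVIDENCE)
5. Landmines for sales (claims to avoid unless verified)"""

def build_prompt(competitor: str, clips: list) -> str:
    """Assemble the source-bound synthesis prompt from captured clips."""
    notes = [f"[{i}] ({c.source_type}) {c.url}\n{c.excerpt}"
             for i, c in enumerate(clips) if c.competitor == competitor]
    return "\n\n".join([f"Synthesize competitive notes on {competitor}.",
                        ANTI_HALLUCINATION, SCHEMA, "NOTES:", *notes])

NUM = re.compile(r"\d[\d,.]*%?")

def flag_unsourced_numbers(model_output: str, clips: list) -> list[str]:
    """Return numbers in the output that never appear in the notes --
    each one gets deleted or marked UNVERIFIED before shipping."""
    sourced = {n.rstrip(".,") for c in clips for n in NUM.findall(c.excerpt)}
    return [n for n in NUM.findall(model_output)
            if n.rstrip(".,") not in sourced]
```

Send the prompt to whichever chat model you use; the check runs on whatever text comes back.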

Minute 40 to 50: build the comparison matrix (small and honest)

| Dimension | Us (source) | Comp A (source) | Comp B (source) | Confidence |
| --- | --- | --- | --- | --- |
| Category or wedge | AI workforce platform (homepage) | Automation builder (homepage) | AI assistant suite (homepage) | H |
| Buyer motion (PLG or sales-led) | Sales-led + free trial (pricing page) | PLG with usage caps (pricing page) | Sales-led, no public pricing | M |
| Packaging | 3 tiers, per-seat (pricing page) | Pay-per-task + add-ons (pricing page) | Custom enterprise only (sales page) | H |
| Key integrations | Gmail, Outlook, Notion, Slack (docs) | Zapier, Slack, HubSpot (integrations page) | Salesforce, Teams (partner page) | H |
| Differentiator (their claim) | "AI employees with memory" (homepage) | "500+ pre-built templates" (homepage) | "Enterprise-grade security" (trust page) | M |
| Risk if we copy them | "AI employees" term loses meaning if overused | Template bloat dilutes positioning | Enterprise-only locks out SMB | L |

Confidence matters more than row count. Sales will trust five high-truth rows over twenty guessed cells. When you are choosing which tools to evaluate -- not just competitors -- best AI tools for small business frames the buying decision without hype.
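
If the matrix lives in code or a doc generator rather than a spreadsheet, here is a sketch that forces every cell to carry a source and renders missing ones as UNKNOWN. The structure is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    claim: str
    source: str  # page the claim came from, e.g. "pricing page"; empty = unknown

@dataclass
class Row:
    dimension: str
    us: Cell
    comp_a: Cell
    comp_b: Cell
    confidence: str  # "H", "M", or "L"

def to_markdown(rows: list[Row]) -> str:
    """Render the matrix; any cell without a source renders as UNKNOWN."""
    def fmt(c: Cell) -> str:
        return f"{c.claim} ({c.source})" if c.source else "UNKNOWN"
    header = "| Dimension | Us | Comp A | Comp B | Confidence |\n|---|---|---|---|---|"
    body = "\n".join(
        f"| {r.dimension} | {fmt(r.us)} | {fmt(r.comp_a)} | {fmt(r.comp_b)} | {r.confidence} |"
        for r in rows
    )
    return f"{header}\n{body}"
```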

Minute 50 to 60: claims, gaps, and next steps

Claims hygiene

Separate:

  • We can say in public -- only with citations you would hand legal or marketing.

  • Internal only -- review themes, rumors, weak signals.

  • Never say -- unsubstantiated knocks ("they are insecure") without evidence.

Gap list (what to schedule later)

Examples: "Talk to 2 customers who evaluated them," "POC their API auth," "Pull G2 CSV when we have N."

One-page summary

Ten bullets max for execs: positioning, motion, packaging, top risk, top opportunity, recommended next action.

Common failure modes

Failure

Why it happens

Prevention

Feature fantasy

Model completes the table

Unknowns + source-bound synthesis only

Trash-talk

Easy rhetorically, costly commercially

"Contrast without contempt" rule in brief

Static snapshot

Competitors ship weekly

Date stamp doc; owner for refresh

Analysis paralysis

Too many competitors

Hard cap at 3 per hour

How Lens fits in

Lens is Agently's research AI employee: it can help structure notes, compare excerpts, and keep outputs tied to your Brain -- so positioning stays consistent when research turns into Pages or briefs your AI marketing assistant can reuse. AI Work OS explains why research, tasks, and knowledge belong in one workspace. Try Agently free.

Frequently asked questions

Can AI replace reading competitor sites?

No. AI can summarize what you feed it. The 20-minute human pass through real source pages prevents elegant fiction. Never skip the reading step.

How often should we refresh competitive research?

Monthly in active categories; quarterly when stable. Always refresh before a major launch or pricing change. Date-stamp every doc so readers know how fresh the data is.

What about secret intel from customers?

Handle ethically and NDA-aware. Summarize patterns ("buyers cite implementation time as the top concern") without attributing identifiable customer statements unless explicitly allowed.

Should we share the competitive doc externally?

Usually no without legal review. Internal enablement first. Comparative claims in customer-facing materials carry legal risk and need sign-off.

What is the number-one quality signal of good competitive research?

Every strong claim has a link or excerpt behind it. If the source is "everyone knows," it is not research -- it is assumption.

Agently gives research and GTM teams an AI employee that respects sources and connects to your workspace. Try it free.

Omar Ghandour, CEO -- April 15, 2026
