A 24/7 sales intelligence pipeline, built on OpenClaw
This is how I deployed OpenClaw at a specialty B2B services firm to replace four to five hours of daily manual lead research with a self-hosted, always-on pipeline that delivers scored, enriched, call-ready leads to the sales team's Slack channel within minutes of a new trigger event. Client identity and industry are anonymized. Technical approach, timeline, and outcomes are real.
The situation
The firm sells a specialty B2B service where the trigger event for a sales call is a specific kind of public government filing. Every business day the sales operator pulled that public registry manually, scanned new entries, filtered for the small subset that matched the firm's ideal customer profile, looked up each matching company, found the right decision-maker, pulled a phone number, and assembled a one-page call pack. They then handed it to a field salesperson who made the call.
The process worked but had three problems. It consumed most of a workday for one person. It had a 24-to-48-hour lag between the filing event and the outbound call, during which competitors were also calling. And the manual scoring was inconsistent: call-worthy filings sometimes got deprioritized, and low-value filings sometimes consumed sales time.
What I built
An OpenClaw agent running a scheduled five-minute pipeline. Each cycle does four things.
- Scrape. A custom skill uses a headless browser to pull new filings from the public government registry, then deduplicates against what we have already ingested.
- Score. Each new filing is classified into a four-tier priority scheme based on filing type, indicators in the filing body, and the firm's ideal customer profile. The top two tiers trigger enrichment automatically. Tier three only triggers enrichment when specific indicator keywords appear. Tier four is logged, but no calls are ever generated for it.
- Enrich. For scored-in filings, a waterfall of enrichment sources runs in order. First a paid data provider for the highest-quality contact records. Second a secondary provider as fallback. Third a Claude-powered web search to fill remaining gaps. Every enrichment step records its source so the call operator can judge confidence.
- Deliver. A call-ready pack is posted to a dedicated Slack channel with the company overview, the scored filing, the decision-maker's name, title, and click-to-call phone number. Every pack also lands in an Airtable base with full history so nothing gets lost and trends can be reviewed weekly.
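The tier assignment is deterministic, which is what makes it cheap to run every five minutes. A minimal sketch of the shape, with hypothetical filing types and indicator keywords standing in for the client's real (and deliberately undisclosed) rubric:

```python
from dataclasses import dataclass

# Hypothetical rubric -- the real filing types and keywords are
# client-specific and not shown here.
TIER_1_TYPES = {"type_a"}
TIER_2_TYPES = {"type_b", "type_c"}
TIER_3_TYPES = {"type_d"}
INDICATOR_KEYWORDS = {"expansion", "new facility"}

@dataclass
class ScoredFiling:
    filing_id: str
    tier: int
    enrich: bool

def score_filing(filing_id: str, filing_type: str, body: str) -> ScoredFiling:
    """Deterministic tier assignment; only boundary cases touch the model."""
    has_indicator = any(kw in body.lower() for kw in INDICATOR_KEYWORDS)
    if filing_type in TIER_1_TYPES:
        tier = 1
    elif filing_type in TIER_2_TYPES:
        tier = 2
    elif filing_type in TIER_3_TYPES:
        tier = 3
    else:
        tier = 4
    # Tiers 1-2 always enrich; tier 3 only on indicator keywords; tier 4 never.
    enrich = tier <= 2 or (tier == 3 and has_indicator)
    return ScoredFiling(filing_id, tier, enrich)
```

Keeping this as a pure function means every scoring decision can be replayed and explained after the fact, which matters when a salesperson asks why a filing was or was not surfaced.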
On top of the main pipeline, a separate daily lifecycle skill runs at 7 AM local time. It inspects filings from prior days and sends follow-up nudges at specific intervals: X days after the initial filing to confirm a downstream event, the day after a scheduled decision, and a longer-horizon reminder as a particular window approaches. These follow-ups are exactly the kind of work a human never remembers to do on schedule, and exactly the kind that closes deals.
The architecture
| Component | Role |
|---|---|
| OpenClaw (Mac Mini, on client premises) | Primary agent runtime, holds workspace state, runs channel connectors, exposed via Tailscale for admin access |
| Modal (serverless cloud) | Runs the five-minute pipeline cron so the client machine does not have to wake for every tick. Sends results back to OpenClaw for delivery. |
| Slack | Primary delivery channel to the sales operator. Click-to-call links, rich card format. |
| Airtable | System of record. Every filing, every enrichment, every call pack, every follow-up trigger. |
| Claude Opus (primary), Claude Sonnet (fallback) | Scoring, enrichment fallback, follow-up message composition. xhigh thinking effort on scoring to avoid false negatives. |
| Paid data providers (enrichment) | Primary and secondary contact data sources, called in sequence with automatic fallback. |
The Mac Mini plus Modal split deserves a note. The client prefers that core state lives on hardware they own rather than in someone else's cloud. Modal runs the recurring cron and calls back into the Mac Mini for the stateful steps. This pattern gives the client cost-efficient compute for the scrape and enrich steps without giving up local ownership of the workspace, credentials, or Airtable write path.
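The split can be sketched as a stateless handler that does the scrape-and-score work in the cloud and ships results back to the agent for the stateful steps. Shown here as plain Python with a hypothetical Tailscale callback URL; in the real deployment this body runs under Modal's cron scheduler:

```python
import json
from urllib import request

# Hypothetical endpoint on the client's Mac Mini, reachable only over
# the tailnet. The Mac Mini owns workspace state, credentials, and the
# Airtable write path; the cloud side only ships results back.
AGENT_CALLBACK_URL = "http://mac-mini.tailnet.example:8080/ingest"

def build_callback_payload(new_filings: list[dict]) -> dict:
    """Package stateless scrape-and-score results for the stateful agent."""
    return {
        "source": "modal-cron",
        "filings": [
            {"id": f["id"], "tier": f["tier"], "enrich": f["enrich"]}
            for f in new_filings
        ],
    }

def post_results(new_filings: list[dict]) -> int:
    body = json.dumps(build_callback_payload(new_filings)).encode()
    req = request.Request(
        AGENT_CALLBACK_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

The design choice worth copying is the direction of trust: the cloud tick never holds credentials or writes to the system of record directly, so a compromised or misbehaving cron run can at worst deliver bad candidate filings, which the agent still validates on ingest.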
Outcomes
Two things are worth calling out beyond the numbers. First, the scoring now has a consistent rubric applied every cycle, which reduced the cases where borderline filings were deprioritized by a tired operator. Second, the daily follow-up skill runs without needing human memory. A class of deals that were previously left on the table because no one remembered to call back on the right day now get touched at exactly the right interval.
Engineering notes on the build
- Scoring as a pure skill, not an API call. The tier assignment is deterministic based on filing type and keyword indicators. Only the enrichment decision at the boundary cases calls the model. This keeps scoring cheap, explainable, and fast. I went through two iterations before locking the scoring rubric.
- Enrichment waterfall with source tracking. The operator sees which source produced each contact record. A direct hit from a paid provider is treated differently from a web-search-inferred guess. This transparency is essential for call confidence.
- Follow-ups as a separate skill. The lifecycle triggers live in their own daily cron, not tangled into the five-minute pipeline. Debugging each in isolation is much easier.
- Airtable as the durable substrate. Slack is the delivery channel but memory is Airtable. Anything the agent writes is queryable there. If Slack goes down we do not lose state; if the agent restarts we do not reprocess.
- Two-model setup. Opus handles scoring edge cases and enrichment reasoning. Sonnet handles summarization and message composition. The routing is skill-level, not request-level, so costs stay predictable.
What would not have worked
Three patterns I explicitly declined during scoping, each of which would have been a worse fit:
- A SaaS sales-intelligence tool. The filing source and the scoring rubric were too industry-specific for any off-the-shelf product. Paying $2k per month for 30% of the value would have been worse than building the 100% version once.
- A pure n8n or Zapier workflow. The scoring step needs judgment at the edges that rules cannot encode. An LLM call inside an n8n node would have worked for the easy cases but failed on the borderline ones, which is exactly where the dollars are.
- A full-time hire. The operator time saved is real but a full new hire was overkill for the scope, and would not have solved the 24/7 coverage or the consistency problem.
Could we build something like this for you?
This pattern generalizes. If your business has a process that looks like "a human watches a source, filters results, enriches with research, and hands off to a team for action," OpenClaw can run that process on hardware you control, deliver to the channels your team already uses, and match what a SaaS product would do at a fraction of the price.
Source can be a public registry, a webhook, an inbox, a dashboard, a scraper target, or any combination. Output can be Slack, Airtable, email, SMS, Teams, or a dashboard. The useful work is in the middle: the scoring, the enrichment, the judgment about what matters.
Walk through your workflow with me
The first call is free. 30 to 60 minutes. I will tell you whether your process fits this pattern, what a scoped build would look like, and whether it is worth doing. If it is not, I will say so.
Book a Free Strategy Call

Or start with the Playbook
Get The OpenClaw Deployment Playbook in your inbox. The six-phase process I use for every client deployment, including this one. Free. Email required.
Get the Playbook