
A home restoration firm’s CEO and COO were personally spending three hours on every proposal review — comparing contractor estimates against insurance adjustments line by line, hunting for discrepancies worth negotiating. The process consumed 20 to 30 percent of their combined executive time. Deals that required multiple review rounds lost momentum and sometimes clients. The bottleneck wasn’t fieldwork. It was paperwork sitting on the desks of the two people who could least afford to be doing it.
Ciridae built an AI document comparison pipeline for the firm, encoding the CEO's negotiation expertise as structured rules applied automatically at scale. The system ingests contractor estimates and insurance adjustments, classifies every line item, flags discrepancies, and produces an annotated comparison document in minutes — replacing three hours of manual executive review per proposal.
The engagement started with a structured knowledge extraction session with the CEO — mapping the negotiation rules, line-item classification logic, and escalation criteria he applied instinctively after years in the industry. These were documented and converted into a rule set the AI pipeline could apply consistently.
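Rules extracted this way are typically encoded as declarative data the pipeline can evaluate uniformly rather than as ad-hoc logic. A minimal sketch of what such a rule set might look like — every name, category, and threshold below is hypothetical, not the firm's actual rules:

```python
from dataclasses import dataclass

@dataclass
class DiscrepancyRule:
    """One extracted negotiation rule: when to flag a line-item gap."""
    category: str          # line-item category the rule applies to
    min_delta: float       # absolute gap (USD) before the item is flagged
    min_delta_pct: float   # relative gap before the item is flagged
    escalate: bool         # True -> surface for executive attention

# Illustrative rules standing in for the documented knowledge session output
RULES = [
    DiscrepancyRule("roofing", min_delta=500.0, min_delta_pct=0.10, escalate=True),
    DiscrepancyRule("drywall", min_delta=150.0, min_delta_pct=0.15, escalate=False),
]

def matching_rule(category: str):
    """Return the rule for a line-item category, if one was extracted."""
    return next((r for r in RULES if r.category == category), None)
```

Keeping the rules as data means the CEO can review and amend them directly, and the pipeline applies them identically on every proposal.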
Ciridae's platform orchestrates multiple AI models, calling Claude, OpenAI, and Gemini for different subtasks based on where each performs best. For this engagement, the pipeline ingested contractor estimates and insurance adjustment documents, classified each line item, identified discrepancies above defined thresholds, and flagged items with recovery potential.
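The discrepancy step reduces to comparing matched line items against the defined thresholds. A simplified sketch of that comparison, assuming items have already been classified and matched by description (the threshold values and data shapes here are illustrative):

```python
def flag_discrepancies(contractor, adjuster, abs_threshold=250.0, pct_threshold=0.10):
    """Flag matched line items whose gap exceeds either threshold.

    contractor / adjuster: dicts mapping line-item description -> amount (USD).
    Returns a list of (item, contractor_amt, adjuster_amt, delta) tuples.
    """
    flagged = []
    for item, c_amt in contractor.items():
        a_amt = adjuster.get(item)
        if a_amt is None:
            # Item absent from the insurance adjustment entirely -> always flag
            flagged.append((item, c_amt, None, c_amt))
            continue
        delta = c_amt - a_amt
        if delta >= abs_threshold or (a_amt and delta / a_amt >= pct_threshold):
            flagged.append((item, c_amt, a_amt, delta))
    return flagged
```

Only the flagged tuples reach the annotated output, which is what collapses a three-hour line-by-line read into a minutes-long review.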
The output was an annotated comparison document ready for the COO to review and act on — replacing the three-hour manual process with a minutes-long review of flagged items only. The full deployment was live within 48 hours of the knowledge extraction session.
Infrastructure
- Ciridae Platform (cloud-hosted document ingestion and classification)
Integration Points
- Multi-model AI orchestration via Ciridae Platform (Claude, OpenAI, Gemini called per task)
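Per-task model routing can be as simple as a dispatch table mapping each subtask to a provider. A minimal sketch — the task names and model assignments below are assumptions for illustration, not Ciridae's actual routing:

```python
# Hypothetical routing table: each pipeline subtask goes to the model
# family assumed to handle it best.
MODEL_FOR_TASK = {
    "classify_line_item": "claude",
    "extract_document_tables": "gemini",
    "summarize_discrepancies": "openai",
}

def route(task: str) -> str:
    """Pick the model family for a subtask, with a default fallback."""
    return MODEL_FOR_TASK.get(task, "claude")
```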