Published Accuracy
"Everyone in construction tech claims AI. Almost nobody shows the numbers."
— Every pitch deck, ever

"We publish ours because you should be able to check."
— You, hopefully
You wouldn't quote a $50M mechanical package without reading the spec. You'd check every section, flag the alternates, and know exactly what you're pricing.
Why would software be any different?
Each task sounds straightforward. In practice, construction documents are messy: specs contradict drawings, addenda override base documents, and equipment schedules use different naming conventions from page to page. That's why we measure on real project data. The gap between the tasks we solve reliably and the ones we don't tells you where a human still needs to check.
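To make the naming-drift problem concrete, here is a minimal sketch of the kind of tag normalization an extraction pipeline needs when the same unit appears as "Air Handling Unit 1" on one page and "AHU-01" on another. The abbreviation map and tag formats are invented for illustration, not taken from any real schedule.

```python
import re

# Hypothetical abbreviation map; a real pipeline would carry a much
# larger, project-specific one.
ABBREVIATIONS = {
    "AIR HANDLING UNIT": "AHU",
    "ROOFTOP UNIT": "RTU",
    "EXHAUST FAN": "EF",
}

def normalize_tag(raw: str) -> str:
    """Collapse variants like 'Air Handling Unit 1', 'AHU-01', 'ahu 1'
    into a canonical 'AHU-1' form."""
    tag = raw.strip().upper()
    for long_name, short in ABBREVIATIONS.items():
        tag = tag.replace(long_name, short)
    # Split into a letter prefix and a number, dropping leading zeros.
    match = re.match(r"([A-Z]+)[\s\-]*0*(\d+)", tag)
    if not match:
        return tag  # leave unrecognized tags untouched
    return f"{match.group(1)}-{int(match.group(2))}"
```

With this sketch, `normalize_tag("Air Handling Unit 1")`, `normalize_tag("AHU-01")`, and `normalize_tag("ahu 1")` all land on `"AHU-1"`, which is the step that lets rows from differently-labeled schedule pages be compared at all.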
Receipts > promises
We say equipment extraction runs at 95% accuracy. Here are the runs.
We say we've processed tens of thousands of specs. Here's the data.
You can check.
When models improve or we add new workflows, the numbers here change. If accuracy drops on something, that shows up too.
Send us a bid package. See the extractions. Judge the accuracy yourself.
Get Started

Data from production workloads over the past quarter. Accuracy measured against human-verified ground truth. Some things work well. Some don't yet. That's the point of publishing it.
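"Accuracy measured against human-verified ground truth" can be stated precisely. Here is a minimal sketch of one common definition, per-field exact-match accuracy; the function name, record shape, and field names are assumptions for illustration, not the production metric.

```python
def field_accuracy(extracted: list[dict], ground_truth: list[dict], field: str) -> float:
    """Fraction of records whose extracted `field` exactly matches the
    human-verified value. Assumes the two lists are aligned record-for-record."""
    if len(extracted) != len(ground_truth):
        raise ValueError("extracted and ground truth must be aligned")
    matches = sum(
        1 for ext, truth in zip(extracted, ground_truth)
        if ext.get(field) == truth.get(field)
    )
    return matches / len(ground_truth)
```

For example, if a reviewer corrects one tag out of two, `field_accuracy([{"tag": "AHU-1"}, {"tag": "RTU-2"}], [{"tag": "AHU-1"}, {"tag": "RTU-3"}], "tag")` comes out to 0.5. Published numbers would be this kind of ratio aggregated over many verified packages.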