Voyance Vision
AI document intelligence for fintech KYC. −80% manual intervention, −70% HR dependency, 90% faster processing.
Overview
As lead product designer at Voyance (Techstars S'22), I led the design of Voyance Vision, an AI document intelligence platform that automated identity verification and document processing for fintech businesses across Nigeria. The product cut manual intervention by 80%, reduced human resource dependency by 70%, and sped up document processing by 90%.
The challenge
Nigerian fintechs were drowning in document work. KYC compliance required them to verify thousands of identity documents per week (driver's licences, passports, voter cards, utility bills) and the processes were almost entirely manual. Teams scanned documents, typed data into forms, cross-referenced with databases, and approved customer accounts one at a time. Slow, expensive, error-prone.
Off-the-shelf OCR existed, but it was generic. It couldn't read a Nigerian utility bill any better than it could read a US tax form, and it had no understanding of which fields mattered for KYC compliance.
The reframe
The product wasn't a document tool. It was a workflow tool.
I started out thinking we were building a smarter OCR. User research changed that. The pain wasn't that documents were hard to read. It was that humans were the bottleneck inside an otherwise automatable workflow. The win wasn't “extract this field from this image”. It was “never have a human touch this document at all”. Once that landed, the product had to be built around workflows that triggered themselves, not around a better extraction screen.
Key decisions
Trainable models, not pre-baked extraction
Pre-trained models couldn't handle the diversity of Nigerian documents, and one-size-fits-all extraction was always wrong. I designed the product around training: businesses upload sample documents, label fields, and the system learns their specific document types. Tradeoff: more onboarding friction. Win: a fintech could train a model for their KYC stack and a logistics company could train one for shipping manifests, all from the same product surface.
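The training flow above can be sketched in a few lines. This is a minimal illustration only: the class and method names (`TrainableDocType`, `add_labeled_sample`, `ready_to_train`) are hypothetical, not Voyance Vision's real API, and the five-sample threshold is an assumed placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class TrainableDocType:
    """A customer-defined document type, learned from labeled samples."""
    name: str                                   # e.g. "utility_bill"
    fields: set = field(default_factory=set)    # fields the customer labeled
    samples: list = field(default_factory=list)

    def add_labeled_sample(self, image_id: str, labeled_fields: dict):
        """A business uploads a sample and labels the fields that matter."""
        self.samples.append(image_id)
        self.fields.update(labeled_fields)

    def ready_to_train(self, min_samples: int = 5) -> bool:
        """Gate training until enough labeled examples exist (the
        onboarding-friction tradeoff mentioned above)."""
        return len(self.samples) >= min_samples

# The same product surface serves different verticals: a fintech
# trains "utility_bill", a logistics company "shipping_manifest".
kyc_bill = TrainableDocType("utility_bill")
for i in range(5):
    kyc_bill.add_labeled_sample(
        f"bill_{i}.png",
        {"customer_name": "...", "address": "...", "issue_date": "..."},
    )
```

The point of the design is visible even in a sketch: extraction schemas live with the customer's data, not in the product.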
Workflow automation as the spine
Most document tools require someone to click “process this document”. I designed a workflow system where extraction triggers automatically inside existing business events: a new customer signs up → their KYC docs auto-process → instant approval. Tradeoff: more upfront integration work for the customer. Win: the entire human-in-the-loop layer disappeared for routine documents.
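The event-triggered pattern can be sketched as a tiny pub/sub loop. Everything here is illustrative, assuming a simple in-process event bus; the event name, confidence value, and threshold are made-up stand-ins, not the production system.

```python
from collections import defaultdict

# Handlers keyed by event name: extraction fires on business events,
# not on a human clicking "process this document".
handlers = defaultdict(list)

def on(event_name):
    """Register a handler to run automatically when an event fires."""
    def register(fn):
        handlers[event_name].append(fn)
        return fn
    return register

def emit(event_name, payload):
    """Fire an event; every registered handler runs with no human prompt."""
    return [fn(payload) for fn in handlers[event_name]]

@on("customer.signed_up")
def auto_process_kyc(customer):
    # In the real product this would call the extraction model; here we
    # simulate a confident extraction leading to instant approval.
    confidence = 0.97
    status = "approved" if confidence >= 0.9 else "needs_review"
    return {"customer": customer["id"], "status": status}

results = emit("customer.signed_up", {"id": "cus_123"})
```

Routine documents flow straight through; only low-confidence results would ever surface to a human.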
PDF support and an API layer (added late)
The first version assumed all documents would be uploaded as images. User testing revealed two things I was wrong about: businesses needed PDF support (hugely common in fintech) and they wanted to embed Vision's models inside their own systems via API. I added both. Tradeoff: scope creep on what was meant to be a focused MVP. Win: the API layer became the distribution mechanism. Businesses didn't have to come to us; we went to them.
An annotation feedback loop
Extracted fields needed corrections sometimes. Instead of treating those corrections as cleanup, I designed an annotation interface that fed the corrections back into the model. The system got smarter every time a user fixed something. Tradeoff: more complexity in the data pipeline. Win: extraction accuracy improved organically with use, without explicit retraining cycles.
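The correction-feeds-training loop amounts to one rule: a manual fix is a labeled example, not throwaway cleanup. A minimal sketch, with hypothetical names (`record_correction`, `training_queue`) standing in for the real data pipeline:

```python
# Every human correction becomes ground truth for the next training pass.
training_queue = []

def record_correction(doc_id, field_name, predicted, corrected):
    """Turn a user's fix into a training example for the model."""
    if predicted != corrected:
        training_queue.append({
            "doc": doc_id,
            "field": field_name,
            "label": corrected,   # the human-verified ground truth
        })

# A user fixes a misread address; an untouched field queues nothing.
record_correction("doc_42", "address", "12 Alen Ave", "12 Allen Ave")
record_correction("doc_42", "name", "Ada Obi", "Ada Obi")
```

This is why accuracy could improve organically with use: the annotation UI and the training data pipeline were the same surface.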
Impact
80% reduction in manual intervention. Documents processed automatically without human prompting.
70% decrease in human resource dependency. Teams redeployed to higher-value work.
90% faster document processing. What took hours now took minutes.
Instant approvals and smooth customer activations, which improved the end-user experience.
API integration let businesses embed Vision's AI inside their own systems.
Reflection
Voyance Vision taught me that the most senior thing a designer can do on an AI product is figure out where the human shouldn't be. It's tempting to build interfaces around AI output, the “here's what we extracted, please verify” kind. The bigger win is removing the verification step entirely by trusting the model and letting humans intervene only when things are genuinely uncertain. That's a design decision dressed up as a product decision.
If I were starting Voyance today, I'd skip the design strategy doc and start with workflows. The strategy I wrote was good. The workflows would have shaped the product faster.
Automate the tedious. Empower the human.