
Designing AI-Assisted Documentary Credit Checking for Trade Finance

AI-assisted documentary credit checking interface overview
Role
Lead Product Designer
Company
Raft
Timeline
2026
Platform
Desktop
What I did
  • Defined the interaction model for AI checks and human review
  • Established patterns for field-level and document-level checks
  • Designed the exception review and resolution workflow
  • Shaped how AI confidence is communicated to reviewers
Collaborators
  • Product Manager
  • AI/ML Engineering Team
  • Frontend Engineers
  • Trade Finance Domain Experts

Overview

Documentary credits (DCs) are one of the most document-intensive instruments in trade finance. Each transaction requires multiple documents to be checked against both the credit terms and UCP 600 rules — traditionally a fully manual process.

At Raft, I led the design of the full trade finance product. This case study focuses on the checking experience: the interface where AI-extracted document data is validated against DC terms and UCP 600 rules, exceptions are surfaced, and human reviewers make the final determination.

The Problem

DC checking is a process where precision matters and mistakes have financial consequences.

Slow and repetitive. A single transaction can involve 10+ documents, each requiring line-by-line comparison. Most checker time goes to routine validations, not judgment calls.

Inconsistent across checkers. Without a standardized framework, the same discrepancy might be flagged by one checker and missed by another.

No audit trail. The reasoning behind each determination is often undocumented, making it difficult to review decisions or onboard new team members.

Goal

Design an AI-assisted workflow that:

  • Covers the full presentation review process end to end
  • Runs both DC checks and UCP 600 checks per document
  • Supports both field-level and document-level checks
  • Preserves expert control over every final determination

Outcome

  • Checking time reduced by ~65%, with consistent rule application
  • Every determination documented with full audit trail
  • Adopted by both operations teams and senior specialists

Gathering Context

I spent time with trade operations teams and senior specialists to map the full checking workflow before designing any automation. The journey below captures the end-to-end process from document receipt through to presentation summary — this became the foundation for every design decision.

End-to-end journey map showing the process from receiving documents to preparing the presentation summary
End-to-end journey from document receipt through AI-assisted checking to presentation summary.

How checkers work today

Checkers mentally split their work into straightforward comparisons (does this field match?) and interpretive assessments (does this document comply with the spirit of the requirement?).

Most time is spent confirming correctness, not finding discrepancies. The majority of checks pass, but each still requires reading and verification.

Flagging an exception often depends on context beyond any single field — prior correspondence, common trade practices, or bank-specific policies.

Two types of checks

Documents are validated against two distinct rule sets. Their differences shaped every design decision in the checking interface.

DC Checks

Transaction-specific. Validate documents against the terms of that particular credit — amounts, dates, parties, goods descriptions.
Mostly field-level

UCP 600 Checks

Universal. Validate documents against international rules — Article 18 for invoices, Article 20 for bills of lading, Article 28 for insurance.
Mostly document-level
SWIFT MT700 documentary credit message with structured fields for DC checking
DC check: fields validated against credit terms (SWIFT MT700)
UCP 600 articles used for document-level compliance checks
UCP 600: universal rules for documentary credit compliance

Field-level checks have clear inputs and deterministic outcomes — invoice amount vs credit amount, shipment date vs latest date. Document-level checks require evaluating passages of text against requirements and carry more ambiguity.

This distinction drove two different interaction patterns: field-level checks could be presented with high confidence and minimal context, while document-level checks needed to surface reasoning and source text for the reviewer.
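
To make the contrast concrete, here's a minimal sketch in TypeScript; the names are illustrative, not the production schema. A field-level check collapses to a deterministic comparison, while a document-level finding has to carry its rule, source passage, and reasoning for the reviewer.

```typescript
// Field-level: a deterministic comparison with a clear outcome.
function checkLatestShipmentDate(
  shipmentDate: Date,
  latestShipmentDate: Date
): "pass" | "exception" {
  return shipmentDate.getTime() <= latestShipmentDate.getTime()
    ? "pass"
    : "exception";
}

// Document-level: the verdict alone isn't enough. The reviewer needs
// the applicable rule, the source passage, and the model's reasoning.
interface DocumentLevelFinding {
  rule: string;        // e.g. "UCP 600 Art. 20"
  sourceText: string;  // the passage that was evaluated
  reasoning: string;   // why it was flagged or passed
  outcome: "pass" | "exception";
}
```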

"I know within seconds if a number is wrong. What takes time is reading a clause and deciding if it actually satisfies the requirement."

Solution

Defining the check model with ML

I worked with the ML engineering team to define what a "check" is in the system and map the resolution flow every check follows.

Check pipeline: document classification (classification model) → data extraction (OCR + field extraction) → DC and UCP 600 checks (rule engine + NLP compliance). A passing check is logged in the audit trail automatically; a flagged exception goes to human review and resolves to discrepant, disregard, or query agent.

Every check has a status, a source (DC or UCP 600), a type (field-level or document-level), and three possible resolution actions. Passing checks are logged automatically. Exceptions carry enough context for a confident human decision — or escalation via query agent when clarification is needed from the presenting bank.
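
A sketch of that model as it might look in code; the names are my assumptions, not Raft's schema:

```typescript
type CheckSource = "DC" | "UCP600";
type CheckType = "field" | "document";
type CheckStatus = "passed" | "exception" | "resolved";
type ResolutionAction = "discrepant" | "disregard" | "query-agent";

interface Check {
  id: string;
  source: CheckSource;           // which rule set produced the check
  type: CheckType;               // deterministic field check vs interpretive document check
  status: CheckStatus;
  rule: string;                  // e.g. "UCP 600 Art. 18" or a DC term reference
  finding?: string;              // description surfaced to the reviewer
  resolution?: ResolutionAction; // set by the reviewer for exceptions
}
```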

Before checking begins

A DC may be amended multiple times before documents are presented. The AI merges original terms with all amendments into a single consolidated view. Documents are classified by type, grouped into the presentation structure, and checked for completeness.
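
Conceptually, the consolidation is a left-to-right merge: each amendment overrides the fields it touches, and untouched fields keep their original terms. A minimal sketch, assuming terms are keyed by field name:

```typescript
type DcTerms = Record<string, string>;

// Apply amendments in the order they were issued; the latest value
// for any field wins.
function consolidateTerms(original: DcTerms, amendments: DcTerms[]): DcTerms {
  return amendments.reduce(
    (terms, amendment) => ({ ...terms, ...amendment }),
    { ...original }
  );
}

// e.g. an amendment extending the latest shipment date:
consolidateTerms(
  { amount: "USD 100,000", latestShipmentDate: "2026-03-01" },
  [{ latestShipmentDate: "2026-03-15" }]
);
// → { amount: "USD 100,000", latestShipmentDate: "2026-03-15" }
```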

Check results by document

The AI runs DC checks and UCP 600 checks per document. Results are organized by document, each showing check completion progress.

List of presented documents showing per-document check completion progress
Document list with per-document check completion progress indicators.

The Checks Assistant panel provides the detailed view. Exceptions are grouped by document with status, relevant rule, and finding description. Reviewers can filter by status, search by field or criteria, and switch between the Exceptions and Passed Checks tabs.

Checks Assistant panel showing exceptions grouped by document with status annotations
Checks Assistant showing exceptions grouped by document, with status indicators for each finding.
Checks Assistant states: filter dropdown, empty state when no exceptions found, and passed checks view
Checks Assistant states: status filters, empty state when all exceptions are resolved, and the Passed Checks view.

Extraction and checks

The checking interface is divided into three panels: the Checks Assistant (exception details and actions), extracted document data (structured fields with issues highlighted inline), and the document preview (original source for visual verification).

Full checking interface showing exception detail, extracted document fields, and document preview
Three-panel layout: Checks Assistant, extracted data with inline highlights, and the original document.

Each extracted field is interactive. Clicking any field surfaces two actions: View in document highlights the extracted value in the original source, confirming exactly where the data came from. View checks filters the Checks Assistant to only the rules evaluated against that field.

Field-level interaction showing check count, view checks, and view in document options
Clicking a field reveals its check count and two actions: view checks or locate the value in the source document.
Selecting a field to view its source location in the document or filter to its associated checks.

An "Exceptions only" toggle filters the extracted data to just the fields that flagged exceptions, letting the reviewer focus on what needs attention.

Toggling 'Exceptions only' to filter down to flagged fields.
Checking interface with Exceptions only toggle active
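
Both the per-field "View checks" action and the "Exceptions only" toggle reduce to filters over the same field-to-checks mapping. A minimal sketch, with assumed shapes:

```typescript
interface ExtractedField {
  fieldId: string;
  checks: { status: "passed" | "exception" }[];
}

// "View checks": show only the checks evaluated against one field.
const checksFor = (fields: ExtractedField[], fieldId: string) =>
  fields.find((f) => f.fieldId === fieldId)?.checks ?? [];

// "Exceptions only": keep fields with at least one flagged check.
const exceptionsOnly = (fields: ExtractedField[]) =>
  fields.filter((f) => f.checks.some((c) => c.status === "exception"));
```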

Exception types

Field-level exceptions are usually fast — extracted and expected values side by side. Document-level exceptions require more context: source text, the applicable rule, and the AI's reasoning.

Exception types: missing documents, originals and copies mismatch, value consistency mismatch
DC check showing field mismatch and unmapped check requiring a query agent

Exception review

Each flagged check gives the reviewer three options: mark as discrepant (confirm the finding), disregard (override when the exception is not material or the AI's interpretation was wrong), or escalate via query agent (request clarification from the presenting bank before a determination is made). Both disregard rationale and query context are logged in the audit trail.
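
A sketch of how a resolution might be recorded, under the assumption stated above that every action, including overrides, lands in the audit trail:

```typescript
type Resolution =
  | { action: "discrepant" }                         // confirm the finding
  | { action: "disregard"; rationale: string }       // override, with reason
  | { action: "query-agent"; queryContext: string }; // ask the presenting bank

// Minimal in-memory audit trail, for illustration only.
const auditTrail: Array<{ checkId: string; at: string } & Resolution> = [];

function resolveException(checkId: string, resolution: Resolution): void {
  // Every resolution is timestamped; disregard rationale and query
  // context travel with the entry.
  auditTrail.push({ checkId, at: new Date().toISOString(), ...resolution });
}
```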

Exception detail showing originals and copies mismatch with mark discrepant and disregard actions
Exception detail: the finding, the requirement, and three resolution paths.
Exception review flow with mark discrepant and disregard actions

Reviewers can also correct extracted values directly when the AI misreads a field, confirming the update before re-running the associated checks.

Reviewing and resolving a document-level exception with AI reasoning and source text.

After checking

Once all exceptions are resolved, the checker prepares the presentation summary: presentation details and determination, a discrepancy log with the rationale for each finding, and an entity log cross-checking all parties. The AI pre-fills each section from the completed check results, but the checker owns the final version.
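
The pre-fill follows directly from the resolved checks. The discrepancy log, for instance, is in essence a projection of every check marked discrepant; a sketch, assuming the check model from earlier:

```typescript
interface Check {
  rule: string;
  finding?: string;
  resolution?: "discrepant" | "disregard" | "query-agent";
}

// Derive the discrepancy log from resolved checks; the checker can
// still edit every entry before the summary is finalized.
const discrepancyLog = (checks: Check[]) =>
  checks
    .filter((c) => c.resolution === "discrepant")
    .map((c) => ({ rule: c.rule, finding: c.finding ?? "" }));
```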

Impact

For operations
  • DC checking time reduced by ~65%
  • Consistent application of UCP 600 rules across all checkers
  • Full audit trail for every transaction and decision
For the product
  • Adopted by both operations teams and senior specialists
  • Reduced training time for new checkers
  • Established a reusable pattern for AI-assisted review

What I Learned

Design for the seam between AI and human judgment. The hardest part was designing the moment where the AI says "I found something" and the human decides what to do. That transition needs enough context for a confident decision without overwhelming the reviewer.

Field-level and document-level are different design problems. A mismatched date and an ambiguous compliance clause require fundamentally different amounts of context and interaction patterns.

Transparency builds adoption. Showing what the AI extracted and how it reached its conclusions convinced experienced trade finance professionals to trust the system. Experts don't accept a black box.