Artificial intelligence has moved from experimentation to real deployment in trade credit insurance (TCI) and surety. Underwriting, portfolio monitoring, fraud detection, and even claims are all becoming – or in some companies already are – AI-enabled processes.
A recent survey of ICISA members conducted during the summer of 2025 showed that 41% of TCI respondents and 50% of Surety respondents currently use AI in different aspects of their business. Risk underwriting was by far the most significant area of use for AI among respondents, reflecting the impact that AI can have on analysing large volumes of complex information.
Even more strikingly, 94% of TCI respondents and 92% of Surety respondents expect to see an increase in the use of AI in the next 12 months, highlighting the pace of investment and development in these systems. For a sector handling vast amounts of data, the efficiency gains are real — but so are the governance responsibilities.
ICISA recently reported on the development of EIOPA’s Opinion on AI Governance and Risk Management. This sets out the expectations regulators will consider when looking at the implementation and use of AI by insurers in Europe. Similar frameworks are also being developed elsewhere around the world and will follow broadly the same principles.
EIOPA’s opinion is a useful reference point for framing what supervisors expect today and into the future: robust risk assessment, fairness, data quality, explainability, and human oversight. But for TCI and surety, simply “checking the compliance box” is not enough. Governance cannot just be a list that is checked now and then and shoved in a drawer until the next review – it must shape how AI is built, deployed, and monitored in practice throughout the lifecycle of AI use cases.
Make Risk Assessment Commercially Relevant
EIOPA advocates a proportionate, risk-based approach. For TCI and surety, this means looking beyond model accuracy alone. Underwriters and risk officers should ask:
- Could this model amplify volatility in claims?
- Could it systematically disadvantage small suppliers or emerging-market buyers?
- Can we identify and correct AI-driven false positives (declining good risks) and false negatives (underwriting poor risks) before they impact performance?
Embedding these questions early ensures that AI supports operational efficiency and enhanced quality, rather than magnifying errors — a concern supervisors increasingly share.
Treat Data Governance as a Competitive Advantage
AI models (like all models) are only as good as their inputs. For TCI and surety, that data often spans multiple internal and external sources. Insurers must ensure not only that they apply strong governance and ethical frameworks themselves, but that the third parties they work with uphold the same high standards. The challenge for data is not just quality, but bias: missing data on specific segments, geographic gaps, or over-reliance on negative signals that can distort risk views.
A governance framework that systematically validates data sources, addresses bias, and documents origins will not just meet regulatory expectations – it will differentiate insurers that can underwrite with confidence in emerging or information-poor markets where AI has the potential to create entirely new opportunities for insurers.
Build Explainability Into the Customer Journey
While TCI and surety are B2B products, there can still be vulnerable customers involved – small exporters or contractors, for example – for whom an opaque decision-making process can be difficult to understand and respond to. Importantly for TCI and Surety compared to other traditional insurance lines, the impact of our sector also stretches beyond the policyholder to buyers and obligors. Explainability should therefore be part of the product design where AI is utilised. This should entail:
- Clear processes for decision-making, including timeframe and outcomes;
- Evidence of human oversight and intervention opportunities where decisions are challenged;
- Communication that demystifies AI decisions rather than hiding behind a black box – “Computer says no”.
This transparency is not just good governance; it strengthens trust in the insurer-policyholder relationship. And all insurers in the sector know that retention of customers is essential to long-term success.
Close the Loop With Monitoring and Human Oversight
AI models will drift, data will evolve, and markets will change. Governance must include continuous monitoring – not just of technical performance, but of market outcomes. Periodic human review should be designed in, not bolted on, with clear triggers for intervention when model outputs deviate from expectations.
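To make "clear triggers for intervention" concrete, one common approach is to compare the distribution of current model scores against a baseline and escalate for human review when drift exceeds an agreed threshold. The sketch below uses the Population Stability Index (PSI); the bin count and the 0.2 threshold are illustrative assumptions, not prescribed values, and a real monitoring framework would track business outcomes alongside technical metrics.

```python
# Illustrative drift trigger for model monitoring (a sketch, not a
# production framework). Scores are assumed to lie in [0, 1]; the
# bins and threshold below are illustrative assumptions.
import math

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions using fixed-width bins over [0, 1]."""
    def bucket_shares(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)  # clamp s == 1.0 into last bin
            counts[idx] += 1
        # Small floor avoids log/division issues for empty buckets
        return [max(c / len(scores), 1e-6) for c in counts]

    p, q = bucket_shares(baseline), bucket_shares(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def needs_human_review(baseline, current, threshold=0.2):
    """Escalate when score drift exceeds the agreed intervention threshold."""
    return population_stability_index(baseline, current) > threshold
```

An unchanged score distribution yields a PSI near zero, while a concentrated shift in scores produces a large value and trips the review trigger — the point being that the escalation rule is explicit and auditable, not left to ad hoc judgment.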
This is where regulatory guidance on accountability becomes practical: appointing clear owners for AI risk, integrating compliance and audit functions into model lifecycle management, and giving risk committees regular visibility on AI’s contribution to underwriting and claims performance.
From Compliance Burden to Strategic Asset
Done well, AI governance can enhance operational resilience, sharpen underwriting discipline, and open doors to new business segments. It also positions insurers as credible partners in the policy conversation – showing that innovation and consumer protection are not in tension, but aligned.
The TCI and Surety sector can use regulatory frameworks as a map to success in AI deployment, rather than a compliance hurdle to clear before getting on with the real work of innovation. Build governance frameworks that go deeper than compliance: make risk assessment scenario-based, treat data quality as a strategic priority, invest in explainability, and hard-wire oversight into decision-making. Those who do will not just satisfy regulators — they will create new opportunities for themselves and make the most of the potential gains AI may bring.