27 November 2025
Safe by Design
Large language models are brilliant, they really can be effective, but they do need to be managed in a certain way. You have to put safeguards around them.
Insurance is not known for moving fast on new technology. For Richard Jelbert, that is not a criticism, it is reality. When the product is a promise and the currency is trust, being cautious is rational.
Richard has spent more than 25 years working at the intersection of technology, data and insurance. He has founded and sold businesses in telematics, helped build digital motor products at Zixty and now leads Cyberrock, a cyber security insurtech focused on SMEs. He also sits on Cluda’s advisory board, bringing together his experience in engineering, broking and cyber risk.
That safety first mindset runs through the way he thinks about cyber, AI and the future of broking.
From black boxes in cars to cyber “black boxes” for SMEs
Long before Cyberrock, Richard was working with black boxes in a very different context. In earlier ventures he helped build telematics solutions that used smartphones in cars to capture driving behaviour and translate it into risk information for insurers.
That experience taught him an important lesson. If you design a system purely for the benefit of the underwriter and make it feel punitive, people disengage. Drivers become suspicious, feel watched and start to resent the product.
With Cyberrock he flipped that logic. The “black box” sits inside the small business network and constantly monitors devices, external exposure and activity. It is there first and foremost to protect the SME. The cyber score that comes out of it, a live view of how safe the business is, belongs to the client. They control who sees it and when it is shared with a cyber insurer.
That architecture matters. It allows brokers and insurers to gain the underwriting benefits of live cyber data, while keeping control and transparency with the SME. It is also a pragmatic response to a real threat, because attackers are already using AI to accelerate cyber attacks on small businesses, including brokers, and the traditional once a year questionnaire is not designed for that world.
What boards really worry about with LLMs
When the conversation moves to AI, Richard is very clear that investors and boards are asking detailed questions.
They want to know about accuracy. How often does the model get things wrong, and what happens when it does. They ask about governance. How is the AI tested, what standards does it adhere to, and who is accountable for its behaviour over time. They dig into security and privacy. Where does the data go, does it include any personal information and how is that data stored and encrypted.
Regulators and standards bodies are now moving into this space as well. Work by the FCA and emerging standards such as ISO 42001 signal a future in which AI governance will look a lot more like information security does today.
From his perspective, this all adds up to a new specialist skill set. Organisations using large language models have to think carefully about architecture, encryption and testing. At Cyberrock he talks about double or triple encrypting sensitive data such as the weaknesses of a client’s network. At Cluda he highlights the multiple layers of protective technology and testing that sit around the models so that output stays consistent and safe.
“It takes a lot of effort,” he says. “But that is what you have to do to use these tools properly.”
Real risks, not science fiction
Having used large language models extensively, Richard is pragmatic about where the real risks lie.
The first is the one most people have now heard of: hallucinations and errors. In his view, this is less about some uncontrollable danger and more about how the technology is used. The quality of the response depends heavily on the quality of the prompt and on the information that is passed into the model. If you ask a vague question and rely on generic training data, you are far more likely to get an answer that does not reflect the specific documents or facts you care about.
That is why he emphasises prompting as a discipline. In practice, the prompt is not just a question, it is a paragraph of instructions and context. Teams like Cluda’s put real effort into structuring those prompts and deciding what information to send to the model, so that brokers get answers grounded in their own documents rather than internet noise.
The second risk he highlights is less well known, but growing fast in importance: prompt injection and prompt poisoning. Large language models take in text from multiple sources, from chat to uploaded files. Attackers can hide malicious instructions inside PDFs, text files or even images, with the aim of overriding the original prompt and changing how the model behaves.
“It is a real vector,” he explains. “You can embed prompt poisoning in all sorts of places, and it will be a common way of interfering with businesses that are powered by large language models.”
The response, again, is architectural. Filters scan the text from chats and documents before it ever reaches the model, stripping out anything that looks like an attempt to subvert instructions. For Richard, this sort of pre processing and testing is non negotiable if you want to use LLMs safely at scale.
Practical AI for the broking life cycle
Despite the focus on risk, Richard is clearly excited about what AI can do when it is used properly.
He sees large language models as particularly powerful for cleaning up messy data and enabling more natural, conversational interfaces. Because they can work with unstructured text at scale, they open up new possibilities across the entire insurance life cycle. Inquiries, quote production, the buy journey, mid term adjustments, claims and renewals all contain repetitive, document heavy work that can be made faster and more consistent with the right AI workflows. Pricing and fraud detection are also likely to be reshaped as models are used to analyse larger datasets in closer to real time.
That is one reason Cluda stood out to him. What he liked was not just the technology, but the way it was being applied: practical tools designed by people from the broking world, aimed squarely at day to day tasks.
“These are time consuming tasks,” he says, talking about the daily reality of a broker’s inbox and workload. “What I really liked about Cluda was the way they were going to come up with practical solutions to help solve those day to day problems and have a real impact very quickly. You can use it straight away on top of the system you are already using.”
He also sees Cluda as a particularly good use of large language model technology because the tools sit alongside humans and are tightly constrained by directed documents and instructions. That combination of human oversight and focused input is exactly how he believes LLMs should be used in high trust environments.
“Engage now, or risk being left behind”
For brokers, the obvious question is what to do next. If you are worried about security, regulation and client trust but can see the potential of AI, where do you start.
Richard’s advice begins at a very human level. Get your own account with a tool like ChatGPT or Claude. Give it documents, ask it questions and watch how it behaves. Most people, he argues, quickly get a feel for the strengths and limitations of the technology. That basic familiarity makes it much easier to ask the right questions when vendors come knocking.
Beyond that, he believes brokers have two responsibilities. One is to create pull with their core platform providers, explicitly asking for AI driven efficiency tools so that those projects are prioritised. The other is to look beyond their main vendors and engage with specialist providers like Cluda that have already done the hard work of deploying LLMs safely.
“You need to engage, take demos, ask lots of questions,” he says. Data governance, GDPR compliance and emerging standards should be on the list when you evaluate any AI product, and the answers should be clear and specific.
He finishes with a simple warning. Automation and AI based processes are coming into the market either way. Whatever you end up doing, you will be faced with using AI. If you do not learn to use it, you will be left behind. The safest place to be is not on the sidelines, it is hands on, working with partners who put safeguards first and who design AI that is, as he puts it, “broker safe and broker secure”.
Editor’s Note:
This interview is part of Cluda’s Thought Leadership Series, capturing perspectives from leaders across the insurance and broking industry on the future of AI, trust, and technology in broking.

