
Why trust is the real test of AI and how in-house lawyers can build it

AI depends on trust. Based on a panel discussion at our annual GC event, S&B Activate 2025, this article provides ideas for how legal teams can build all-important trust through strong governance, ethical decision-making, data discipline and clearer supplier controls.

AI is moving faster than the systems built to govern it. At S&B Activate 2025, attendees named unknown risks and unregulated use of AI across organisations as the issues causing most concern about AI adoption.

As the gap between fast-moving tools and slower-moving regulation grows, trust becomes key. As Charlie Maurice, Partner and Head of Technology at Stevens & Bolton, put it: “Building greater trust in AI is becoming essential, and legal teams are central to shaping the structures that enable it.”

Build governance that earns trust

Most organisations are using AI in isolated pockets, but far fewer have governance structures that keep pace with the rapid development of these tools. The result is a widening gap between adoption and oversight.

Speakers agreed that policy alone cannot close that gap. Governance needs to be practical, visible, and part of everyday behaviour. This means knowing which tools are being used, who uses them and for what purpose, and ensuring decisions about deployment don’t sit solely with IT.

Key takeaways:

  • Build governance around real use, not assumptions. Map where AI tools sit in the business and who is using them, so controls reflect actual activity.
  • Embed oversight into day-to-day workflows. Make sure decisions about design, deployment and monitoring are made by a coalition of experts, including IT, legal and commercial roles.
  • Create governance that people can apply easily. Keep frameworks practical so teams know exactly how to follow them.

Build trust through ethical decision-making

Ethics also came through strongly in the panel conversation as a building block of trust. As AI outpaces regulation, ethical standards can fill the gap to guide decision-making—from procurement to product design. 

Imogen Armstrong, Chief Legal Officer at Stelia, put it simply: “The law is still playing catch-up, so ethics have to take the lead.”

Speakers described cross-functional AI committees as the architects of these standards, bringing together leaders from legal, tech, data and ethics to reinforce transparency, oversee responsible use, challenge decisions and pause deployments.

Key takeaways:

  • Create ethical standards to fill gaps where regulation lags. Use shared principles to guide decisions long before external rules catch up.
  • Bring cross-functional expertise together early. Involve legal, tech, data and ethics specialists when assessing new AI use cases.
  • Keep human judgement at the centre. Ensure people can pause, question and override AI where decisions carry sensitive or high-impact outcomes.

Treat data control as the foundation of trusted AI

The panel agreed that most AI risks are ultimately data risks. If data is inaccurate, biased or obtained without proper permissions, the output will reflect those flaws. Unsurprisingly, strengthening AI governance topped the audience's data compliance priorities for 2026.

With reforms under the Data (Use and Access) Act 2025 and updates to the UK GDPR, the pressure to understand data flows is increasing.

Legal teams were encouraged to get involved early, particularly when organisations are sourcing training data, selecting vendors or procuring third party tools. Early engagement helps prevent issues that are much harder to unwind once a model is deployed.

Key takeaways:

  • Strengthen data mapping. Understand what data enters each AI system and where it goes.
  • Ensure training data meets the required standards. Confirm it is lawfully obtained, bias-checked and capable of being audited if challenged. No black boxes.
  • Update DPIAs to reflect AI-specific risks. Capture issues such as model behaviour, data provenance and new forms of automated decision-making.

Build trust into your supplier relationships

Most organisations use third party AI tools, which creates a structural risk management challenge—being accountable for practices you don’t fully control. 

Imogen Armstrong captured the point: “When you’re buying AI from someone else, you’re still responsible for the risks. You need to test the supplier properly, not just take their word for it.”

If you rely on external AI tools, your contract sets expectations and limits exposure—what the supplier must disclose, how the model can be used and what happens if it behaves unpredictably.

Many organisations are moving beyond single AI clauses and adopting AI addenda that set out vendor obligations. These cover data rights, transparency, audit access and responsibility for errors or model drift.

Key takeaways:

  • Set firm expectations on data access, ownership and transparency. Make sure suppliers explain what data they use, how models are trained and what you can see if something goes wrong.
  • Build in audit and oversight rights. Ensure you can review inputs, outputs, or system behaviour, especially when decisions affect customers, pricing or HR.
  • Allocate liability for AI-specific risks. Benchmark liability caps and obligations against risks such as bias, model drift, inaccurate outputs and improper data use.

Give people the confidence to trust their own AI use

The panel noted that most AI risks arise not from the technology but from uncertainty about how to use it. 

Some teams experiment without understanding the limits, while others avoid AI altogether because they’re unsure of the rules. Safe adoption relies on a culture where people understand the basics, know when to pause and ask questions, and feel confident using AI within clear boundaries.

As Sarah Batterbury, General Counsel at Unily, put it, “If people don’t know the rules, they either take too much risk or no risk at all.” The goal is to create a middle ground where AI is used thoughtfully, transparently and with the proper checks in place.

Key takeaways:

  • Provide training built around real scenarios, not abstract principles. Focus on how teams actually use AI today and where the risks sit, such as drafting, analysis, and decision support.
  • Give managers clear and simple escalation routes. Make sure people know when and how to involve legal or compliance early.
  • Encourage collaboration between legal, IT, data and product teams. Shared understanding builds consistency, reduces missteps and improves trust in AI tools.

Trust is earned through structure

AI will continue to evolve quickly, whether businesses feel ready or not. But trust does not come from regulation. It comes from structure, oversight and clear boundaries that allow innovation to happen safely.

For in-house lawyers, this is a moment of influence. The organisations that thrive will be the ones that build frameworks that earn trust, protect people and give teams the confidence to innovate.

Legal advice you can act on

For support developing your AI governance framework or updating your contracts, policies or procurement processes, please get in touch with your S&B contact or a member of our Technology and Data team.
