Beyond the Hype: Christopher de Waas on AI in Legal Ops
- Cosmonauts Team

- Sep 3, 2025
- 6 min read

In the second feature of our Legal Innovators UK 6.0 interview series, we sit down with Christopher de Waas, Manager, Digital Transformation at Rio Tinto, to unpack the realities of AI in legal operations.
Christopher is known for his practical, clear-eyed approach, cutting through the hype to focus on what truly drives change. From exploring whether AI is delivering real transformation, to how legal education must evolve, to the career paths of the next generation, his insights spotlight the cultural, political, and operational work that often matters more than the tech itself.
Over the past few years, there’s been a lot of talk about AI “liberating” in-house legal teams. From your perspective, are we seeing real transformation, or is it still mostly hype?
There’s been extraordinary hype about what AI can do for legal teams. AI systems are powerful, but they don’t do everything, and they don’t embed themselves. Implementation requires unglamorous work: onboarding, integration into workflows, and change management.
For in-house teams, the challenges are compounded by their position inside large enterprises. Legal departments are usually small relative to the business, so their ability to influence technology adoption is limited. IT functions tend to prioritise standardisation and scale, which doesn’t always align neatly with the bespoke needs of legal. Add to that the fact that many legal processes are entangled with business processes outside the team’s direct control, and the road to ‘liberation’ is anything but straightforward.
That said, I think we’ve reached a turning point. People are now accepting that this technology isn’t going away, and legal teams are engaging more seriously. The hype distorted expectations, but we’re now seeing genuine transformation emerge—slowly, unevenly, often through non-technical work like workflow redesign or organisational buy-in. Transformation here is as much cultural and political as it is technological.
So I’d say we are only just moving beyond the hype. The path forward is not simple, but for those willing to grapple with the realities, AI is finally beginning to make a tangible, if uneven, difference.
Are there areas of practice you believe AI should never touch? Why?
I struggle to identify areas that should be completely off-limits to AI. The reality is that AI is being embedded into every industry and discipline, and legal cannot realistically cordon itself off as an exception.
The key principle, in my view, is not about where AI is used, but how. At least for the foreseeable future, there should always be a human in the loop: someone scrutinising outputs, applying judgement, and taking ownership of decisions. That is the safeguard. Within that frame, the question becomes: why shouldn’t we use AI wherever it can reduce costs, improve quality, expand access to legal services, or deliver faster, more consistent outcomes in line with business needs and risk appetites?
What matters is discernment and accountability. Just as lawyers learned when and how the internet is and isn’t useful for research, so too must they learn when AI adds value, and when it doesn’t. That familiarity and fluency are the skills that matter, not blanket prohibitions.
There will, of course, be serious debates about where AI should not replace human judgement—judicial decision-making being one obvious example. But that is different from saying AI should “never touch” those areas. Even there, AI can play a role in research, drafting, or analysis that supports the human decision-maker.
The commercial reality is also unavoidable: the business world will increasingly demand that AI be applied to legal work—whether for cost reasons, speed, or access to expertise. The legal industry can either get ahead of that curve and use AI, or find the path ahead being dictated to it. Personally, I would far rather we take the proactive path.
Do you think AI literacy should become part of legal education and bar requirements?
I find the phrase “AI literacy” slightly misleading. What we should be talking about is digital fluency—the same skills lawyers already use with Outlook, Excel, or the internet—plus the ability to connect dots: learning agility, systems thinking, stakeholder skills, basic comfort with data, and applied critical thinking.
The new wave of AI tools is being commercialised and commoditised in ways that make them deliberately easy to use. You don’t need to understand the mechanics of machine learning to benefit from them. You just need the confidence and familiarity to use them well.
At Rio Tinto, we don’t focus on “AI literacy”, but on fluency and familiarity. That means giving lawyers hands-on exposure, encouraging them to practise, and helping them recognise where AI adds value and where it doesn’t. It’s about knowing when to dip into these tools naturally as part of everyday work, not sitting on the sidelines wondering if they “should” or second-guessing the value.
I worry that formalising “AI literacy” into education or bar requirements risks creating a mini-industry that overcomplicates the obvious. It’s reminiscent of the fad of lawyers needing to learn to code. If legal education changes, it shouldn’t be an “AI module”—it should rebalance towards those dot-connecting skills listed above. And the best way to build fluency is the simplest: put the tools in people’s hands early and let them develop confidence. That’s how young children are already doing it, without a curriculum.
So no, I don’t think AI literacy needs to be a formal requirement, any more than students must be taught “internet literacy”. But yes, every lawyer will need to get comfortable enough to use these systems as naturally as they use email.
How do you see AI influencing the career path of the next generation of in-house lawyers?
The influence will undoubtedly be significant and unpredictable. We know that AI is already impacting work at every level of the legal value chain. At the junior end, AI is automating work that once defined early years of practice: research, first-cut drafting, data gathering. At the mid-tier, it is synthesising and analysing information. At the senior level, it is increasingly capable of producing recommendations informed not just by departmental context, but by data across the wider enterprise.
The challenge is obvious: how do juniors learn when the work that once trained them is being handled by machines? How do organisations redesign workflows so that junior talent is still applied meaningfully? And how do we ensure there are still rungs up the career ladder if AI is encroaching at every point?
The opportunity, though, is equally real. If we re-engineer our processes, junior lawyers could find themselves engaged in more substantive work earlier in their careers. AI can strip away routine work, allowing them to focus on higher-value skills such as judgement, communication, and relationship-building. But this will only happen if organisations are intentional about it—otherwise, there’s a real risk of hollowing out the profession’s training ground.
So my answer is cautious. AI will profoundly shape career paths. Whether it accelerates development or stunts it will depend less on the technology itself, and more on how legal work is redesigned.
What advice would you give to someone who’s just starting to explore AI for their team?
My advice is simple: don’t overcomplicate it—just get on with it. At Rio Tinto, we’ve been running a global programme across several corporate functions, spanning multiple geographies. The single most effective step we took was to put ChatGPT Enterprise directly into people’s hands and give them permission to explore. No heavy rulebooks or rigid frameworks—just clear, minimal guardrails, and breathing room.
First, this approach has generated use cases organically. Rather than trying to predict how AI would apply, the best ideas have come from lawyers experimenting, discovering ‘Aha!’ moments, and sharing them.
Second, it has built fluency. People are learning how to use AI, but they’re also learning when not to use it. That discernment only comes from first-hand experience. We’ve seen colleagues grow confident in recognising the limits of the technology and in trusting their own judgement when it matters most. That fluency also sets our people up to navigate change as the technology continues to mature.
A third benefit is cultural. Some colleagues were sceptical. By dipping their toes in slowly, they have convinced themselves of AI’s usefulness, which has been far more effective than trying to convince them centrally. This has all helped us to make AI not just a tool, but a shared journey of exploration and team building.
So my advice is: create the conditions for exploration. Give people agency. Let them define how AI fits their work, rather than dictating it ‘formally’. As the technology and industry evolve at pace, your people won’t feel subject to change, but will help shape it. That’s the most reliable path to sustainable transformation.
Join the Conversation
Christopher de Waas will expand on these insights at Legal Innovators UK 6.0 this November, joining a global line-up of legal leaders shaping the future of AI and legal operations. On In-House Day, he’ll join the panel “From Talk to Transformation: AI and In-House Empowerment” - exploring how corporate legal teams can move beyond hype to deliver sustainable change and tangible client value with AI. Passes are live — will you join Christopher and other innovators this November?