On this week's episode of Yahoo Finance Future Focus, host Brian McGleenon speaks with Lord Chris Holmes, a prominent champion of the ethical use of technology, about his proposed Artificial Intelligence Regulation Bill. Lord Holmes highlighted the urgent need for regulatory action to reduce the significant risks posed by rapid advances in AI. Despite predictions that the UK AI market will exceed $1 trillion by 2035, without proper oversight AI could have catastrophic consequences and even threaten the survival of humanity, he warned. He emphasized the importance of promoting the development of "ethical AI" based on principles such as trust, transparency and accountability, stressed the need for public engagement, and cautioned against the competing interests of nation-states, particularly the deployment of AI on the battlefield.
video transcript
[MUSIC PLAYING]
Brian McGleenon: On this week's episode of Yahoo Finance Future Focus, we're delighted to welcome Lord Chris Holmes back to the studio. Lord Holmes has proposed the Artificial Intelligence Regulation Bill and introduced it to the UK Parliament. Lord Holmes, welcome to Yahoo Finance Future Focus.
Chris Holmes: Thank you so much for inviting me, Brian.
Brian McGleenon: Why does the UK need regulation around artificial intelligence?
Chris Holmes: In many ways, AI is already all around us. The Prime Minister hosted a hugely successful AI Safety Summit at Bletchley Park last November, but that really focused on frontier risks, the very important existential risks associated with AI. With that done, it's even more important to look at all the other elements of AI that are already impacting people's lives.
Take recruitment as an easy-to-understand example. A great deal of AI is already in use in shortlisting and candidate selection, but it is probably unknown to most, if not all, of the candidates in a given selection process.
Brian McGleenon: We just touched on existential threats, but what are we talking about in that regard?
Chris Holmes: Potentially the complete extinction of humanity, so in terms of risk it's probably at the top. And I remember when I was a member of the Lords Select Committee on AI in 2018, we put together a very well-thought-out and nuanced report on the big picture of AI, and most of the newspaper headlines said things like "Lords predict the destruction of humanity by artificial intelligence".
And there is no question that the potential deployment of AI on the battlefield is a risk that we should all be fully aware of. That's why I put ethical AI at the foundation of my bill, and indeed of the select committee report that we produced. We understand the principles that make this successful, so it makes sense to build AI ethics into any regulatory approach.
Brian McGleenon: Where does the UK currently stand in the development of AI? Is it a world leader?
Chris Holmes: The UK is in an excellent position for the development of AI across a range of sectors. We have great startups and scale-ups. But when it comes to the legislative and regulatory arena, we can and must act. We have the opportunity to do something particularly unique in the UK. Beyond everything that is incredibly important and impressive – our financial ecosystem, our geography, our time zones, our cities, our tech startups, our university sector – the greatest gift we have is English common law: the certainty and stability it provides, and the foundation it gives you to build on.
That is why it is used in contracts all over the world. Common law develops over time through case law, while also being interoperable with other regulatory approaches, such as the EU's.
Brian McGleenon: Does the legal framework – the UK's flexibility versus the EU's – give the UK an advantage in developing a regulatory framework?
Chris Holmes: It will if we choose it. A very simple example from last year: when we enacted the Electronic Trade Documents Act, it was a blockchain act, and it didn't mention blockchain at all. We set standards through it without ever mentioning a specific technology. So it is not only technology-neutral; it's also as future-proof as possible.
And an AI regulation bill could take the same approach: define the principles, the concepts, the values, the ethics that we understand and know how to deploy successfully. That's why we believe we can and must act now to legislate.
Brian McGleenon: There are existing regulatory bodies. Now, should they address this, or do we need a new single AI authority to oversee this rapidly advancing technology?
Chris Holmes: The government's position is that existing regulators such as the FCA, Ofcom and Ofgem should regulate AI in their respective sectors and verticals. My feeling is that it should go a little further than that. The first clause of my bill proposes an AI Authority that is not a giant, cumbersome, bureaucratic, ever-expanding AI regulator, but a lightweight, agile, horizontally focused one, able to look right across the landscape: to assess the existing regulators, their ability to address AI's challenges and opportunities, and where the gaps are; and to look at all relevant current law, such as consumer protection, and assess its capability to address those same challenges and opportunities. It's not light-touch regulation by any means; at its best, it's right-touch regulation.
Brian McGleenon: It is always a pleasure to speak to you, Lord Holmes. Thank you for visiting this week's Yahoo Finance Future Focus.
Chris Holmes: A pleasure. Thank you very much for the opportunity.
[MUSIC PLAYING]