Artificial Intelligence (AI) is undoubtedly in the zeitgeist. In case you’re unfamiliar, AI can be defined as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. Due to its exponential growth and vast potential, how we legislate AI usage has quickly become a key debate within the tech community. 

If you’ve found yourself listening to any AI-related debates, then the topic of regulation has likely cropped up. As Ellen Keenan-O’Malley, Senior Associate in the Commercial IP team at EIP and a key member of their Codiphy offering, illustrates, it’s an issue AI experts are grappling with daily: “I was at a recent talk about AI regulation and its evolution. One speaker highlighted that we’re at the cusp of a new civilisation; lawyers and regulators are pivotal at this time because they are creating the framework for the future.”

I was lucky enough to pop into EIP’s London offices to chat with Ellen about how we’re regulating AI – what are the key challenges? How is the UK Government approaching this? Are we striking the right balance between protection and innovative freedom? Read on for Ellen’s expert insights. 

Where do we currently stand with AI regulation? 

Ellen emphasises that AI regulation is nothing new, despite how it may be portrayed in the media. 

“The term Artificial Intelligence was first used in 1955, by John McCarthy at Dartmouth College,” she tells us. That was almost 70 years ago, so it’s safe to say people have been working in and around this tech for some time.

Importantly, laws have been crafted around technology that may not have been written specifically for AI but have certainly guided governance through its evolution. Ellen cites a case as far back as 1932, when Donoghue v Stevenson resulted in the inception of the modern law of negligence, which effectively imposed a duty of care in certain situations. Legislation like this has directed how we build and utilise AI today.

Similarly, when the General Data Protection Regulation (GDPR) came into force in 2018, Article 22 gave individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, which has had an evident impact on how AI technology is deployed. Further, Article 32 requires Data Processors and Data Controllers to implement technical and organisational measures that ensure a level of data security appropriate to the risk presented by processing personal data.
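For developers in the audience, here is a minimal sketch (hypothetical names throughout, and certainly not legal advice) of the kind of safeguard Article 22 points towards: any solely automated decision with legal or similarly significant effects is routed to a human reviewer before it takes effect.

```python
from dataclasses import dataclass

@dataclass
class LoanDecision:
    applicant_id: str
    score: float            # hypothetical model output, e.g. probability of default
    approved: bool
    solely_automated: bool  # True if no human was involved in the decision

def request_human_review(decision: LoanDecision) -> LoanDecision:
    # Hypothetical placeholder: a real system would enqueue the case for a
    # credit officer and log the escalation for audit purposes.
    print(f"Escalating applicant {decision.applicant_id} for human review")
    decision.solely_automated = False
    return decision

def finalise_decision(decision: LoanDecision, significant_effect: bool = True) -> LoanDecision:
    # Article 22-style safeguard: a decision with legal or similarly
    # significant effects should not rest on automated processing alone,
    # so escalate it to a human rather than letting it stand.
    if decision.solely_automated and significant_effect:
        return request_human_review(decision)
    return decision

if __name__ == "__main__":
    decision = LoanDecision("A-123", score=0.81, approved=False, solely_automated=True)
    finalise_decision(decision)
```

How such a check is actually implemented will vary by sector and system; the point is simply that “technical and organisational measures” often end up as concrete branches in code like this.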

“Right now, it’s a question of whether we need specific laws and regulations to govern AI in particular, which is what the EU AI Act is specifically aimed at. In this instance, it’s taken a risk-based approach and isn’t sector-specific,” explains Ellen. “There are certain inclusions that provide a very specific AI framework around rules such as transparency.

“On the flip side, the UK Government’s view is that each sector has its own risks, and therefore the existing regulators for that industry should be what’s governing it.” Meanwhile, the Online Safety Act recently came into force to bolster existing UK laws relating to consumer protection.

So, whilst AI regulation is nothing new, there is going to be a rise in the number of lawyers specialising in the sector, and we’re set to see constant refinement across all industries as the tech continues to evolve. 

Regulating AI through ethics 

How can we create a framework for regulating AI conscientiously? Ellen suggests that ethics will be the driving force in how we establish new laws; indeed, ethical risk was something the UK Government highlighted as needing protection during its AI Safety Summit in November 2023. In light of 2023’s Writers’ Strike, it’s evident that people are taking the potential implications of AI seriously, which will inform how legislation takes shape.

Ellen elaborates, “We need to think, what are the risks we are worried about? And what is the purpose of the law that we’re trying to address? We can expect to see more protests around the protection of IP crop up as more concerns emerge.”

“If people had a better understanding of the positives, they will be less afraid of it and therefore embrace it”

Taking a step back to thoroughly assess the current situation is key to making the most considered decisions. The impact is far-reaching, so the importance of taking the best approach to protect people, whilst not prohibiting innovation, can’t be overstated.

“One of the key drivers behind the Online Safety Act was the ethical concerns around the rise of chatbots and social media platforms concerning the influence they have on children. So we set about defining what measures we can put in place to protect them without stifling the innovation itself.

“It’s all about striking a balance between ethics and ensuring that you’ve got an adequate regulation to protect our society, but recognising that these are really useful tools to build and utilise,” summarises Ellen.

Are there gaps in how we're regulating AI? 

Ellen tells us she doesn’t believe there are fundamental gaps in existing regulations, but that we need to assess whether the current laws are fully fit for purpose.

“One of the key things I find most fascinating to look at as an IP lawyer is copyright infringement,” outlines Ellen. “It’s to see whether our existing IP laws are adequate to balance innovation in AI versus the IP rights holders.

“You’ve got the Patents Act, amongst other laws, but they weren’t drafted to be thinking about tech such as generative AI. I suppose a potential gap could be in the sense of how it’s applied and how it’s going to be interpreted. Could someone argue there are grey areas, we fall on this side, and therefore we’re not infringing? There may be instances where bad actors can manipulate the current laws, so we need to look at what defences they would rely on in these cases.”

“The UK Government has been very vocal in saying we don’t fully understand it, therefore, we don’t think it’s appropriate to draft a very top-down piece of legislation”

However, the biggest challenge will likely be around legislation keeping up with the pace at which technologies like AI are developing. Ellen highlights that drafting a regulation that “1) deals with the current issues we’re grappling with and 2) is drafted generally enough that it future-proofs us” is risky.

“It’s been quite interesting to read the commentary since the EU AI Act was approved last week. One of the things they said was, we just needed something to get out.” The underlying theme was that the EU felt it needed to be first to market, but there is a big question mark over whether the Act is actually fit to regulate AI (a point made by Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament, in his personal reflection piece on the EU AI Act, February 15, 2024).

And we’re already seeing this challenge play out in real time: various versions of the EU AI Act were drafted as new problems came to light. Ellen explains that the original version didn’t address IP, and the text had to be revised as a huge spike in potential IP infringement cases emerged, aptly illustrating this point.

“The UK Government has been very vocal in saying we don’t fully understand it, therefore, we don’t think it’s appropriate to draft a very top-down piece of legislation that tries to address these issues,” she adds. “I am quite supportive of the UK’s approach in trying to be sector-specific.” Allowing the people who have a direct understanding of the risks in a particular sector to determine the measures that should be in place should mitigate the chances of key considerations being left out, or of over-regulation.

Underregulation vs overregulation

In a hypothetical world, if we left AI unregulated, Ellen says her biggest concern would simply be around the unknown: “We’ve already seen the advancement of deepfakes, and unfortunately we do live in a society with bad actors; without regulation, there would be bigger scares that we can’t even imagine right now.”

It goes without saying that we must have a framework in place to deter the use of AI with bad intentions, and Ellen argues that these legal guidelines may even facilitate more innovation: it’s crucial that AI tools are built in a safe and secure way.

AI has unearthed huge possibilities for us to develop as a society. “We’ve seen the advancements in science and in cancer diagnosis and treatment from what technology can do that maybe a human could never,” says Ellen, and we want to ensure these breakthroughs are still able to flourish.

“So is there a danger of overregulation? I think there can be. And that’s the driver behind the UK government’s approach; creating a balance between innovation and considering safety risk.”

There’s a part we all have to play in shaping how we utilise AI. Particularly as a tech community, we have a responsibility to educate. As Ellen says, “I think if people had a better understanding of the positives, they will be less afraid of it and therefore embrace it. One of the things that people worry about, for example, is that AI will take their jobs.”

As a tech sector, we know that AI can enhance and streamline processes. A lot of work is required to keep that narrative on track, whilst also not dismissing valid concerns.

Regardless of any personal opinions on AI, one thing is certain: AI is here to stay. We have the opportunity to embrace the change and be a part of the conversation steering usage in the most positive direction for all. Ultimately, the more education around how beneficial these tools are, and the adjacent regulations being put into place to keep us all safe, the better the outcome for everyone.

Thanks so much to Ellen Keenan-O’Malley for taking the time to chat with us about regulating AI! If you’ve enjoyed this piece, make sure to check out more content on our AI hub here.

Shona Wright

Shona covers all things editorial at TechSPARK. She publishes news articles, interviews and features about our fantastic tech and digital ecosystem, working with startups and scaleups to spread the word about the cool things they're up to. She also oversees TechSPARK's social media, sharing the latest updates on everything from investment news to green tech meetups and inspirational stories.