Artificial Intelligence (AI) is arguably the most controversial yet significant technological development of the 21st century. Within a remarkably short span, AI has reshaped industries, economies, and societies worldwide, to the point that it is now integrated into almost every task one could think of. Defining AI precisely is a difficult task, given its current form and its capacity to perform tasks across a variety of sectors, including, but not limited to, virtual assistants, generative tools, healthcare, and financial services. The United Kingdom’s (UK) White Paper on AI describes “AI,” “AI systems,” and “AI technologies” as “products and services that are ‘adaptable’ and ‘autonomous’” but stops short of providing an exhaustive definition. US legislation, by contrast, offers a variety of definitions; the National Defense Authorization Act of 2019, for instance, defines AI as “a set of techniques, including machine learning, that is designed to approximate a cognitive task”. Put simply, AI tools use machine learning to generate outputs resembling the data they were trained on, in response to prompts from the user. The data used to train such models may include audio, video, imagery, and textual information.
The rapid rise of AI has prompted calls for legislation to govern its development and use. Given the evolving nature and extensive potential of AI tools, views differ worldwide on whether the technology should be governed on a statutory footing. The European Union (EU) has pioneered in this regard by enacting the ‘first-ever legal framework on AI’, the AI Act. A European Commission report on the AI Act provides an overview of what the Act aims to achieve: the legislation forms part of a much wider EU policy of fostering the development of ‘trustworthy AI’ in Europe and beyond, whilst ensuring that fundamental rights, safety, and ethical principles are respected. It particularly addresses the insufficiency of existing legislation to cope with the problems associated with AI, aiming to tackle the risks created by AI applications and to restrict unacceptable practices. It further lays down the requirement of a ‘conformity assessment’, specifically for ‘high-risk’ systems, before such a system is put on the market. The regulatory framework in the AI Act classifies systems into four levels of risk: unacceptable, high, limited, and minimal. AI systems posing an unacceptable risk cannot be put on the market at all, whereas high-risk systems are subject to strict obligations before being marketed. High-risk systems include, for reference, all remote biometric identification systems, alongside systems used in areas such as educational or vocational training. Limited-risk systems include chatbots and AI-generated content (including audio and video ‘deepfakes’), whose nature must be disclosed so that users or consumers can make an informed decision to withdraw should they wish to. A recent example of this in practice is YouTube’s newly introduced policy of labeling synthetic or AI-generated content, in compliance with the EU’s AI Act.
This move from the EU comes with certain implications, particularly for development: such legislation may restrict the building of AI systems under the threat of legal consequences. This concern was raised by Lord Hannan of Kingsclere in the House of Lords debate on the draft AI Bill for the UK. He noted that his newly bought iPhone had the Apple Intelligence function, which the EU variants released the same day did not, because Apple was legally bound to comply with the EU’s AI Act; on his account, chatbots fall under the high-risk category. This position was eventually affirmed by the government in the House of Lords report on the Artificial Intelligence (Regulation) Bill [HL]. Although the report recognizes that primary legislation will be necessary to regulate AI technology at some point in the future, it argues that doing so at this stage would pose a risk to the evolution of such technologies and be counterproductive. Likewise, Ministers emphasize that sectoral regulators, with support from central functions currently in development within the Department for Science, Innovation and Technology, are ‘best placed’ to regulate AI for now. The UK’s National AI Strategy backs this stance, with its ’10 Year Vision’ of making the UK an AI superpower.
The UK’s approach to AI, as laid out in the White Paper, differs from the EU’s: it aims not to regulate AI technologies directly but rather to provide a new framework bringing “clarity and coherence” to the AI regulatory landscape. The approach rests on five key principles: (i) safety, security, and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress. For now, however, the report suggests that Parliament is reluctant to enact any regulatory or primary legislation at all, perhaps as a matter of national policy. The report further compares the UK’s draft Bill with US enactments on AI, noting that 17 states have enacted 29 bills since 2019, primarily addressing two regulatory concerns, data privacy and accountability, similar to the UK’s approach. Broadly, these measures require developers to share critical information with the US government and to ensure the safety and well-being of Americans, including protecting them from fraud and making them aware of whether they are interacting with AI systems or AI-generated content. One such example is the Algorithmic Accountability Act of 2023, a proposed federal bill that would require developers to document and disclose their AI systems’ functioning and impact.
Although the report makes no mention of the EU’s AI Act, a briefing by the European Parliament addresses the challenges and consequences the Act may have for the UK, EU, and US on the global stage. It particularly raises the possibility of the UK AI Bill conflicting with crucial legislation such as the Equality Act 2010, the Data Protection Act 2018, and the Human Rights Act 1998. Likewise, it highlights that, as raised by the House of Commons European Scrutiny Committee, ‘certain elements of the EU AI Act may also apply directly in Northern Ireland under the Windsor Framework but their impact on the UK’s regulatory approach to AI is unclear’. It also notes that the Parliament’s November 2023 resolution on the implementation of the EU-UK Trade and Cooperation Agreement (TCA) envisages UK-EU cooperation on emerging technologies, including AI. By the same token, it highlights UK AI minister Jonathan Berry’s offer to work with the EU given their similar approaches to copyright in AI; he was of the view that creators are owed remuneration should their work be used in training AI models. The entertainment industry carries huge economic potential and may fairly be dubbed an evergreen, booming sector, a point duly recognized by Berry. The Canadian singer Grimes has pioneered in this respect, making a unilateral offer: “I’ll split 50% royalties on any successful AI-generated song that uses my voice”. The briefing finally calls for potential cooperation between the UK and EU on AI policies, similar to that between France and the UK. This could be achieved through initiatives such as the EU’s AI Pact, which invites stakeholders from Europe and beyond to collaborate on policies and the development of legal frameworks concerning AI ‘ahead of time’.
From an overall perspective, it may fairly be concluded that the UK’s approach strikes a balance between the development and regulation of AI technologies, resting on the view that premature legislation could hinder AI’s potential as a significant game-changer. However, forgoing collaborative opportunities such as the EU’s AI Pact, or UK AI minister Berry’s offer to the EU, may leave the UK struggling to keep pace with prevailing international standards on AI. Such legal collaboration holds huge potential to bring uniformity at the global level, or at least consistency in how legal frameworks governing the use of AI develop, and to avoid conflicting situations for both developers and lawmakers, such as the one seen with Apple Intelligence.