Tech | August 18, 2023

EU gets closer to AI laws

On 14 June, the European Parliament took a major step towards passing the Artificial Intelligence Act (AI Act) by releasing its negotiating mandate setting out what it wants to see included in the upcoming legislation. The AI Act is one of the world’s first comprehensive laws designed to regulate the use of artificial intelligence. Like the GDPR (Europe’s massively influential data protection and privacy legislation) before it, the AI Act is likely to provide a template for how other jurisdictions regulate AI as the world tries to get a handle on this transformative technology.

The AI Act (much like its Canadian counterpart, the Artificial Intelligence and Data Act) is built around the aim of minimising the harm caused by AI, and as such takes a “risk-based” approach while casting a relatively wide net regarding the definition of AI.

Casting a wide net

The definition of “Artificial Intelligence” used in the AI Act is extraordinarily broad. The Act captures any:

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

The “techniques” referred to are:

a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
c) Statistical approaches, Bayesian estimation, search and optimization methods.

By taking this approach, the AI Act captures not just the bleeding-edge technology du jour (such as “Transformer”-based Large Language Models like ChatGPT, or image-generation platforms like Midjourney), but also the far more mundane “GOFAI” technologies (“Good Old Fashioned AI”, or Symbolic AI, like the voice recognition your phone’s digital assistant performs, or the algorithms that build your playlists and recommend which show to stream next).

Risk-based approach

Rather than creating rules based on the type of technology, the Act is built around specific use cases, separated into categories based on the level of risk they present to European citizens.

Unacceptable Risk

The first category, “Unacceptable Risk”, places an outright ban on certain applications. The draft Act prohibits the use of AI for:

• Subliminal manipulation;

• Exploitation of vulnerable people;

• Any sort of “social scoring” system (such as China’s Social Credit program, which evaluates and scores individuals and businesses based on their behaviour; a low score can limit access to services and opportunities).

High Risk

The “High Risk” category covers a large number of uses, but the gist is that any use of AI in situations that can impact a person’s privacy, freedom, safety or prospects for advancement or betterment will be considered “high risk”. This category includes the use of AI for:

• Biometric identification of people at a distance;

• Components of essential infrastructure related to safety (such as control mechanisms for power stations, dams, or traffic management);

• Education admissions assessments;

• Recruiting or hiring;

• Migration and border controls;

• Judicial or law enforcement purposes.

The suppliers of AI systems used for the above purposes (“High Risk AI Systems”, or “HRAIS”) will be subject to multiple obligations designed to make the risks as minimal and transparent as possible. Suppliers will need to be able to show that they have robust risk management, data protection, cyber security, record keeping and governance in place.

While the above requirements should be fairly straightforward, the AI Act also requires that HRAIS have a relatively high degree of human oversight. Many technologists argue this undermines a key advantage of using AI: the ability to make decisions faster than humans, drawing on far greater amounts of data and making connections that we might miss.

Complicating this further is the “Black Box” problem: a phenomenon common to Machine Learning-based AI systems whereby even the developers of these systems have little visibility into what is going on under the hood. They know what data formed the training set, and they know what the system is being trained to do, but how any given machine learning system reaches a conclusion, makes a decision, or decides that “this is what a tree looks like” is fairly opaque. This means the degree to which human oversight is practical (or even possible) will depend on the system, and on how willing the developers, deployers and whatever bodies are tasked with administering the AI Act are to compromise the efficiency of these systems.

Limited Risk

The “Limited Risk” designation currently applies only to “human impersonation”, such as AI chatbots or “deep fakes”. These sorts of systems will need to carry notifications that the user is not observing or interacting with a real human, and that the user may themselves be subject to emotional or biometric monitoring and categorisation.

Minimal Risk

The final category, “Minimal Risk”, will not be subject to any regulations, but suppliers of these systems will be encouraged to sign up to voluntary codes of conduct. This will apply to systems such as AI-driven non-player characters in video games, or advanced email spam filters.
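
To make the four-tier structure concrete, here is a minimal, purely illustrative sketch of how the taxonomy might be modelled in code. The tier names and example use cases are paraphrased from the draft Act as described above; the enum, the use-case mapping and the default tier are our own simplification, not anything defined in the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    """The draft AI Act's four risk tiers (simplified)."""
    UNACCEPTABLE = "unacceptable"  # outright ban
    HIGH = "high"                  # heavy compliance obligations
    LIMITED = "limited"            # transparency / notification duties
    MINIMAL = "minimal"            # voluntary codes of conduct only

# Illustrative mapping of example use cases to tiers, paraphrasing the
# categories discussed above -- a simplification, not the legal text.
EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "recruitment screening tool": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a use case (defaulting to minimal)."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.value} risk")
```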

Steep penalties and extra-territoriality

Although the numbers are likely to be adjusted before the AI Act becomes law, the draft position contains some very steep penalties for suppliers that do not meet the requirements.

For example, a breach of the prohibition on uses with an “Unacceptable Risk” could see an individual fined up to €40,000,000 (not a typo), or, if the offender is a company, fined up to the higher of €40,000,000 or 7% of its global turnover for the prior year. The scale of these sanctions is indicative of how seriously the European Parliament is taking the development and use of AI. Even breaches of the lesser restrictions will bring steep penalties (how these caps combine is sketched after the list below):

• Breach of data governance and transparency requirements: the higher of €20,000,000 or 4% of turnover;

• Non-compliance with other requirements: the higher of €10,000,000 or 2% of turnover;

• Supplying incorrect, incomplete or misleading information to the relevant regulator: the higher of €10,000,000 or 1% of turnover.
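
As arithmetic, each of these caps works the same way: the applicable maximum fine is the higher of a fixed euro amount and a percentage of the prior year’s global turnover. Here is a minimal sketch of that calculation, using the figures from the draft position quoted above (which may well change in the final Act); the breach labels and the max_fine helper are illustrative only.

```python
# Fine caps in the draft position: (fixed cap in EUR, share of prior-year
# global turnover). The applicable cap is the HIGHER of the two.
FINE_CAPS = {
    "unacceptable_risk_breach": (40_000_000, 0.07),
    "data_governance_breach":   (20_000_000, 0.04),
    "other_non_compliance":     (10_000_000, 0.02),
    "misleading_information":   (10_000_000, 0.01),
}

def max_fine(breach: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a breach under the draft position."""
    fixed_cap, turnover_share = FINE_CAPS[breach]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a company with EUR 2bn global turnover breaching the outright
# ban faces up to max(40m, 7% of 2bn) = EUR 140m.
print(f"EUR {max_fine('unacceptable_risk_breach', 2_000_000_000):,.0f}")
```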

And it’s not just EU-based companies that need to take heed. Any “deployer” of AI (to use the Act’s terminology) that uses or makes its technology available in the EU, or whose technology will directly impact European citizens, will be subject to the AI Act.

What next?

As the AI Act is essentially a draft (although a fairly advanced one), we cannot be certain of its final form. The European Parliament, Commission and Council now have to negotiate the final terms in a process known as the trilogue negotiations. Although the draft was overwhelmingly supported by the European Parliament, it may yet change, as a number of stakeholders are not so enthusiastic.

On 30 June, an open letter was sent to the European Commission (the EU’s executive branch) by representatives of over 150 businesses with skin in the AI game from across the EU. Their concern is that the Act will be overly restrictive, limiting the deployment and development of AI in ways that harm Europe’s global competitiveness in the AI market and in the industries AI is set to transform. While their argument may have merit, the European Parliament seems keenly aware of the risks that AI poses, and it will be interesting to see whether there is any appetite for watering down the restrictions in the draft Act.

The expectation is that the final form of the Act will be released later this year, but there is no concrete date for when it will come into force. Even after it does take effect, there is likely to be a two-year grace period to allow “deployers” to comply before the penalties kick in.

What does this mean for us?

Despite the uncertain timeline, we can be fairly certain that AI legislation looking an awful lot like the draft AI Act will come into force in the EU in the near future and, just as the GDPR did for data protection, will shape the global approach to AI regulation.

NZ-based businesses with any sort of AI or AI-adjacent technology need to become familiar with the AI Act and work out if and how they are going to make their offerings compliant if they want to offer them in this key market, or even further afield, because when it comes to protecting citizens from the risks of technology, where the EU goes, the rest of the world follows.

If you are operating in the AI space and want to make sure you’re best prepared for the imminent AI regulations, please get in touch.
