Artificial Intelligence (AI) has seemingly been in every article and conversation for the better part of 2023. Every email from an advertising or marketing technology provider seems to promote its use of AI in new features and functionality, of course all to our benefit! But with the focus on this new technology has come scrutiny of the ways in which data is being used and the potential harms to end users. Just last month we saw the ‘banning’ and then re-enabling of the popular ChatGPT product in Italy.
In response to the hype and concerns, governments across the globe are beginning to fast-track new laws and regulations meant to provide guardrails for the technology and protections for the people impacted by it. As usual, Europe is leading the charge in this rush for regulation with the proposed Artificial Intelligence Act (AI Act). While still a proposal and not yet finalized, the AI Act has begun to take shape, and key insights and themes are emerging that give us an idea of how AI regulation will be addressed in the European Union. These core ideas provide a great starting point for anyone considering building or using Artificial Intelligence in their marketing and advertising solutions.
What is the AI Act?
The AI Act is a proposed regulation in the EU meant to harmonize rules on Artificial Intelligence. The European Commission outlined four specific objectives for the framework in their initial proposal:
- Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- Ensure legal certainty to facilitate investment and innovation in AI;
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
How is Europe approaching regulation of AI?
The proposed AI Act is “meant to present a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.” (European Commission Explanatory Memorandum)
This is done through a proportionate, risk-based approach that defines specific obligations for “high-risk” AI systems and lesser, minimum transparency obligations for AI systems that humans interact with directly, such as chatbots, or where ‘deep fakes’ are used. By instituting strict requirements only at the highest levels of risk, while providing voluntary codes of conduct for non-high-risk AI systems, the regulatory approach aims to be proportionate: enabling innovation while still protecting users.
Who is in scope?
The regulation applies to:
- providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
- users of AI systems located within the Union;
- providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.
What systems are considered AI?
An Artificial Intelligence System is software that is developed with one or more specified techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
The techniques and approaches specified are as follows (a brief illustration appears after the list):
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- Statistical approaches, Bayesian estimation, search and optimization methods.
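To make the definition concrete, here is a minimal, hypothetical sketch of the first listed technique, supervised machine learning, in Python. The features, data, and ad-click use case are invented purely for illustration; the point is that even a model this simple, generating predictions for a human-defined objective, would fit the Act's definition of an AI system.

```python
# Minimal sketch: a supervised machine-learning model (the first technique
# listed above) that generates predictions for a human-defined objective.
# All data and features here are made up purely for illustration.
from sklearn.linear_model import LogisticRegression

# Human-defined objective: predict whether a user will click an ad,
# based on two illustrative features (age, pages viewed this session).
X_train = [[25, 3], [34, 1], [45, 8], [52, 2], [23, 9], [41, 4]]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = clicked, 0 = did not click

model = LogisticRegression()
model.fit(X_train, y_train)

# The output (a prediction) "influences the environment it interacts with"
# when, say, an ad server uses it to decide which creative to show.
print(model.predict([[30, 5]]))
```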
What is “high-risk” AI?
The majority of the regulation's specific requirements apply to what is defined as “high-risk” AI. This designation can be made in one of two ways.
- An AI system that is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the proposed regulation, and that is required to undergo a third-party conformity assessment before that product is placed on the market or put into service.
- An AI system referred to in Annex III of the proposed regulation.
Are any platforms used for marketing and advertising use cases likely to fall within the “high-risk” AI category?
Unlikely. The high-risk designation is primarily reserved for systems that, either directly or indirectly through the product they are a part of, have the potential to materially distort a person’s behavior in such a way that it is likely to cause that person or another person physical or psychological harm.
In addition to systems with the potential to cause significant harm, systems that may result in the detrimental or unfavorable treatment of people or groups are also often included in the designation.
As currently written, outcomes resulting from AI systems within marketing and advertising technologies are unlikely to meet the criteria for the high-risk category. However, this is not to say that they never will; the designation could change as ongoing impacts are further assessed.
As a marketer or advertiser, should I care about the AI Act?
Absolutely! While the AI systems developed and used for advertising use cases may not carry the strict regulatory requirements outlined in the AI Act, a Code of Conduct for all AI systems will be created as a result of this regulation, calling on non-high-risk systems to voluntarily adhere to the same standards as high-risk systems for the responsible development of AI.
In addition, some AI systems will also be in scope for the transparency requirements that are outlined in the proposed regulation.
What are the obligations for high-risk AI under the regulation?
High-risk AI systems carry a number of requirements for the systems and obligations for providers and users, all of which are detailed in Title III of the proposed AI Act. To summarize the requirements:
- Risk management systems to identify, evaluate, and mitigate risks must be established, implemented, documented, and maintained to run throughout the entire lifecycle of a high-risk AI system.
- Any high-risk AI systems that make use of techniques involving the training of models with data must be developed on the basis of training, validation, and testing data sets that meet outlined quality criteria.
- Technical documentation must be drawn up before the system is placed on the market and shall be kept up to date.
- High-risk AI systems must be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems are operating. Logging capabilities must conform to recognised standards or common specifications (a minimal logging sketch follows this list).
- High-risk AI systems must be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
- High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the system is in use.
- High-risk AI systems must be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
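The regulation sets out what must be recorded but not how. As a rough illustration of the logging requirement above, here is a minimal sketch using Python's standard logging module; the predict function, the event fields, and the JSON format are assumptions made for the example, not anything prescribed by the Act.

```python
# Minimal sketch of automatic event recording ('logs') for an AI system.
# The model, fields, and log format are assumptions for illustration only;
# a real system would follow the recognised standards or common
# specifications the regulation references.
import json
import logging
import time
import uuid

logging.basicConfig(filename="ai_events.log", level=logging.INFO,
                    format="%(message)s")

def predict(features):
    """Stand-in for a real model; returns a dummy score."""
    return 0.5

def predict_with_logging(features):
    """Run inference and automatically record the event."""
    output = predict(features)
    logging.info(json.dumps({
        "event_id": str(uuid.uuid4()),  # unique reference for traceability
        "timestamp": time.time(),       # when the output was produced
        "input": features,              # what the system was asked
        "output": output,               # what the system produced
        "model_version": "demo-0.1",    # which version was operating
    }))
    return output

predict_with_logging({"age": 30, "pages_viewed": 5})
```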
In addition to these requirements, there are also related obligations of providers and users of high-risk AI systems:
- Ensure systems are compliant with the listed requirements
- Have a quality management system in place
- Draw up the technical documentation
- When under the user’s control, to keep the logs automatically generated by their high-risk AI systems
- Ensure the high-risk AI systems undergo the relevant conformity assessment procedures prior to placing them on the market or putting them into service
- Comply with registration obligations
- Take necessary corrective actions if the system is not in conformity with listed requirements
- Inform national authorities of any non-compliance or of any corrective actions taken
- Affix the CE marking to the systems to indicate conformity with the regulation
- Demonstrate the conformity of the high-risk AI system with requirements set out upon request of a national authority
What are the transparency requirements?
In addition to the explicit requirements and obligations related to high-risk AI systems, there are also separate transparency obligations for certain AI systems.
- If an AI system is intended to interact with natural persons, the person must be informed that they are interacting with an AI system unless it is obvious from the circumstances and the context of use.
- Users of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed to it of the system’s operation.
- Users of an AI system that generates or manipulates image, audio, or video content that resembles existing persons, objects, places, or other entities or events and would falsely appear to a person to be authentic or truthful must disclose that the content has been artificially generated or manipulated.
These transparency requirements are likely to apply to systems used for marketing and advertising use cases. For example, a system like ChatGPT (or any chatbot-style AI, for that matter) is intended to interact with people, which triggers the first transparency requirement above. Systems being developed to generate and optimize ad creative for campaign optimization are also likely to meet the criteria for the third requirement, thus introducing those transparency obligations as well.
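The regulation defines the obligation to disclose, not the implementation. As a loose sketch of what the two disclosures mentioned above might look like in practice, consider the following; the message wording, function names, and tagging scheme are all invented for illustration.

```python
# Minimal sketch of the two disclosures discussed above. Message wording,
# field names, and the tagging scheme are assumptions for illustration;
# the regulation specifies the obligation, not the implementation.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_chat_session():
    """Inform the user they are interacting with an AI system (first
    transparency requirement) before any conversation begins."""
    print(AI_DISCLOSURE)

def label_generated_creative(image_bytes: bytes) -> dict:
    """Attach a disclosure to artificially generated ad creative
    (third transparency requirement)."""
    return {
        "content": image_bytes,
        "ai_generated": True,
        "disclosure": "This content has been artificially generated.",
    }

start_chat_session()
creative = label_generated_creative(b"\x89PNG...")  # placeholder bytes
print(creative["disclosure"])
```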
What is the next step in the process?
The original proposed text for the EU AI Act was published by the European Commission in 2021. In December 2022, the EU Council of Ministers endorsed a general approach to the legislation, and the European Parliament published its compromise amendments in May 2023.
All of these actions set the stage for negotiations across these EU bodies to finalize the Act, beginning at the end of June 2023. Once a compromise has been reached, the EU AI Act will be finalized, with some provisions taking effect as quickly as a few months after passage and others within two years. Much more to come as the final version of the regulation takes shape over the second half of 2023.
With the AI Act, Europe looks to set the global standard for regulation of this new technology, much as it did for privacy rights with the General Data Protection Regulation in 2018. By taking a risk-based approach to AI system requirements, it aims to strike a balance between protecting people and allowing innovation. Much more to come in the area of AI regulation, but the structure and requirements are beginning to take shape!