With third-party cookie deprecation quickly approaching, many platforms are introducing AI capabilities that promise greater utility with less data. Organizations can rely on features for user profile creation, identification of similar audiences for targeting, bid optimization, and automated creative optimization. On the surface, all of this seems great: an answer to the challenges posed by the changing privacy landscape. Take a closer look, however, and the inherent compliance risk of using such technologies becomes apparent. It is important for advertisers to consider these risks and put proper protections in place before jumping headfirst into AI.
Artificial intelligence (AI) is defined as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. The application of AI in marketing and advertising technologies is meant to help automate profiling, insight generation, and optimization. Inherent to these processes is the use of personal data collected from users for profiling and, ultimately, automated decision making. The UK Information Commissioner’s Office defines profiling as an activity that “analyses aspects of an individual’s personality, behavior, interests and habits to make predictions or decisions about them”. Meanwhile, automated decision making is defined as “the process of making a decision by automated means without any human involvement”. These definitions provide an almost perfect description of the aims of AI features within marketing and advertising technologies.
Regulators across the globe are busy at work crafting new AI-specific regulations and guidance, but because AI systems are engaged in profiling and automated decision making, the current principles of privacy law must already be considered. Let’s examine the requirements in four areas common to global privacy laws: disclosure, consent and opt-out, access and rectification, and impact assessments.
Disclosure
A universal requirement across global privacy regulations is disclosure to consumers of the personal data being collected and the purposes for which it is collected. GDPR Article 13 states “… the controller shall, at the time when personal data are obtained, provide the data subject with the following information …” Meanwhile, the California CCPA stipulates that consumers have “the right to be notified, before or at the point businesses collect personal information, of the types of personal information they are collecting and what they may do with that information”. What matters in both requirements is the timing: before or at the point of collection. A new platform or feature may be wonderful, but data already on hand can be used only if the purposes the new AI capability serves were disclosed when that data was collected.
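As an illustration, this timing rule can be enforced in code by tying every use of data to the purposes disclosed when that data was collected. The sketch below is hypothetical; the `ConsentRecord` structure and the purpose labels are assumptions for illustration, not any platform’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentRecord:
    """Purposes disclosed to the user at the moment their data was collected."""
    collected_at: datetime
    disclosed_purposes: set = field(default_factory=set)

def may_use_for_purpose(record: ConsentRecord, purpose: str) -> bool:
    """Data on hand may only serve purposes disclosed at collection time.

    A purpose added to the privacy notice today does not retroactively
    cover data gathered under an earlier, narrower disclosure.
    """
    return purpose in record.disclosed_purposes

# Example: data collected before an AI lookalike-modeling purpose was disclosed.
record = ConsentRecord(
    collected_at=datetime(2023, 5, 1),
    disclosed_purposes={"analytics", "email_marketing"},
)
assert not may_use_for_purpose(record, "ai_lookalike_modeling")
```

The practical consequence is that rolling out a new AI feature may require a fresh disclosure and a new collection cycle rather than reuse of the existing data store.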
Consent and Opt-Out
For any use of AI that results in automated decision making, there is a high bar for consent. According to guidance from the European Commission, “individuals should not be subject to a decision that is based solely on automated processing (such as algorithms) and that is legally binding or which significantly affects them.” In such cases, the individual must give explicit consent to a decision based on the algorithm. As more consumer data is used for such decision making, such as driving data from connected vehicles being used to calculate insurance premiums, consumer protection enforcement is scrutinizing consent processes for these use cases ever more closely.
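In engineering terms, this guidance amounts to a gate in front of any solely automated decision. The following is a minimal sketch of that logic; the `ConsentStatus` enum and function names are illustrative assumptions, and other GDPR Article 22 exceptions (such as contractual necessity) are deliberately omitted.

```python
from enum import Enum, auto

class ConsentStatus(Enum):
    NONE = auto()
    IMPLIED = auto()
    EXPLICIT = auto()  # an affirmative, specific opt-in to automated decisions

def may_make_automated_decision(consent: ConsentStatus,
                                significant_effect: bool,
                                human_in_the_loop: bool) -> bool:
    """Gate a solely automated decision with legal or significant effects.

    Mirrors the rule described above: such a decision requires either
    meaningful human involvement or the user's explicit consent.
    """
    if not significant_effect or human_in_the_loop:
        return True
    return consent is ConsentStatus.EXPLICIT
```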
Beyond consent requirements for automated decision making, profiling accomplished with the help of AI also carries compliance obligations. Virginia’s Consumer Data Protection Act defines profiling as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Any profiling in furtherance of decisions that produce legal or similarly significant effects for the consumer requires that the consumer be given the ability to opt out.
While many will argue that profiling for advertising does not have a significant effect on the consumer, consider how ‘significant effects’ are characterized in many U.S. state privacy laws: activities that present a “reasonably foreseeable risk of (i) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (ii) financial, physical or reputational injury to consumers, (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person, or (iv) other substantial injury to consumers.” Now consider the profiling for advertising that was possible as far back as 2012, when the New York Times reported that Target had segmented shoppers likely to be pregnant based on their purchasing behavior. The resulting advertising tipped off a father that his 16-year-old daughter was pregnant. The profiling capabilities AI offers more than a decade later are even more likely to be challenged as an intrusion offensive to a reasonable person.
California is likely to codify this requirement through CCPA rulemaking. The CPPA’s current draft regulations for automated decision-making technologies state that opt-out rights include the right to opt out of profiling for behavioral advertising. Any processing of personal data for advertising purposes should therefore give consumers the ability to opt out of such processing.
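Operationally, honoring these opt-outs means filtering records before they ever reach an AI profiling pipeline. A minimal sketch, assuming a hypothetical `UserProfile` record with an opt-out flag (set, for example, from a “Do Not Sell/Share” request or a Global Privacy Control signal):

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    opted_out_of_profiling: bool  # e.g., set via a "Do Not Sell/Share" or GPC signal

def eligible_for_ad_profiling(profiles: list[UserProfile]) -> list[UserProfile]:
    """Drop opted-out users before records reach any AI profiling pipeline.

    Filtering at the boundary keeps downstream models and audience
    segments from ever ingesting data the consumer asked to exclude.
    """
    return [p for p in profiles if not p.opted_out_of_profiling]

users = [
    UserProfile("u1", opted_out_of_profiling=False),
    UserProfile("u2", opted_out_of_profiling=True),
]
print([p.user_id for p in eligible_for_ad_profiling(users)])  # ['u1']
```

Placing the filter at the ingestion boundary, rather than at ad-serving time, avoids the harder problem of unwinding an opted-out user’s influence on an already-trained model.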
Access and Rectification
Another universal principle of global privacy regulations is the consumer’s right to access the personal data collected from them and to request rectification of any inaccuracies. The nature of AI systems, to say nothing of the datasets necessary to train generative AI, makes it inherently difficult to honor an access request. This is one of the primary accusations in recent GDPR complaints against OpenAI over its ChatGPT product. It was also central to the Italian DPA’s decision to temporarily ban ChatGPT in 2023 and to open a further investigation in 2024. All AI technologies should be reviewed, with processes defined to satisfy user access and rectification requests.
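One way to make access and rectification tractable is to keep inferred attributes in a store that can be queried and corrected per user. The sketch below assumes a hypothetical in-memory `profile_store`; a real system would also need to account for inferences embedded in model features or held by downstream vendors.

```python
from dataclasses import dataclass

@dataclass
class InferredAttribute:
    name: str    # e.g., "predicted_interest"
    value: str
    source: str  # which model or feature produced the inference

# Hypothetical profile store keyed by user ID.
profile_store: dict[str, list[InferredAttribute]] = {
    "u1": [InferredAttribute("predicted_interest", "outdoor gear", "lookalike_model_v2")],
}

def handle_access_request(user_id: str) -> list[InferredAttribute]:
    """Return every inferred attribute held about the user."""
    return profile_store.get(user_id, [])

def handle_rectification(user_id: str, name: str, corrected: str) -> None:
    """Overwrite an inaccurate inference at the consumer's request."""
    for attr in profile_store.get(user_id, []):
        if attr.name == name:
            attr.value = corrected
```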
Data Privacy Impact Assessments
Privacy laws in the United States, including Virginia’s CDPA, require a data privacy impact assessment when processing personal data for targeted advertising or for profiling. Similar requirements apply under European law to processing for automated decision making and profiling. Impact assessments should be conducted for every AI technology an organization adopts to understand what data is necessary, what the outcomes for consumers will be, and what protections can be put in place to mitigate risk.
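To make such assessments repeatable, the review itself can be captured as a structured record. This is a sketch of one possible shape for that record, not a prescribed DPIA format; the fields and the crude approval gate are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Minimal record of a privacy impact assessment for a new AI feature."""
    feature: str
    data_elements: list[str]       # what personal data is necessary
    processing_purposes: list[str]
    consumer_outcomes: list[str]   # foreseeable effects on users
    identified_risks: list[str]
    mitigations: list[str] = field(default_factory=list)

    def approved(self) -> bool:
        # A deliberately crude gate: every identified risk needs at
        # least one documented mitigation before the feature ships.
        return len(self.mitigations) >= len(self.identified_risks)
```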
The application of AI in marketing and advertising technologies is a welcome development for advertisers. With less data available for targeting and analysis, new techniques are needed to fill the gap. But organizations must be cognizant of the inherent risks of AI, specifically the likely outcomes of automated decision making and profiling. It is imperative to evaluate these technologies through the lens of current privacy law: understand exactly what data will be used in processing; ensure it was lawfully obtained; ensure user privacy rights can be exercised; and conduct impact assessments to document the review of new technologies.