Much has been made in the first half of 2023 about advances in Artificial Intelligence and their potential threats to everything from the normal ways of working to humanity itself. Along with discussion of the promise of AI has come discussion of how it should be regulated. In the United States, much of the focus (if you are to believe the press and the leaders of AI companies such as OpenAI) has been on the need to regulate against the doom-and-gloom scenario, the so-called emergence of “superintelligence”.
Often lost in the shuffle is how the use cases of the here and now will be regulated. In some parts of the globe, such as Europe, purpose-specific AI legislation is in the works. The United States is taking a different approach: relying on existing laws and regulations and applying their provisions to new technologies. Federal regulators in the United States contend that the principles of these laws are sufficient, at least at the foundational level, and will apply and enforce them as such.
It is important to first define what is being discussed when we say “Artificial Intelligence” or “AI”. The United States defined the term in Section 5002(3) of the National Artificial Intelligence Initiative Act of 2020 as:
- “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:
  - Perceive real and virtual environments;
  - Abstract such perceptions into models through analysis in an automated manner; and
  - Use model inference to formulate options for information or action”
Federal regulators provide a simpler definition in their joint statement on enforcement: “We use the term ‘automated systems’ broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” Taken together, these definitions cover all of the technologies grouped under “AI” in today’s popular lexicon, as well as systems used for machine learning and automated or algorithmic decision making.
Central to the approach of U.S. regulators, as outlined in the Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, is the potential for AI systems to “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Enforcement focuses on preventing those outcomes in areas such as adverse actions in credit decisions (Consumer Financial Protection Bureau), discrimination in tenant screening for housing (Civil Rights Division of the Justice Department), and discrimination in employment decisions (Equal Employment Opportunity Commission). Published guidance from each agency is linked in the preceding sentence.
The regulatory agency with perhaps the broadest enforcement scope over the areas in which marketers and advertisers work is the Federal Trade Commission (FTC). Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce”. In a recent interview with Gizmodo, FTC Consumer Protection Chief Samuel Levine stated, “As an agency we’ve been pretty clear about our confidence that the FTC Act applies to many of the practices we’re seeing in the AI space … We made it clear years ago that algorithmic decision making that can result in harm to protected classes can be unfair under the FTC Act.”
Beyond how AI systems and algorithmic decision making could negatively impact protected classes, other unlawful uses of data with AI are also subject to enforcement. In the same interview, Levine pointed to a recent order against Ring, the Amazon-owned company, which required it to delete models and data products trained on data the FTC alleged Ring collected illegally. Similar enforcement could apply in the broader AI context, for example, where sensitive data collected without consent is used within an AI model.
Luckily for practitioners looking for ways to use AI within their organizations, in April 2020 the FTC released guidance on core principles to consider when using artificial intelligence and algorithms. The guidance headlines, with their key points, are as follows:
- Be transparent.
  - Don’t deceive consumers about how you use automated tools.
  - Be transparent when collecting sensitive data.
  - If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice.
- Explain your decision to the consumer.
  - If you deny consumers something of value based on algorithmic decision-making, explain why.
  - If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance (a sketch of one such disclosure follows this list).
  - If you might change the terms of a deal based on automated tools, make sure to tell consumers.
- Ensure that your decisions are fair.
  - Don’t discriminate based on protected classes.
  - Focus on inputs, but also on outcomes.
  - Give consumers access and an opportunity to correct information used to make decisions about them.
- Ensure that your data and models are robust and empirically sound.
  - If you provide data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions, you may be a consumer reporting agency that must comply with the FCRA, including ensuring that the data is accurate and up to date.
  - If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate, even if you are not a consumer reporting agency.
  - Make sure that your AI models are validated and revalidated to ensure that they work as intended and do not illegally discriminate (a sketch of one such outcome check also follows this list).
- Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.
  - Ask questions before you use the algorithm.
  - Protect your algorithm from unauthorized use.
  - Consider your accountability mechanism.
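To make the risk-score item above concrete, here is a minimal sketch of what a “key factors, rank ordered for importance” disclosure could look like in practice. It assumes a simple linear scoring model; the feature names and weights are hypothetical illustrations of the disclosure pattern, not FTC-endorsed code.

```python
# Minimal sketch: rank-ordered "key factor" disclosure for a consumer risk score.
# The model, feature names, and weights below are hypothetical illustrations.

# Hypothetical linear scoring model: score = sum(weight * value) over features.
WEIGHTS = {
    "payment_history_missed": 35.0,   # points added per missed payment
    "credit_utilization_pct": 0.5,    # points added per percent utilization
    "account_age_years": -2.0,        # points removed per year of history
}

def score_with_key_factors(consumer: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return the risk score and each factor's contribution,
    rank ordered by absolute importance (largest impact first)."""
    contributions = {
        feature: WEIGHTS[feature] * consumer[feature] for feature in WEIGHTS
    }
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Example disclosure for one (hypothetical) consumer.
consumer = {"payment_history_missed": 2, "credit_utilization_pct": 80, "account_age_years": 6}
score, key_factors = score_with_key_factors(consumer)
print(f"Risk score: {score:.0f}")
print("Key factors, rank ordered for importance:")
for feature, contribution in key_factors:
    print(f"  {feature}: {contribution:+.1f} points")
```

In a real system the contributions would come from the model itself (for example, regression coefficients or a feature-attribution method), but the disclosure pattern of a score accompanied by ranked factors stays the same.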
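Similarly, the “validated and revalidated” item implies ongoing, measurable checks on outcomes. Below is a minimal sketch of one common outcome-focused check, the “four-fifths” (80%) rule of thumb from employment-discrimination analysis, applied to an automated system’s approval rates; the counts are made up and the threshold is a screening heuristic, not a legal compliance test.

```python
# Minimal sketch: recurring disparate-impact check on a model's outcomes.
# Approval counts below are hypothetical; the 4/5 threshold follows the
# EEOC "four-fifths" rule of thumb and is not by itself a legal standard.

def selection_rate(approved: int, total: int) -> float:
    """Share of applicants in a group that the automated system approved."""
    return approved / total

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate is under 4/5 of the highest group's rate."""
    highest = max(rates.values())
    return {group: (rate / highest) >= 0.8 for group, rate in rates.items()}

# Hypothetical outcomes from one revalidation run of an approval model.
rates = {
    "group_a": selection_rate(approved=480, total=800),  # 0.60
    "group_b": selection_rate(approved=270, total=600),  # 0.45
}
for group, passes in four_fifths_check(rates).items():
    status = "ok" if passes else "flag for review"
    print(f"{group}: selection rate {rates[group]:.2f} -> {status}")
```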
In addition to enforcing current laws as applied to AI and automated decision-making, there is still a push for new and updated legislation to specifically address the challenges posed by advances in AI technology. As a policy guide enumerating the core pillars of focus, the White House Office of Science and Technology Policy, in its October 2022 Blueprint for an AI Bill of Rights, identified five principles that should guide the design, use, and deployment of automated systems to protect the American public:
- Safe and Effective Systems – You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections – You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data Privacy – You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- Notice and Explanation – You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback – You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Each use case and application of AI will be different. In the current, and likely future, state of affairs in the United States, there will not be a single piece of legislation laying out one set of rules and requirements to follow. Each business sector and use case will be evaluated under the laws and regulations that apply to its usage of data. To ensure all specific requirements are addressed, legal teams should be involved early in the strategic planning process for any data initiative, especially one that involves the usage of AI.
For marketers and advertisers, at least at the first-principles level, any activity involving the usage of AI or algorithmic decision making should be considered through the lens of the current FTC guidance and the five AI Bill of Rights principles. Beginning with these as the guide will cut through some of the complexity of a U.S. regulatory environment that lacks a single, purpose-specific AI law.