AI Governance In The United States: Principles for Responsible Use

Estimated Reading Time: 8 minutes
June 23, 2023

Much has been made in the first half of 2023 about advances in artificial intelligence and the potential threats they pose to everything from normal ways of working to humanity itself. Along with discussion of the promise of AI has come discussion of how it should be regulated. In the United States, much of the focus (if you believe the press and the leaders of AI companies such as OpenAI) has been on the need to regulate the doom-and-gloom scenario: the so-called emergence of “superintelligence”.

Often lost in the shuffle is how the use cases of the here and now will be regulated. In some parts of the world, such as Europe, purpose-specific AI legislation is in the works. The United States is taking a slightly different approach: relying on existing laws and regulations and applying their provisions to new technologies. Federal regulators contend that the principles of these laws are sufficient, at least at a foundational level, and will be applied and enforced as such.

It is important first to define what is being discussed when we say “Artificial Intelligence” or “AI”. The United States defined the term in the National Artificial Intelligence Initiative Act of 2020, Section 5002(3), as:

  • “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:
    • Perceive real and virtual environments;
    • Abstract such perceptions into models through analysis in an automated manner; and
    • Use model inference to formulate options for information or action”

Federal regulators provide a simpler definition in their joint statement on enforcement: “We use the term ‘automated systems’ broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” Taken together, these definitions cover all of the technologies in today’s popular lexicon as “AI”, as well as systems used for machine learning and automated/algorithmic decision making.

Central to the approach of U.S. regulators, as outlined in the Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, is the potential for AI systems to “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Enforcement is focused on preventing these harms in areas such as adverse actions in credit decisions (Consumer Financial Protection Bureau), discrimination in tenant screening for housing (Civil Rights Division of the Justice Department), and discrimination in employment decisions (Equal Employment Opportunity Commission). Each agency has published guidance on these applications of its authority.

The regulatory agency with perhaps the broadest enforcement scope over the areas in which marketers and advertisers work is the Federal Trade Commission (FTC). Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce”. In a recent interview with Gizmodo, FTC Consumer Protection Chief Samuel Levine stated, “As an agency we’ve been pretty clear about our confidence that the FTC Act applies to many of the practices we’re seeing in the AI space … We made it clear years ago that algorithmic decision making that can result in harm to protected classes can be unfair under the FTC Act.”

Beyond harms to protected classes from AI systems and algorithmic decision making, regulators can also act against other unlawful uses of data with AI. In the same interview, Levine pointed to a recent order against the Amazon subsidiary Ring, which required the company to delete models and data products trained on data the FTC alleged Ring had collected illegally. Similar enforcement could apply in the broader AI context, for example, if sensitive data collected without consent is used to train an AI model.

Fortunately for practitioners looking for ways to use AI within their organizations, in April 2020 the FTC released guidance on core principles to consider when using artificial intelligence and algorithms. The guidance headlines are as follows:

  1. Be transparent.
    1. Don’t deceive consumers about how you use automated tools.
    2. Be transparent when collecting sensitive data.
    3. If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice.
  2. Explain your decision to the consumer.
    1. If you deny consumers something of value based on algorithmic decision-making, explain why.
    2. If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance.
    3. If you might change the terms of a deal based on automated tools, make sure to tell consumers.
  3. Ensure that your decisions are fair.
    1. Don’t discriminate based on protected classes.
    2. Focus on inputs, but also on outcomes.
    3. Give consumers access and an opportunity to correct information used to make decisions about them.
  4. Ensure that your data and models are robust and empirically sound.
    1. If you provide data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions, you may be a consumer reporting agency that must comply with the FCRA, including ensuring that the data is accurate and up to date.
    2. If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate, even if you are not a consumer reporting agency.
    3. Make sure that your AI models are validated and revalidated to ensure that they work as intended and do not illegally discriminate (a simple outcome-level screen is sketched after this list).
  5. Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.
    1. Ask questions before you use the algorithm.
    2. Protect your algorithm from unauthorized use.
    3. Consider your accountability mechanism.
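
To make principles like “focus on outcomes” and “validate and revalidate” concrete, below is a minimal Python sketch of an outcome-level disparate-impact screen. It is illustrative only: the record format, the group labels, and the use of the four-fifths rule as a flagging threshold are hypothetical assumptions, not requirements drawn from the FTC guidance, and a passing result is a starting point for deeper statistical and legal review, not a determination of lawfulness.

# Illustrative sketch: record format, group labels, and the four-fifths
# threshold are hypothetical assumptions, not FTC requirements.
from collections import defaultdict

def selection_rates(outcomes):
    # outcomes: iterable of (group_label, approved) pairs, where
    # approved is True when the automated decision was favorable.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_screen(rates):
    # Flag any group whose favorable-outcome rate falls below 80% of the
    # highest group's rate (a common screening heuristic for adverse
    # impact, not a legal determination).
    benchmark = max(rates.values())
    return {g: r / benchmark >= 0.8 for g, r in rates.items()}

# Hypothetical model decisions tagged with a demographic group label.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print(rates)                      # {'A': 0.666..., 'B': 0.333...}
print(four_fifths_screen(rates))  # {'A': True, 'B': False}

Run on a recurring schedule against production decisions, a screen like this operationalizes the “revalidate” step: any flagged group should trigger a closer look rather than an automatic conclusion either way.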

In addition to enforcing current laws as applied to AI and automated decision making, there is still a push for new and updated legislation that specifically addresses the challenges posed by advances in AI technology. As a policy guide enumerating the core pillars of focus, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in October 2022, identifying five principles that should guide the design, use, and deployment of automated systems to protect the American public:

  1. Safe and Effective Systems – You should be protected from unsafe or ineffective systems.
  2. Algorithmic Discrimination Protections – You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  3. Data Privacy – You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  4. Notice and Explanation – You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. Human Alternatives, Consideration, and Fallback – You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

Each use case and application of AI will be different. For the current and likely future state of affairs in the United States, there will not be a single piece of legislation laying out one set of rules and requirements to follow. Each business sector and use case will be evaluated against the laws and regulations that apply to that usage of data. To ensure all specific requirements are addressed, legal teams should be involved early in the strategic planning process for any data initiative, especially one that involves AI.

For marketers and advertisers, at least at the first-principles level, any activity involving AI or algorithmic decision making should be considered through the lens of the current FTC guidance and the five AI Bill of Rights principles. Beginning with these as the guide will cut through some of the complexity of a U.S. regulatory environment that, for now, lacks purpose-specific AI legislation.


Author

  • Lucas Long

    Lucas Long is co-author of the Amazon best-selling book, Crawl, Walk, Run: Becoming a Privacy-Centric Marketing Organization. He is also the Director of Privacy Strategy at InfoTrust, working with global organizations at the intersection of digital strategy, privacy regulations, and technical data collection architecture. Through these efforts, Lucas helps companies understand their limitations for data enablement due to privacy challenges and design optimal ways to accomplish core use cases in a compliant manner.

    When not discussing the intricacies of GDPR and cookie laws with clients, Lucas enjoys traveling and exploring new cultures, one bite at a time. Based in Barcelona, he is also a presenter, featured at industry events organized by Google, the Digital Analytics Association, the American Marketing Association, and the Journal of Applied Marketing Analytics.
