
The Coming Regulation of AI in the US



Published: Oct 27, 2022

Founder and chairman of Quinn Emanuel Urquhart & Sullivan LLP



Data is everywhere; wherever there is data, there is a potential application for AI.  Therefore, AI is, or will be, everywhere.  It is in phones, cars, appliances, medical devices, and even courtrooms.  It touches every industry, often where its uses are not immediately apparent.  It assists companies with hiring decisions and doctors with medical diagnoses.  It forms the backbone of the most popular social media platforms, serving up curated content to users.  From the profound to the mundane, AI has become pervasive in modern society. And its reach is always expanding to new sectors and applications.

While AI holds great promise, it also comes with risk.  When used in decision-making, AI can introduce bias: an employee screening algorithm might tend to filter out candidates above a certain age or to favor male applicants, or an insurance algorithm might tend to set higher rates for customers of certain ethnicities.  AI often relies on opaque algorithms, which makes it difficult to understand why AI decision-making tools reach the decisions they do.  In the case of the biased insurance algorithm, for example, the bias might not be apparent on its face; investigation might reveal that the algorithm’s reliance on data points such as zip codes is in fact driving the skewed results.  A medical diagnostic tool trained primarily on data from white men might be less likely to accurately diagnose the target condition in women and nonwhite patients.  Similarly, many studies have concluded that facial recognition technology tends to be less accurate when identifying women and people of color because its algorithms were trained predominantly on images of white men; use of facial recognition technology by law enforcement has become controversial for this reason.  AI also learns and adapts over time, so an algorithm that starts out making unbiased decisions can later skew toward biased decision-making depending on the data it receives.
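
To make the zip-code example concrete, the short sketch below uses entirely hypothetical data and a made-up pricing rule (not any actual insurer’s model) to show how a tool that never sees a protected attribute can still produce systematically different outcomes for different groups when a facially neutral input, such as zip code, correlates with group membership.

```python
from collections import defaultdict

# Hypothetical applicants: the "model" below uses only the zip code,
# but zip code happens to correlate with group membership in this toy data.
applicants = [
    {"zip": "10001", "group": "A", "base_rate": 100},
    {"zip": "10001", "group": "A", "base_rate": 100},
    {"zip": "60629", "group": "A", "base_rate": 100},
    {"zip": "60629", "group": "B", "base_rate": 100},
    {"zip": "60629", "group": "B", "base_rate": 100},
    {"zip": "10001", "group": "B", "base_rate": 100},
]

# Hypothetical learned pricing rule: a surcharge tied to zip code alone.
ZIP_SURCHARGE = {"10001": 0, "60629": 40}

def quote(applicant):
    """Return a premium quote using only the zip code (no protected attribute)."""
    return applicant["base_rate"] + ZIP_SURCHARGE[applicant["zip"]]

# Audit step: compare average quoted premiums by group.
totals, counts = defaultdict(float), defaultdict(int)
for a in applicants:
    totals[a["group"]] += quote(a)
    counts[a["group"]] += 1

for group in sorted(totals):
    avg = totals[group] / counts[group]
    print(f"Group {group}: average quoted premium = {avg:.2f}")
# Because zip code acts as a proxy for group membership, group B's average
# premium comes out higher even though "group" was never a model input.
```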

AI also raises concerns about privacy.  AI learns and develops by consuming and analyzing data, which is often deeply personal in nature, such as medical information or internet search history.  Amazon’s Alexa smart speaker devices drew criticism (and lawsuits) for the collection and use of recordings of people’s voices to improve the speech recognition software, including human review of the recordings to better train Alexa.  The output from AI that has analyzed personal data can be similarly invasive.  Advertisers feed internet users’ data into predictive models in order to serve those users targeted advertisements.  Implementations of facial recognition technology run the gamut from the creepy but relatively benign (recognizing someone in a photo posted to social media in order to suggest that a user “tag” that person) to surveillance by law enforcement and governments.  A number of cities have already passed laws barring the use of facial recognition.

The use of AI algorithms in social media recently came under fire following testimony by Facebook whistleblower Frances Haugen, who described an engagement algorithm designed to push content to the users it calculated were most likely to engage with it, but which in the process directed children and other users to potentially harmful content.  In the hiring context, Amazon drew outrage when it disclosed that a recruiting tool it had developed favored men over women when assessing applicants.  When Amazon looked into the reason for the biased outcome, it found that the AI was favoring applicants who used certain words in their resumes, words that happened to be used more often by men than by women.  Amazon scrapped the tool.

The replacement of human decision-making with AI also raises ethical questions, such as whether a self-driving car should prioritize the life of a pedestrian or its passenger, or whether an algorithm should be trusted to predict whether a convicted criminal will reoffend.  AI is unlikely ever to fully replace human decision-makers in decisions of great importance, even where it is used to assist them.

Given AI’s ubiquity and the risks it poses, it has drawn interest from regulators around the world.  The European Union has proposed comprehensive regulation that would sort AI applications into three classes: those posing unacceptable risk, which would be banned; high-risk applications, which would be subject to specific legal requirements; and everything else, which would be largely unregulated.  By contrast, regulation of AI in the U.S. is in its very early stages, with state and federal agencies adopting their own patchwork of measures.

Even with this decentralized approach, momentum for AI regulation in the United States is growing, and comprehensive governance rules seem inevitable.  Organizations across all sectors use AI in one form or another, and such use is sure to increase in the coming years.  Organizations using AI should take affirmative steps now to assess how they use it and to implement measures to monitor that use and ensure that AI is applied in a manner that is unbiased, ethical, and transparent.  Such proactive measures will not only facilitate more effective deployment of AI; they will also give industries and regulators data about possible paths forward for regulating AI, and will position organizations ahead of impending regulations.

I. AI at the Federal Level

The pace of AI regulatory activity at the federal level appears to be increasing.  The most significant developments are discussed below.

National Artificial Intelligence Initiative

The National AI Initiative Act of 2020 (AI Initiative Act) became law on January 1, 2021. Under that Act, since the beginning of 2021, the federal government has established a National Artificial Intelligence Initiative Office (NAIIO), as well as a related task force and advisory committee.  The NAIIO’s stated mission is to “ensure continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.”

The National AI Advisory Committee, which launched in April 2022, is charged with providing recommendations to the President and the NAIIO regarding AI-related topics such as competitiveness in the AI space, AI workforce issues, and the state of AI science. The Committee is an interdisciplinary group of 27 experts from academia, nonprofit organizations, and the civil and private sectors, including members from Google, Microsoft, Amazon Web Services, the AFL-CIO, and Stanford University.  The Advisory Committee held its first meeting in May 2022.  At that meeting, Committee Chair Miriam Vogel introduced five working groups into which committee members would be divided:  (1) Leadership in Trustworthy AI, (2) Leadership in Research and Development, (3) Supporting the U.S. Workforce and Providing Opportunity, (4) U.S. Leadership and Competitiveness, and (5) International Cooperation. The Committee will be establishing a subcommittee on AI and law enforcement. That subcommittee will advise the President regarding the use of AI in law enforcement, including with respect to data security, bias, AI adoption, and standards to ensure that AI use respects individuals’ rights and civil liberties.

The AI Initiative Act also led to the creation of the National Artificial Intelligence Research Resource (NAIRR) Task Force by the National Science Foundation in consultation with the White House Office of Science and Technology Policy.  The Task Force launched in June 2021 with the goal of “writing the road map for expanding access to critical resources and educational tools that will spur AI innovation and economic prosperity nationwide.”

The Task Force issued an interim report in May 2022 that sets out the Task Force’s vision for the NAIRR, developed following public meetings, engagement with numerous experts, and responses to requests for information.  The report explains that, because of the massive computational power and amount of data required to explore AI, a resource divide has developed between the large companies and universities that have the resources to research AI and those that lack them.  The Task Force envisions the NAIRR as a “shared cyberinfrastructure that fuels AI discovery and innovation” that would be used by “a diverse set of researchers and students across a range of fields.”  It would be accessible through a portal and run by a single management entity, with external bodies providing oversight and guidance.

The Task Force noted that it intends to release a roadmap for implementing its vision for NAIRR around November 2022.  On July 25, 2022, the Task Force held its eighth virtual public meeting, where it discussed its implementation plans. The Task Force will issue its final report at the end of 2022. 

The Justice Against Malicious Algorithms Act

In October 2021, in the wake of Facebook whistleblower Frances Haugen’s Congressional testimony, House Democrats introduced the Justice Against Malicious Algorithms Act (JAMA).  The Energy and Commerce Committee press release announcing the bill includes statements from both Committee Chairman Frank Pallone, Jr. (D-NJ) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) that the time for self-regulation by social media platforms is over.  JAMA takes aim at online platforms that use algorithms to make personalized content recommendations by removing the immunity such platforms currently have under Section 230 of the Communications Act of 1934 where the platform “knowingly or recklessly uses an algorithm to recommend content to a user based on that personal information, and if that recommendation materially contributes to physical or severe emotional injury.”  In other words, if a platform’s personalized recommendation of content materially contributes to a user’s physical or severe emotional injury, the user could hold the platform liable for that harm.

Although the legislation may be well-intentioned, some have criticized it as the wrong approach to the problem it seeks to address.  The Electronic Frontier Foundation has raised concerns about the bill, noting that its application to platforms with more than five million monthly visitors that use vaguely defined “personalized algorithms” would leave many small and mid-sized websites—such as hobbyist and fitness sites—on the liability hook alongside the Facebooks and the Googles.  The EFF predicts that, if passed, JAMA would make it difficult to start or maintain a small company or website because the owner would lack the resources to defend against the lawsuits that would ensue once Section 230 immunity were removed.  JAMA’s passage could also incentivize censorship of users and lead to the elimination of tools that help users connect with one another and find useful content.  Even so, the lawmaker comments accompanying the bill’s announcement signal an appetite within Congress for taking steps to regulate the use of algorithms.

The Algorithmic Accountability Act of 2022

In early 2022, both the House and the Senate introduced the Algorithmic Accountability Act of 2022.  If passed, the Act would direct the FTC to regulate entities that use “augmented critical decision processes,” i.e., processes that use an automated system to make a “critical decision,” defined as one that has a “legal, material, or similarly significant effect on a consumer’s life relating to access to or the cost, terms, or availability of” things such as education, employment, and healthcare.  (An “automated decision system” is “any system, software, or process (including one derived from machine learning…or artificial intelligence techniques…) that uses computation” and serves as a basis for a decision.)  The Act would require the FTC to issue regulations requiring covered entities to conduct impact assessments of these tools.  The impact assessment would include testing and evaluation of historical performance, including documenting “an evaluation of any differential performance associated with consumers’ race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status, and any other characteristics the Commission deems appropriate.”  The regulations would also require entities to “attempt to eliminate or mitigate, in a timely manner, any impact made by an augmented critical decision process that demonstrates a likely material negative impact that has legal or similarly significant effects on a consumer’s life.”  The Act provides for enforcement by the FTC and state attorneys general.  Commentators estimate that, even if the Act passes, its requirements would likely not take effect for at least three to four years.

Blueprint for an AI Bill of Rights

On October 4, 2022, the White House Office of Science and Technology Policy (OSTP) released its Blueprint for an AI Bill of Rights. The Blueprint acknowledges risks posed by AI, including discrimination and “unchecked” data collection, but asserts that these “deeply harmful” outcomes “are not inevitable.” The OSTP asserts that the country’s progress in utilizing AI “must not come at the price of civil rights or democratic values,” before setting out five principles to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

The Blueprint identifies five principles:  (1) Safe and Effective Systems (i.e., individuals “should be protected from unsafe or ineffective systems,” including through testing and ongoing monitoring of automated systems to ensure they are safe and effective); (2) Algorithmic Discrimination Protections (i.e., individuals should not face algorithmic discrimination, and systems should be designed and used in an equitable manner; this protection includes equity assessments and use of representative data); (3) Data Privacy (i.e., built-in protections should protect individuals from abusive data practices, and individuals should be given agency over how their data is used; this principle includes a call to change “current hard-to-understand notice-and-choice practices for broad uses of data”); (4) Notice and Explanation (i.e., individuals should be informed when an automated system is being used and of how it impacts them, including via accessible, plain-language explanatory documentation); and (5) Human Alternatives, Consideration, and Fallback (i.e., individuals should be able to opt out of automated systems and instead have access to a human, where appropriate; individuals should also have access to a human fallback where possible; and automated systems deployed in sensitive areas such as healthcare and criminal justice should “be tailored to the purpose, provide meaningful access for oversight,” include training for humans using the system, and “incorporate human consideration for adverse or high-risk decisions”).

The OSTP describes the Blueprint as a “guide for society that protects all people” from the threats posed by AI.  The OSTP has also released a handbook, “From Principles to Practice,” to accompany the Blueprint and to facilitate practical implementation of its principles.  While the Blueprint is non-binding, the OSTP encourages stakeholders to use it “to inform policy decisions” where “existing law or policy…do[es] not already provide guidance,” and it underscores the federal government’s continued—and increasing—interest in AI.

The Federal Trade Commission

On June 16, 2022, the Federal Trade Commission (FTC) submitted a report to Congress warning about the dangers of using AI to combat online problems. The FTC issued the report in response to a Congressional directive to examine how AI might be used to address “online harms” such as fraud, media manipulation, online harassment, and more. The FTC report warns against using AI to address these harms and observes that relying on AI introduces its own problems. In the FTC’s press release coinciding with the report, the Director of the Consumer Protection Bureau described the report as “emphasiz[ing] that nobody should treat AI as the solution to the spread of harmful online content….  Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”

The report notes that most of the problems Congress had charged the FTC with considering are not actually the product of AI but stem instead from human factors, such as greed, hate, sickness, and violence.  The report also emphasizes that AI’s role with respect to these harms is not neutral, pointing as an example to social media platforms, whose “engagement engines” rely on AI, “power human data extraction and deriv[e] from surveillance economics,” and amplify harmful content.  As such, the FTC’s report suggests, companies could “use” AI to address the spread of harmful content simply by ceasing to use it.

The report identifies weaknesses of AI, describing AI-based detection tools as “blunt instruments” and noting that they suffer from such problems as built-in imprecision and an inability to understand context, and may lead to biased, discriminatory, or unfair outcomes. The report cautions against overreliance on AI, and recommends that humans remain involved in monitoring the decisions made by AI tools, that AI decision tools be transparent (including being explainable and understandable), and that companies that rely on AI tools to address harmful content be held accountable for both their data practices and their results.

Notably, the FTC included a shot across the bow, stating that its work in this area “will likely deepen as AI’s presence continues to rise in commerce.” Some commentators interpreted this statement, along with the significant reservations the FTC expresses about AI in the report and the other measures it has taken to address the use of AI, as a signal that increased FTC regulation of AI is on the horizon.

To that end, in August 2022 the FTC adopted an Advance Notice of Proposed Rulemaking to explore rules addressing commercial surveillance and data security practices, seeking public comment on the harms of commercial surveillance and on whether rules are necessary to protect personal data.  The press release announcing the decision notes that enforcement of the FTC Act alone would likely not be sufficient to protect consumers, but that “rules that establish clear privacy and data security requirements across the board and provide the Commission the authority to seek financial penalties for first-time violations could incentivize all companies to invest more consistently in compliant practices.”  On September 8, 2022, the FTC held a virtual public forum on the issue.

The Department of Commerce

The Commerce Department’s National Institute of Standards and Technology (NIST) is developing a risk management framework for AI (AI RMF).  NIST recently issued a second draft of the AI RMF for public comment.  The draft identifies seven “key characteristics” of a trustworthy AI system, namely, that the AI be: (1) valid and reliable, (2) safe, (3) fair, with harmful bias managed, (4) secure and resilient, (5) accountable and transparent, (6) explainable and interpretable, and (7) privacy-enhanced.

The AI RMF is a voluntary framework that aims to provide organizations of all sizes and in all sectors with a process to address the unique risks posed by AI.  In other words, the framework is meant to fill gaps specific to AI.  NIST’s recent draft notes that the AI RMF is not a compliance mechanism, describing it as “law and regulation agnostic,” given the evolving nature of AI policy discussions.  The AI RMF is also intended to be updated as technology and our understanding evolve.  NIST suggests that applying the AI RMF’s recommendations, particularly early in the life cycle of an AI system, may reduce the risk that the system’s use will lead to negative impacts and increase its potential benefits.  The AI RMF is sure to be an important resource for any organization looking to address the risks posed by AI systems.

The Food and Drug Administration

The FDA has also made strides toward regulating AI in the medical device space.  In early 2021, the FDA released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan outlining the FDA’s approach to oversight of AI/ML-based software. The Action Plan identifies actions the FDA intends to take, including continuing to develop a proposed framework for regulating AI/ML-based medical software, supporting development of good machine learning practice (GMLP) to assess and improve machine learning algorithms, encouraging a patient-centered approach that includes transparency to both users and patients (including via device labeling), supporting efforts to develop methods to identify and eliminate algorithmic bias and promote algorithm robustness, and supporting the piloting of real-world monitoring of the performance of AI/ML-based SaMD.

In June 2022, the FDA issued guidance for what information should be included in premarket submissions for radiological devices that use quantitative imaging (i.e., devices that generate a quantitative measurement from medical images or imaging data). The information that manufacturers must provide includes a summary of the algorithmic training paradigm the manufacturer uses, as well as a description of how the quantitative imaging function works (including specifying any algorithms used), a technical assessment showing that the algorithm is working correctly and that performance specifications are met, and detailed labeling information that clearly describes functionality and identifies sources of substantial measurement error.

The Equal Employment Opportunity Commission

In October 2021, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative to ensure that AI and algorithmic decision-making tools used in hiring comply with federal civil rights laws.  Since then, the EEOC has sued English-language tutoring companies that allegedly programmed their job application software to reject female applicants over age 55 and male applicants over age 60, in violation of the Age Discrimination in Employment Act.  EEOC Chair Charlotte A. Burrows said that “[e]ven when technology automates the discrimination, the employer is still liable,” and that the case illustrates why the EEOC launched the initiative.

One week later, on May 12, 2022, the EEOC released guidance explaining how employers’ use of algorithmic decision-making software may violate the Americans with Disabilities Act (ADA) and providing “practical tips” both for employers seeking to comply with the ADA and for job applicants and employees who believe their rights may have been violated.  The guidance notes common ways in which an employer might violate the ADA with an algorithm, such as where the employer does not provide a reasonable accommodation necessary for the applicant to be rated fairly by the algorithm, where the algorithm screens out (whether intentionally or unintentionally) an individual with a disability, or where the decision-making tool violates the ADA’s restrictions on disability-related inquiries and medical examinations.

II. State AI Legislation 

AI regulation has generated substantial activity at the state level.  According to the National Conference of State Legislatures, at least 18 states and the District of Columbia have recently introduced bills or resolutions relating to AI.  California, Colorado, Illinois, Vermont, and Washington are among the states that have recently enacted legislation.

California

In September 2022, California passed S.B. 1018, the Platform Accountability and Transparency Act, which requires social media platforms to disclose to the public, on an annual basis, statistics regarding the extent to which content that violates the platform’s policies was “recommended or otherwise amplified by platform algorithms.”  Each violation is subject to a civil penalty of up to $100,000.

California also passed A.B. 2273, The California Age-Appropriate Design Code Act, which requires, among other things, that any business wishing to offer a new online service, product, or feature must first “complete a Data Protection Impact Assessment for any online service, product, or feature likely to be accessed by children,” including addressing whether algorithms used could harm children. 

California is also considering legislation to address the risk of addiction to social media.  A.B. 2408, the Social Media Platform Duty to Children Act, would prohibit social media platforms from using features that the platform knows or reasonably should have known cause children to become addicted to the platform.  The Act would also empower the Attorney General (as well as district, county, and city attorneys) to bring enforcement actions, including to impose civil penalties.

Colorado

In 2022, Colorado enacted S.B. 113, which creates a “task force for the consideration of facial recognition services.”  The task force is charged with examining and then reporting to Colorado’s joint technology committee “the extent to which state and local government agencies are currently using facial recognition services,” and to provide recommendations about the same.  The statute provides that the task force is to consider, among other things, the potential for abuse and threats that facial recognition poses to civil liberties, freedoms, privacy, and security, as well as ways to “facilitate and encourage” the use of facial recognition to benefit individuals, businesses, and institutions while guarding against abuse and threats.

Illinois

In 2021, Illinois amended its Artificial Intelligence Video Interview Act to require “employers that rely solely upon artificial intelligence to determine whether an applicant will qualify for an in-person interview” to report demographic information to the Illinois Department of Commerce and Economic Opportunity, and to require the Department “to analyze the data and report to the Governor and General Assembly whether the data discloses a racial bias in the use of artificial intelligence.”

Illinois also enacted H.B. 645, called the “Future of Work Act,” which creates a task force to, among other things, assess new and emerging technologies with significant potential to impact work and “compile research and best practices…on how to deploy technology to benefit workers and the public good.”

Vermont

Vermont passed H.B. 410 in 2022, which establishes the Division of Artificial Intelligence within the Agency of Digital Services “to review all aspects of artificial intelligence systems developed, employed, or procured in state government,” as well as an Artificial Intelligence Advisory Council to advise the new Division’s Director on the Division’s responsibilities and to engage in public outreach and education about AI.

Washington

Washington passed legislation authorizing funding for the office of the chief information officer to create a work group “to develop recommendations for changes in state law and policy regarding the development, procurement, and use of automated decision systems by public agencies.”  The legislation tasks the work group with examining, among other things, when state agency use of automated decision-making systems and AI-enhanced profiling systems should be prohibited.

Pending Legislation In Other States

AI-related bills are pending in other states.  For example, several states are considering legislation to establish commissions or task forces with such goals as researching technology’s impact on work and addressing the use of AI in government decision-making.  Other states are considering studies on AI’s impact on work and the economy.

Some states have proposed legislation to address discrimination stemming from AI tools.  These include laws that would bar car insurance companies from discriminating based on socioeconomic factors in the algorithms used to, among other things, set premiums and rates (New York), as well as laws barring the use of data in predictive models in a way that discriminates based on a protected status, such as race or gender, or even barring the use of certain data in predictive models outright (Rhode Island, Illinois, District of Columbia).

Illinois is considering legislation governing AI in the health care context, which would require hospitals intending to use a diagnostic algorithm to first confirm that the algorithm “has been shown to achieve as or more accurate diagnostic results than other diagnostic means, and is not the only method of diagnosis available to a patient.”

Failed Legislation

A number of bills concerning AI regulation that had been pending throughout the country failed to pass.  However, the failed legislation touches on many of the same subjects as legislation that has either passed or is currently pending, and other AI-related bills have passed in the very states where some measures failed, so the failure of some AI-related legislation does not suggest that the momentum behind AI regulation is slowing.  For example, although measures in Vermont related to government use of automated decision systems and to establishing an advisory group to address bias in software failed to pass, Vermont did pass legislation establishing a Division of Artificial Intelligence.  Similarly, although some states’ proposed legislation seeking to create committees, commissions, and task forces failed to pass, other states’ measures have either passed or remain pending.  And although a Minnesota measure to prohibit social media algorithms that target children failed to pass, a measure with a similar goal is, as discussed above, pending in California, where social media-related legislation is gaining traction.

Local Regulation

New York City recently passed a law regulating the use of AI in employment decisions. The law prevents employers from using “automated employment decision tools” to make employment decisions unless the tool has been audited for bias and the results of the audit have been posted publicly on the employer’s website.
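
The law and its implementing rules define what a bias audit must contain; purely as an illustration of the kind of metric such audits commonly report, the sketch below computes selection rates by group for a hypothetical screening tool and compares each group’s rate to the most-favored group’s, flagging ratios below the four-fifths benchmark long used in federal employment guidance.  The data, categories, and threshold here are hypothetical and are not the statute’s required methodology.

```python
from collections import Counter

# Hypothetical outcomes from an automated screening tool:
# (group, selected?) pairs, one per applicant.
outcomes = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", True),
]

selected = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)

# Selection rate per group, and each group's ratio to the highest rate.
rates = {group: selected[group] / total[group] for group in total}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "  <-- review" if impact_ratio < 0.8 else ""  # four-fifths benchmark
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")
```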

III. Mindful Use and Proactive Self-Monitoring of AI Tools in the Face of Increased Regulatory Activity

Increased regulation is plainly on the horizon.  In light of the surging appetite among government stakeholders for AI regulation, proactive self-regulation is a prudent course. 

Although AI is pervasive, it is not always obvious. As such, a first step for ensuring that an organization is using AI appropriately and that the AI is functioning properly is to determine whether the organization is utilizing AI in the first place.  Many companies may not realize how AI factors into their business, including aspects that are supplied by vendors. Once an organization has identified where and how it uses AI, it should consider potential issues posed by that use and how to address them. 

A 2021 article in the Harvard Business Review identified three major challenges of implementing AI in business: the risk of unfair outcomes, the need for transparency, and the evolving nature of AI, which by its very design learns and changes over time.  The first concern, bias, is a particularly prevalent theme throughout the legislative and regulatory actions discussed above.  However, while it may be easy enough to identify an employment algorithm that filters out job applicants over a certain age, bias in algorithms is not always apparent, which in turn makes it more difficult to address.  As the Harvard Business Review notes, an algorithm might be fair as applied to one population (e.g., the United States population as a whole) but display bias when applied to a different population (e.g., residents of Manhattan).  Relying on average statistics can make an algorithm appear unbiased until one examines a specific region or subpopulation.  And updating an algorithm to account for and eliminate such bias can drive up the cost of AI solutions, reducing advantages of scale or even making the solution prohibitively expensive.
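
To illustrate the subpopulation point, the short sketch below uses hypothetical labels and predictions to evaluate the same set of model outputs once in aggregate and once per region, showing how an acceptable overall accuracy can mask a much weaker result for one subgroup.

```python
# Hypothetical evaluation records: (region, true_label, predicted_label).
records = [
    ("rest_of_country", 1, 1), ("rest_of_country", 0, 0),
    ("rest_of_country", 1, 1), ("rest_of_country", 0, 0),
    ("rest_of_country", 1, 1), ("rest_of_country", 0, 0),
    ("rest_of_country", 1, 1), ("rest_of_country", 0, 0),
    ("manhattan", 1, 0), ("manhattan", 1, 0),
    ("manhattan", 0, 0), ("manhattan", 1, 1),
]

def accuracy(rows):
    """Share of rows where the prediction matches the true label."""
    return sum(1 for _, actual, predicted in rows if actual == predicted) / len(rows)

# The aggregate number looks acceptable...
print(f"Overall accuracy: {accuracy(records):.2f}")

# ...but breaking it out by region exposes a much weaker subgroup.
for region in sorted({region for region, _, _ in records}):
    subset = [row for row in records if row[0] == region]
    print(f"  {region}: accuracy {accuracy(subset):.2f} (n={len(subset)})")
```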

Given the risk of bias in AI decision-making, organizations should consider whether AI is indeed the right fit for the task at issue.  AI might be well-suited to objective and analytical tasks, but less so to tasks requiring judgment or empathy. Indeed, one commonly proposed check on AI decision-making is to keep humans involved to monitor, assess, and sometimes even override the algorithm and its outputs.  Human involvement and intervention is likely to remain an important and enduring component of AI regulation, particularly when decisions with important consequences are involved.  In some cases, human intervention may be an effective safeguard against AI bias.  In others, AI may make more sense as the check on a human primary decision maker.  For example, after Amazon discovered that a recruitment tool had filtered out women candidates, Amazon shared its findings, scrapped plans for the tool, and used the AI to identify biases in its existing hiring process. 

Once a company has decided to use AI for a particular need, it should ensure that it understands how the AI works, including how it was developed and the data used to train it, whether the AI comes from a vendor or is created in-house.  Companies should also regularly monitor and evaluate their AI to ensure that it is working properly and to detect changes in the AI’s decision-making, as the AI tool continues to learn and adapt from the additional data it receives. Organizations should also establish a framework for conducting these regular assessments and addressing any issues identified, whether as part of the organization’s existing compliance function, or as a new function.
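
What such ongoing monitoring looks like will vary by organization and tool.  As a minimal sketch, using hypothetical data and an arbitrary tolerance, the snippet below compares a model’s approval rate in each review period against a baseline recorded when the tool was validated and flags periods where the rate has drifted enough to warrant human review.

```python
# Hypothetical monitoring data: approval rate observed in each review period.
BASELINE_APPROVAL_RATE = 0.62   # measured when the tool was originally validated
TOLERANCE = 0.05                # drift beyond this triggers human review

monthly_approval_rates = {
    "2022-06": 0.61,
    "2022-07": 0.63,
    "2022-08": 0.58,
    "2022-09": 0.52,  # the tool has kept learning; its behavior has shifted
}

def check_drift(period, rate):
    """Flag review periods whose approval rate has drifted past the tolerance."""
    drift = rate - BASELINE_APPROVAL_RATE
    status = "OK" if abs(drift) <= TOLERANCE else "DRIFT - escalate for human review"
    print(f"{period}: approval rate {rate:.2f} (drift {drift:+.2f}) -> {status}")

for period, rate in monthly_approval_rates.items():
    check_drift(period, rate)
```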

IV. Conclusion

AI already plays an important role in our society, and its prevalence, significance, and applications are constantly increasing.  So far, the United States has taken a piecemeal approach to AI regulation.  Given lawmakers’ increased interest in AI, and the risks that AI poses, organizations using AI will benefit from thinking proactively about what actions they can take to implement AI in a manner that is in line with the spirit of potential future regulations (for example, by implementing measures to detect and reduce bias in AI algorithms).  Such an approach may also have the added benefits of increasing society’s understanding of AI and of how to more effectively address the risks it poses.


