
Will Sweeping AI “Equity” Regulations Do More Harm than Good?

Author: Christopher D. Warren | December 24, 2023

Biden Administration Focused on Equity in AI

The Biden Administration is working feverishly to regulate artificial intelligence (AI), announcing a series of executive actions in recent months. The Administration’s latest Executive Order, announced on October 30, 2023, establishes broad new standards for AI tools, such as requiring companies like Google, Amazon, and OpenAI to share safety test results with the federal government. The Executive Order also calls for addressing algorithmic discrimination and minimizing the harms of AI for workers, building on prior executive actions addressing potential bias in AI.

Not surprisingly, the Biden Administration’s sweeping policies are facing pushback. While most agree that AI presents both significant risks and opportunities, striking the proper balance is far more challenging. If regulation is not nuanced, its compliance burdens will stifle innovation. Trying to hard-code equity into AI is one example of where regulation may do more harm than good.

The Biden Administration has steadily ramped up its efforts to regulate AI over the past year. The aggressive response comes as tech leaders, such as Elon Musk and OpenAI’s Sam Altman, have sounded the alarm about the potentially disastrous results of leaving the new technology unchecked.

Earlier this year, the Biden Administration issued an Executive Order directing agencies to combat algorithmic discrimination. Under that Executive Order, federal agencies must design, develop, acquire, and use artificial intelligence in the Federal Government “in a manner that advances equity.” The White House has also published a Blueprint for an AI Bill of Rights (AI Bill of Rights), which likewise calls for algorithmic discrimination protections.

As outlined in the AI Bill of Rights, algorithmic discrimination occurs when “automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” The document further advises that, depending on the specific circumstances, algorithmic discrimination may violate legal protections, such as Title VII of the Civil Rights Act of 1964.

To combat potential bias, the Federal Government wants designers, developers, and deployers of automated systems to take “proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems equitably.” It maintains that such protections should include “proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.”
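
One of the recommended protections, screening for “proxies for demographic features,” lends itself to a simple illustration. The sketch below is a minimal example of what such a check might look like: it flags an input feature that correlates strongly with a protected attribute. The data, the feature, and the 0.6 threshold are all invented for illustration; real disparity testing would be far more rigorous.

```python
import statistics

def correlation(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical sample: a candidate feature (e.g., a zip-code-derived
# score) alongside a protected attribute encoded as 0/1.
feature = [52.0, 48.5, 71.0, 39.0, 66.5, 41.0]
protected = [1.0, 1.0, 0.0, 1.0, 0.0, 1.0]

r = correlation(feature, protected)
if abs(r) > 0.6:  # illustrative threshold, not a legal standard
    print(f"feature may act as a demographic proxy (r={r:.2f}); review before deployment")
```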

Employers Already in the Regulatory Crosshairs

Employers are among the first to feel the scrutiny of state and federal AI oversight, particularly concerning equity. The Biden Administration’s latest Executive Order calls for the development of “best practices” to address harms that AI may cause workers, including job displacement, and requires a report on the labor market impacts of the new technology.

Federal, state, and local agencies have also begun to warn employers that the use of AI may result in liability. In May, the Equal Employment Opportunity Commission (EEOC) published guidance advising that if the use of an algorithmic decision-making tool adversely affects individuals of a particular race, color, religion, sex, or national origin, or individuals with a particular combination of such characteristics, then use of the tool will violate Title VII unless the employer can show that such use is “job-related and consistent with business necessity.” The agency further advised that employers may face liability even if the selection tool was developed by an outside vendor.

The EEOC guidance advises that employers may rely on the four-fifths rule—a general rule of thumb under which the selection rate for one group should be at least four-fifths (80%) of the selection rate for the most-selected group—to help determine whether an algorithmic decision-making tool adversely affects a particular protected group. However, the guidance does not explain how employers should establish that AI-based tools are job-related.
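
To make the rule of thumb concrete, here is a minimal sketch of the four-fifths comparison. The group labels and applicant counts are hypothetical; none of these numbers come from the EEOC guidance.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool selects."""
    return selected / applicants

# Hypothetical outcomes from an algorithmic screening tool.
rates = {
    "group_a": selection_rate(48, 80),  # 60%
    "group_b": selection_rate(12, 30),  # 40%
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # Under the rule of thumb, an impact ratio below 4/5 (0.8) suggests
    # possible adverse impact; it is not conclusive on its own.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here group_b’s 40% rate is only two-thirds of group_a’s 60% rate, so the tool’s use would warrant closer review under the rule.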

Striking the Proper Balance Is Essential to Regulating AI

While equity is always a laudable goal, it can come at great cost. I recently read “The Lessons of History,” in which Will Durant examines that tension, observing that, throughout history, civilizations that push blindly toward “equality” are doomed to fail. “Nature smiles at the union of freedom and equality in our utopias,” he writes. “For freedom and equality are sworn and everlasting enemies, and when one prevails, the other dies.”

In the context of AI, the costs and benefits of tipping the scales in favor of equity must be weighed. For instance, the EEOC has advised that filtering candidates based on certain criteria could give rise to liability if it harms protected groups. Yet altering algorithms to address potential bias may also have the unintended side effect of preventing candidates from standing out on merit.

Balancing Equity and Advancement: Rethinking AI Policy for a Fair Future

As the U.S. Supreme Court recently emphasized when striking down the race-conscious college admissions processes at Harvard and the University of North Carolina, attempting to increase equity for one group almost always disadvantages another. “College admissions are zero-sum. A benefit provided to some applicants but not to others necessarily advantages the former group at the expense of the latter,” Chief Justice John Roberts wrote in Students for Fair Admissions v. Harvard College and SFFA v. University of North Carolina.

Studies have also routinely shown that, despite being well-intentioned, affirmative action did not end discrimination and may have harmed the minority applicants it sought to help. In its current form, the Federal Government’s AI policy is poised to repeat the same mistakes. AI has the potential to be a revolutionary tool for businesses, reducing costs and increasing productivity in ways that would ideally lead to higher wages and profits. By requiring developers to hard-code “equity” into their software, the Federal Government is effectively asking them to intentionally reduce the accuracy of the results an AI tool delivers to its end user for the benefit of a potentially marginalized group.

So where does that leave us? Rather than trying to hard-code equity into AI tools through regulation or requiring the tools to root out implicit bias, mandating transparency about the tools and how they are used may be a better way to ensure fairness. It would also help ensure that the technology can continue to advance without being bogged down in red tape.

What’s Next for AI Oversight?

AI is evolving so quickly that regulators can’t keep up (although the Biden Administration is certainly trying). On the federal level, lawmakers must reach a consensus on how best to regulate the new technology before any legislation can be enacted, and many of the Biden Administration’s initiatives rely on Congressional funding. Getting lawmakers on the same page will likely be a lengthy process, as evidenced by Congress’s inability to enact federal privacy legislation over the past several years.

In the meantime, additional executive actions by the Biden Administration, including federal agency rulemaking, will likely serve as placeholders. State- and local-level regulations, such as New York City’s ordinance regulating employer use of automated employment decision tools (AEDTs), will also likely multiply in the absence of federal standards.

As the law continues to evolve, employers should tread carefully when implementing AI, particularly if used to aid employment-related decisions. As always, working with experienced counsel can go a long way in managing risk and staying on top of legal developments. Businesses should also provide input as state and federal regulators begin to incorporate AI policy into formal rulemaking.
