On day one of the new administration, President Trump revoked former President Biden’s 2023 executive order on the safe, secure, and trustworthy development of AI, which outlined principles for AI safety, disclosure, and risk management. This rollback of regulatory oversight, coupled with the continued explosion of AI and machine learning technologies, puts the future of AI development in the U.S. at a critical inflection point for tech companies, investors, and regulators alike.
The financial services industry is projected to spend $97 billion on AI by 2027, a compound annual growth rate of 29 percent from 2023. As artificial intelligence reshapes the financial landscape, we face a key question: will these technologies maintain the status quo of inequity, or will they be used to dismantle long-standing injustices and create a more equitable future?
This rapid growth has the potential to equalize and democratize financial opportunity for marginalized groups. But without deliberate oversight from regulators and developers, and commitments from investors to guard against bias, AI could instead exacerbate existing inequities for low-income Black and Brown communities.
In an unequal society, AI tools risk simply reflecting existing biases. Bias and discrimination in AI are rarely the result of explicit design; they more often stem from homogeneous design teams, unrepresentative or skewed training data, or plain human error. This is evident in a slew of notable cases, including facial recognition models that fail to recognize darker skin tones, predictive policing systems that over-target neighborhoods of color, and tenant screening algorithms that prevent formerly incarcerated individuals from obtaining housing.
Mortgage lending is a prime example of both the risks and the opportunities of AI. The Fair Housing Act of 1968 outlawed discrimination in mortgage lending across all protected statuses. Yet, according to a 2024 Urban Institute analysis of Home Mortgage Disclosure Act data, Black and Brown borrowers were more than twice as likely to be denied a loan as white borrowers. Lending discrimination has substantial consequences for Black and Brown communities. According to a 2022 UC Berkeley study on fintech lending, averaging across the distribution of these products in the U.S., African American and Latinx borrowers are charged interest rates nearly 5 basis points higher than their credit-equivalent white counterparts, amounting to $450 million in extra interest per year.
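To make the scale of that figure concrete, here is a rough back-of-envelope calculation, our own illustration derived only from the numbers above, not from the study itself:

```python
# Back-of-envelope: how a 5-basis-point markup scales to $450M per year.
rate_markup = 5 / 10_000              # 5 basis points = 0.05% = 0.0005
extra_interest_per_year = 450e6       # $450 million, per the figures above

# The outstanding loan balance implied by those two numbers:
implied_balance = extra_interest_per_year / rate_markup
print(f"${implied_balance:,.0f}")     # $900,000,000,000, i.e. ~$900 billion
```

A markup of a few basis points sounds trivial on a single loan, but applied across hundreds of billions of dollars in balances it compounds into a community-scale transfer of wealth.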
With the rise of artificial intelligence and machine learning, credit risk assessment and decision-making for loan applications and refinancing are increasingly delegated entirely to machines and algorithms. The proprietary nature of these algorithms and the complexity of their construction allow discrimination to hide behind a veneer of objectivity. These “black box” systems can produce life-altering lending decisions while borrowers, and often even lenders, have little insight into their inner workings. And while fair lending laws are stringently enforced in face-to-face lending, AI and algorithmic tools have opened new avenues for discrimination.
For example, the same UC Berkeley study found that algorithm-driven pricing systems tend to raise prices when they sense that consumers are unlikely to shop around. Many people of color live in areas with limited access to credit or lack strong banking relationships, leaving little opportunity for price comparison. As a result, these algorithms, intentionally or not, may impose higher rates on disadvantaged communities with few alternatives, exploiting systemic barriers for profit.
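A minimal sketch of that mechanism, with entirely hypothetical names and numbers (the actual pricing models in the study are proprietary):

```python
def quote_rate(base_rate: float, shop_propensity: float,
               max_markup_bp: float = 10.0) -> float:
    """Illustrative pricing rule: the lower a borrower's estimated
    propensity to comparison-shop (0.0 to 1.0), the larger the
    markup, expressed here in basis points."""
    markup_bp = max_markup_bp * (1.0 - shop_propensity)
    return base_rate + markup_bp / 10_000

# A borrower in a banking desert vs. an active rate shopper:
print(f"{quote_rate(0.065, shop_propensity=0.1):.4%}")  # 6.5900% (9 bp markup)
print(f"{quote_rate(0.065, shop_propensity=0.9):.4%}")  # 6.5100% (1 bp markup)
```

Note that no protected attribute appears anywhere in the code: the disparity enters through a proxy, which is exactly why this kind of pricing can look neutral while producing discriminatory outcomes.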
A 2023 study found that the fairness-aware risk models used by many major banks may still perpetuate broad inequity within a protected class. Many banks rely on “group fairness” metrics to ensure parity in outcomes between groups. But groups, especially along racial or gender lines, are far more diverse than a pricing algorithm accounts for, producing outcomes that look fair for the group as a whole while remaining unfair within subgroups: high-income minority borrowers may receive significantly better rates while low-income minority borrowers receive disproportionately worse ones.
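A toy example of that blind spot, with made-up approval rates chosen only to illustrate the arithmetic:

```python
# Hypothetical approval rates by group and income subgroup.
approval = {
    ("minority", "high_income"): 0.90,
    ("minority", "low_income"):  0.30,
    ("white",    "high_income"): 0.70,
    ("white",    "low_income"):  0.50,
}

# Assume each subgroup is half of its group, so the group average is a mean.
minority_avg = (approval[("minority", "high_income")]
                + approval[("minority", "low_income")]) / 2
white_avg = (approval[("white", "high_income")]
             + approval[("white", "low_income")]) / 2

print(minority_avg, white_avg)  # 0.6 0.6 -> the group-level parity check passes
# Yet low-income minority applicants are approved at 30% vs. 50% for
# low-income white applicants, a 20-point gap the group metric never sees.
```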
When used correctly and with appropriate oversight, AI still presents a promising opportunity for righting inequity. Lending is a significant engine of economic mobility and opportunity for marginalized communities, especially for the 45 million Americans who are credit-underserved or unserved. There are optimistic signs that AI could help drive economic inclusivity: AI tools have shown fairer approval and denial rates than face-to-face lending in some settings, and a 2022 NYU study found that lending automation increased the share of Paycheck Protection Program (PPP) loans going to Black-owned businesses by 12.1 percentage points.
Leading academic institutions are developing Less Discriminatory Alternative (LDA) models that account for fairness and equity in novel ways, such as MIT’s SenSR model, explainable AI (XAI) methods, and UNC’s LDA-XGB1 framework. Few have been put into commercial practice, but these initiatives must be supported by the investment community and beyond for AI to be deployed ethically.
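For a flavor of what fairness-constrained training looks like in code, here is a minimal sketch using the open-source fairlearn library on synthetic data. This is our illustrative choice, not one of the academic frameworks named above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)

# Synthetic stand-ins: applicant features, repayment labels, and a
# hypothetical protected attribute.
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)
group = rng.integers(0, 2, size=1_000)

# Retrain the base model under a demographic-parity constraint, trading
# a little raw accuracy for more equal approval rates across groups.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1_000),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=group)
approvals = mitigator.predict(X)
```

The LDA research above goes well beyond this, handling subgroup fairness, interpretability, and regulatory documentation, but the core idea is the same: fairness becomes an explicit constraint on the model rather than an after-the-fact audit.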
Oversight must keep pace with the ever-changing landscape of AI and machine learning technologies without stifling innovation and progress. Unlike the European Union, the United States lacks comprehensive federal legislation on AI ethics and bias prevention. Congress has shown some momentum toward meeting these regulatory needs in the coming year, with a bipartisan AI roadmap published last May and a newly formed bipartisan Task Force on AI. The Biden administration also made strides in strengthening ethical AI commitments through agencies like the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the Securities and Exchange Commission (SEC). It remains to be seen how Trump’s regulatory agenda will play out at the agency and state levels, though the trends of the last few years may face significant pushback.
With a new administration and the challenges of political bureaucracy, the investment community holds a unique position, and a unique responsibility, to guide the future course of AI. Here’s how conscious investors can help drive ethical AI development and deployment in the financial industry and beyond:
- Attend events like RFKHR’s Summer Investor Conference to see how leaders in the industry are meeting the moment
- Consider employing and investing in Less Discriminatory Alternative (LDA) models, like those mentioned above, that seek to account for fairness across protected statuses
- Advocate for greater explainability in the algorithms tech companies deploy within your business or your portfolio companies, so that each model’s inputs and outcomes are clearly understood (see the sketch after this list)
- Support partner organizations fighting for ethical AI and justice in the tech ecosystem, like the Algorithmic Justice League, All Tech Is Human, and TechEquity Collaborative
- Stay up to date on thought leadership exploring developments in ethical AI and machine learning, including work from RFKHR’s Business and Human Rights team
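On the explainability point above, here is a minimal sketch of what that advocacy translates to in practice, using the open-source SHAP library on synthetic data (an illustrative choice of tool, not one the article prescribes):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for applicant features and repayment outcomes.
feature_names = ["income", "debt_ratio", "credit_history_len", "loan_amount"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each prediction to the input features, turning an
# opaque score into a per-applicant breakdown of what drove the decision.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```

An investor does not need to run this code themselves; the ask is simply that portfolio companies can produce this kind of per-decision accounting, because a model whose outputs cannot be explained cannot be audited for fairness.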