
When implemented responsibly, the possibilities of Artificial Intelligence (AI) are vast, with the potential to revolutionize decision-making and resource management across industries. By harnessing large amounts of data, AI can enable banks, hospitals, and other institutions to make informed decisions that have a significant impact on the wellbeing of all citizens and consumers. The recent surge in creative and generative AI tools, from ChatGPT to text-to-image generation, has captured the imagination of creators and entrepreneurs.
We have just begun to scratch the surface of what AI can empower us to do — but despite the promise of AI tools, the technology faces serious concerns regarding its potential for bias. This is especially worrying for minority populations, who may be disproportionately impacted by AI systems that could reinforce existing biases if left unchecked.
The Challenge
The skepticism surrounding AI is understandable. For those who feel disenfranchised by existing systems, faster and more efficient versions of those legacy systems threaten to exclude or mistreat them in the same ways, but at scale and at an accelerated pace. Biased and exploitative practices from the past can easily be encoded into automated decision-making. After all, the data AI models are trained on comes from humans, whether the publications, books, and social media trawled by large language models like ChatGPT, or legacy government databases influenced by historic inequities. When we feed these systems inaccurate or prejudiced data, the results are neither credible nor equitable. Facial recognition tools, for example, can fail people with darker skin more often if they are trained on predominantly white datasets.
The data we create reflects who we are as a culture and society, for better or worse. Being able to recognize that biases exist in the data is a good thing, as it demonstrates a level of understanding and awareness we can grow from. That makes it our responsibility as leaders in the AI development space to establish frameworks that not only empower users but also mitigate potential harm that may arise from the technology.
The Goal
Empowering minority populations with AI is not just an admirable goal; it's essential, considering how powerful and ubiquitous AI tools are becoming. If we don't prioritize equity and trust in the rush to take AI tools to market, we risk inflicting serious damage on the most vulnerable among us.
This is not to say AI implementations should skew results in favor of certain groups out of some ideological preference. Instead, AI should be seen as an opportunity to remove bias from the equation through representative data samples and thorough analyses of the potential impact of a decision across populations.
Responsible and trustworthy AI requires more than mitigating the potential negative impacts. It is also about embracing the technology’s potential to create a more productive and equitable society. Our technology will only be made “better” by AI if it is developed by compassionate and fair-minded people rather than merely accelerating past trends informed by an incomplete history.
Best Practices
How do we make the necessary changes to chart a productive and ethical path forward in AI development? First, it’s essential to implement best practices throughout the development and deployment process. One crucial step is to ensure that the AI system is informed by representative data sources. While historical or legacy data can be important in developing capable systems, it requires extra caution and evaluation for bias and may need to be adjusted to reflect the populations the solution will ultimately affect.
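One way to make the "representative data" check above concrete is to compare a training sample's demographic makeup against a reference population before modeling. The sketch below is illustrative only (not a SAS product feature); the group names, counts, and the 0.8 tolerance are hypothetical assumptions.

```python
# Illustrative sketch: flag demographic groups that are underrepresented in a
# training sample relative to a reference population. All group names, counts,
# and the tolerance threshold are hypothetical.

def underrepresented_groups(sample_counts, population_share, tolerance=0.8):
    """Return groups whose share of the sample falls below
    `tolerance` times their share of the reference population."""
    total = sum(sample_counts.values())
    flagged = {}
    for group, pop_share in population_share.items():
        sample_share = sample_counts.get(group, 0) / total
        if sample_share < tolerance * pop_share:
            flagged[group] = round(sample_share, 3)
    return flagged

# Hypothetical training sample vs. census-style reference shares
sample = {"group_a": 700, "group_b": 250, "group_c": 50}
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(underrepresented_groups(sample, reference))  # group_c is underrepresented
```

A report like this can prompt the kind of adjustment the paragraph above describes, such as collecting more data from the flagged group or reweighting the sample.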
Ideally, bias identification is built into the AI itself. An AI platform should be able to identify not only potentially biased data but also sensitive data that may need extra layers of protection or de-identification. Additional trustworthy AI capabilities include explainability, decision auditability, model monitoring, and governance and accountability. Those capabilities increase confidence in an agency's responsible AI efforts.
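To give one concrete example of an automated bias check, a monitoring pipeline can compute the disparate impact ratio of a model's positive-outcome rates across groups and flag it against the widely used four-fifths heuristic. This is a minimal sketch under hypothetical group labels and decisions, not a description of any specific platform's implementation.

```python
# Illustrative sketch of one automated bias check: the disparate impact ratio
# ("four-fifths rule") on a model's positive-outcome rates per group.
# Group labels and decision lists are hypothetical.

def disparate_impact(outcomes_by_group):
    """outcomes_by_group maps group -> list of 0/1 model decisions.
    Returns (ratio, flagged), where ratio = min rate / max rate and
    flagged is True when ratio < 0.8 (the four-fifths heuristic)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < 0.8

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # approval rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # approval rate 0.375
}
ratio, flagged = disparate_impact(decisions)
print(round(ratio, 2), flagged)  # 0.5 True -> disparity flagged for review
```

A flagged ratio does not prove discrimination by itself, but it is the kind of signal that should trigger the human review and auditability processes described above.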
Another, more immediate way to make a difference is to get involved. That's what I did. I was motivated to get on board with the responsible AI movement as a way to reckon with the potential harm that automated systems, built on the foundations of our past, can cause if not handled with care. Given the history of discrimination faced by Black Americans, I am especially attuned to the lasting impact of past injustices and the importance of mitigating their negative effects on future generations. That's why responsible innovation requires people from various backgrounds, with unique experiences, to participate in decision-making, sharing perspectives that broaden a team's understanding of the populations its solutions will affect. By including diverse voices and perspectives in the development process, businesses and agencies can create more inclusive and robust systems that promote trust and accountability.
Leaders in the AI space must be willing to adjust their approaches to ensure that the technology is deployed in a responsible and trustworthy manner. The SAS Data Ethics Practice helps SAS remain nimble as AI evolves, which is critical to our practice’s mission of empowering employees and customers to deploy data-driven systems that promote human well-being, agency and equity.
It's not only the right thing to do from an ethical perspective; it can also have a serious impact on a company's reputation and bottom line. In the early days of AI, we have seen many examples of high-profile projects coming to sudden and ignominious ends thanks to glaring instances of bias. These incidents not only cause harm to the people affected but also erode trust in AI solutions overall, hindering efforts to build powerful tools that promote equity.
While AI can be harnessed to damaging ends, it also holds the promise to improve our ability to detect and mitigate unintended harm. By embracing best practices, being vigilant for potential discriminatory outcomes, and including diverse perspectives in the design and implementation process, we can harness the immense potential of AI to empower marginalized populations and create a more inclusive and equitable future for all. Ultimately, it is our responsibility to make AI reflect the best of us rather than repeating injustices of the past.
Reggie Townsend is the VP of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human well-being, agency and equity to meet new and existing regulations and policies. Townsend serves on national committees and boards promoting trustworthy and responsible AI, combining his passion and knowledge with SAS’ more than four decades of AI and analytics expertise.