Elizabeth M. Adams | Stanford fellow, AI ethics and culture advisor

Elizabeth M. Adams, AI portrait with artist/image influence Wangechi Mutu | Madam Repeateat, 2010. Mixed media, ink, spray paint and collage on Mylar paper.

Learn about Elizabeth Adams and her work in AI ethics and inclusivity as a foundation for successful AI design. One of our 50 Women in AI.


AI ethics and inclusivity

I want to advocate for people who don’t have a voice at the table. I want to study concepts around Information Systems Design, stakeholder theory, responsible leadership and research, design, and design science. Those are things that are intriguing to me, in order to be able to tell the story of why it’s so important for diverse people to participate in innovation and specifically in artificial intelligence.

This 50 Women in AI interview with Elizabeth M. Adams is quite different from the first version. I rewrote it after a follow-up conversation we had. Elizabeth felt that the first version focused too much on her struggles and not enough on the positives of her work. I found our first conversation inspirational, and while I viewed it as a great example of a woman overcoming many challenges in a male-dominated field, Elizabeth felt that the narrative surrounding successful women of color too often focuses on the struggle rather than the positive impact realized along the way.

In addition, with my background in human-centered design, I wanted to share more about how Elizabeth’s work has made a difference to many. Her work could be helpful for others trying to create better experiences and build more effective AI-based tools.

Good design brings in multiple points of view, so that all who are impacted by a tool have a say in its creation. Too often, the design process includes a limited set of views, and even with the best intentions, it can be hard for those who design something to understand how different people will be affected by it. Elizabeth’s work focuses on helping organizations reduce bias to build better AI-powered systems; one of her goals is to advocate for the inclusion of diverse stakeholder groups.


Applying personal experiences with bias to help craft better AI

Elizabeth persevered through a number of obstacles to arrive where she is today: an Affiliate Fellow at Stanford University’s Institute for Human-Centered AI, while earning her Executive Doctorate in Business Administration at Pepperdine Graziadio Business School, with a research focus on Leadership of Responsible AI. She works as an advisor on ethics and inclusion in AI.

She has served as an advisor to the UN on AI ethics and has contributed to the World Economic Forum’s Inclusion and Equity in AI report.

But before these accomplishments, her 20 years as a technologist, coupled with a few unfortunate years of lost income, gave her a first-hand perspective on the impact of a lack of diversity in the decision making and design of technology.

Forming her leadership philosophy of bringing together diverse perspectives  

Rising to an impressive position, Elizabeth headed a systems integration lab at a large government organization. She managed a seven-figure annual budget and led a team of over 200 technologists, coordinating modeling and simulation experts, statisticians, business analysts, software engineers, architects, and developers. The project’s focus? Deliver highly trusted technology products. With this complex mission, she found she could bring many different people and views together to make their lofty goals achievable.

The experience inspired her leadership philosophy based on including diverse perspectives:   

If we can get on the same page and understand our mission, our goals, and make sure that we’re including all the different perspectives and voices, then we should really be able to turn out some fantastic technology.

Despite this success, life’s circumstances sometimes force us to change direction. Elizabeth returned home to Minneapolis to help her family care for her aging father. She sought a similar role but couldn’t find comparable work, and she ended up among the working poor.

Bias in the physical world leads to bias in the digital world

Experiencing bias as she searched for work and rebuilt her career, she gained first-hand experience of how a lack of diversity in decision making and design processes generates an unintended ripple effect. Each stage of the design process produces outcomes that scale: small errors made early on can be magnified down the road.

These types of bias errors can be reduced by including diverse viewpoints in each phase and role of the AI lifecycle. It’s especially important to include those who may have been adversely impacted by AI bias, such as employees who have experienced bias as members of society. Data scrutiny and validation must also be part of the plan, to further ensure prejudices aren’t creeping in and creating a ripple effect of biases that would result in divisive rather than inclusive AI.


Each stage of the AI lifecycle provides an opportunity for inclusive actions. Image credit Elizabeth M. Adams
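As one small illustration of the kind of data scrutiny described above, a team can compare a model’s rate of favorable outcomes across demographic groups before shipping it. This is a minimal sketch, not drawn from Elizabeth’s own work; the group names, decisions, and tolerance threshold are all illustrative assumptions.

```python
# Minimal sketch of one "data scrutiny" check: compare a model's
# positive-outcome rate across demographic groups. The groups,
# decisions, and 20% tolerance below are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def parity_gap(outcomes):
    """Difference between the highest and lowest selection rates."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: approval decisions for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

gap = parity_gap(decisions)
if gap > 0.2:  # an illustrative tolerance, not a legal standard
    print(f"Selection-rate gap of {gap:.0%} warrants review")
```

A failing check like this doesn’t prove discrimination, but it is exactly the kind of early signal that, caught at one stage of the lifecycle, prevents the ripple effect described above.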


Elizabeth’s curiosity drove her forward. She started planting seeds to help others whose experiences of bias had limited their professional and personal opportunities.

…watching people build technology around me that had a voice in that process, and also observing people and how they interacted with technology, when they felt like they didn’t have a voice in the process, drives the work that I do today. I seek to make sure that technology is very inclusive, to include gender, ethnic diversity, racial diversity, and diversity of lived experiences, whenever I do the work that I do.

AI becomes the path forward

As Elizabeth was identifying the challenges posed by lack of diversity, she came across an influential video in 2018 called AI Ain’t I a Woman by Dr. Joy Buolamwini.

At that point, she was not quite sure where her path would lead. But she knew AI was going to be her way forward. She was inspired to learn about and then write books on what she herself needed to know.

So I saw that video. And I knew instantly that inclusion in AI was going to be my path forward. Now I didn’t know how, I didn’t know what, I just knew. So I took the resources that I had at the time, a laptop and PowerPoint, and created two free ebooks. The first was ‘3 Ways to Super Charge Your Tech Life’. I was writing about that and following my own steps, which actually worked. I guess I sort of manifested opportunities. And then the second one was called ‘3 Reasons Black Women Should Care about Artificial Intelligence’. And the second book helped crystallize my message.

The impacts of AI bias

As she evolved her technology background into an AI focus, she started to gain an understanding of bias in AI and began working to help others reduce it. She noted that while there are no guarantees of eliminating bias, the first step is to be aware and start asking questions.

We talked about how policy decisions often make it harder for vulnerable populations to thrive in the age of AI. Such communities are more likely to be disadvantaged by technologies used for facial recognition or managing medical records.

I would say as a whole, governments and communities don’t know a lot about technology, but it’s an opportunity for them to learn. And because I do know about technology, I play this kind of liaison role. And so it was one of the recommendations that I shared with the City of Minneapolis: you really need a technologist on staff who works with community, whether you’re talking about deciding, you know, the patterns for traffic, or deciding whether or not to use license plate readers. Discussions concerning what is happening with the data suggest you really need a technologist who understands the potential issues around data to work with the community as well.


Lack of visibility in the physical world leads to lack of visibility in the digital world

The same biases experienced in the physical world can be seen replicated across digital and AI platforms.

In a field experiment that I conducted as part of my doctoral experience, I had to interview five leaders who are African American, who work in the space of responsible AI. And I asked them a series of questions about their professional and personal experiences. And one particular question I asked was ‘How does it make you feel knowing that AI bias exists?’ And overwhelmingly, the theme came back saying that I don’t already feel seen in regular spaces. And now I don’t feel seen with AI. Because training datasets don’t include me, or the people designing the technology don’t look like me.

Walking the inclusivity walk

So how do we overcome this bias? Elizabeth noted that there are many highly qualified professionals who can contribute to the responsible AI process; however, they are not being invited. A number of inclusive AI guidelines lack diverse authorship, a basic step toward ensuring an inclusive AI practice.

Her dissertation focuses on responsible AI and the African American employee stakeholder experience. This same approach to inclusivity can be applied to others experiencing discrimination, such as those from marginalized communities or with physical disabilities. 

It’s not only those who are discriminated against who can be negatively impacted. Businesses developing and selling AI technology risk lost sales and protests. And they will now be held responsible by regulatory bodies.

FTC fairness guidance – businesses to be held responsible for discriminatory AI

The negative impact of biased AI extends to businesses who sell AI based tools. Elizabeth’s work includes helping organizations who create AI platforms understand why some oppose what they do. For example, when communities ask to ban certain types of technology, what are their reasons and how can the technology be improved to address their concerns?

Beyond protests, there are now consequences for organizations that do not consider the impact of their work. In 2021, US regulators took steps to address AI bias.

The FTC’s fairness guidance provides a step forward on accountability: it holds an organization responsible if its outcomes are discriminatory. “The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.”

Balancing the quantitative with qualitative to bring benefits to business with AI leadership

Much of Elizabeth’s work looks at what’s behind the quantitative data. She examines the more human, qualitative aspects to bring understanding to the big data and the numbers.

I’m interested in the rich experiences of these employees and what it is that they’re feeling about responsible AI and leadership of responsible AI. There are not a lot of qualitative studies done in the area of artificial intelligence in general, but specifically around responsible AI.

Why focus on all of this? Organizations that make the effort to develop inclusive AI practices are more likely to gain employee trust. In addition, they learn more as an organization, and they create a better organizational culture, all thanks to including employees in the decision-making process.

So responsible AI becomes a shared leadership practice. Leaders gain insights on what is working for employee stakeholders, or what is not working for them. So this is the organizational learning part. And by doing this, you’re not just leaving everything up to AI to make the decisions for you. You are not just looking at the data and saying, ‘Oh, we have a better chance of moving into this market than we did last year, because AI has shown us.’ Diverse employee stakeholders can say, ‘Yeah, but here’s something else you might consider.’ That’s where human intelligence and artificial intelligence meet to reason, because employee stakeholders are invited to bring their own experiences into the conversation.


How to combat bias in AI

Elizabeth wants to focus on the joy in her work. Her current interests include fostering responsible AI leadership, and she is especially interested in how leaders create a culture that invites diverse perspectives. She feels that our basic human rights include being able to trust the AI around us, from robots, to apps, to all that powers the AI we see and the AI we don’t. Her goal is to use AI to bring us together, rather than divide us.

I want to wake up in a world where systems are not designed to keep me out, because of my gender or my color of my skin. I want to not just live in the age of AI but thrive with it and because of it.

You can reach Elizabeth M. Adams at EMA Advisory Services – Leadership of Responsible AI


Read more about the 50 Women in AI Project.