During her studies in Generative AI and Responsible AI, Dr. Clark encountered a concept that immediately resonated with decades of research and community work: AI Ethics is DEI. The more she explored the principles of AI Ethics, the clearer it became that these were not two separate fields converging; they were the same foundational commitments, arrived at from different directions.
One emerged from the technology sector. The other from the struggle for civil rights. Both are asking the same question: who does this system serve, and who does it leave behind?
AI systems must not produce outcomes that systematically disadvantage certain groups — just as equity demands that no community be left behind by design. Both recognize that neutral-seeming systems can cause deeply unequal harm. A system does not have to intend discrimination to produce it.
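To make that concrete, the sketch below, a hypothetical example not drawn from Dr. Clark's work, shows how a model that never sees group membership can still approve one group three times as often as another, which a common screening heuristic would flag. The decision data, groups, and threshold are invented for illustration.

```python
# Hypothetical illustration: a "neutral" approval model that never looks at
# group membership can still approve one group far more often than another.
from collections import defaultdict

# (group, approved) pairs from a hypothetical model's decisions
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# The "four-fifths rule" is one common screening heuristic: flag the system
# if any group's selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * highest}
print(flagged)  # {'B': 0.25} -- disparate impact without any stated intent
```

Nothing in the model's code mentions group A or group B; the disparity surfaces only when someone decides to measure outcomes by group.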
Who is responsible when the system fails? AI Ethics demands clear accountability; DEI demands that the people most affected by failure have a seat at the table where decisions are made. You cannot have one without the other. Accountability without inclusion is just documentation of harm.
Ethical AI requires openness about how systems are built, what data they use, and where they fall short. DEI demands the same of institutions — honest accounting of gaps, disparities, and the communities most impacted by them. Neither field can function on good intentions alone.
Removing bias from AI systems requires diverse, representative data. This is not merely a technical fix; it is a DEI imperative. When the data lacks diversity, the resulting bias is not an accident. It is a design choice. The datasets we build reflect the values and the blind spots of the people who build them.
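As one hedged illustration of what "representative data" means in practice, the sketch below audits a hypothetical training sample against an assumed reference population. Every number in it is invented; the point is the shape of the check, not the figures.

```python
# Hypothetical sketch: auditing whether a training set's group composition
# matches a reference population before any model is trained.
from collections import Counter

training_labels = ["A"] * 880 + ["B"] * 90 + ["C"] * 30  # hypothetical sample
reference_share = {"A": 0.60, "B": 0.25, "C": 0.15}      # hypothetical census

counts = Counter(training_labels)
n = sum(counts.values())

for group, expected in reference_share.items():
    observed = counts[group] / n
    gap = observed - expected
    print(f"{group}: observed {observed:.2%}, expected {expected:.2%}, gap {gap:+.2%}")

# Groups B and C are sharply underrepresented here; any model trained on this
# sample inherits that skew. Catching it requires deciding, up front, which
# populations the data is supposed to represent -- a DEI question, not a
# purely technical one.
```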
The ethical principle that AI should do good and avoid harm maps directly to the equity imperative that technology must serve everyone — especially communities that have historically been underserved, misrepresented, or excluded entirely. Doing good is not enough. The question is: good for whom?
For Dr. Clark, this is not an abstract observation. It is the thread that connects 25 years of research on digital equity, culturally specific technology adoption, and community-centered innovation to the frontier of artificial intelligence. Her 2002 dissertation, Hip Hop Headz and Digital Equity, asked the same questions about representation, access, and who technology serves that AI Ethics is asking today.
The communities she has spent her career studying and serving are the same communities most at risk of being excluded from — or harmed by — AI systems built without them in mind. That is why she is in this conversation. And that is why it matters that people who understand both worlds are at the table.