Reimagining Collaboration in Mental Health, Societal Bias and Black Communities With Microsoft

May 5 @ 1:07 pm
By Desmond Upton Patton

CSSW Professor Desmond Upton Patton works with the tech giant to shape new AI strategies that take into account the impact of racism on mental health.

This blog post was originally prepared for SAFE Lab’s Medium.com channel. An edited version has been cross-posted here with the author’s permission.

On October 26, 2020, Walter Wallace, a Black man living with a mental health condition, was murdered by the Philadelphia police. This occurred just one day before Microsoft’s AI for Accessibility Workshop on Societal Bias, Mental Health and Black Communities. As a gay Black man with a PhD and affiliation with an Ivy League university, I had to muster up enough cognitive resources to have critical conversations about race, technology and mental health during the workshop. It was all too close to home.

Mental Health America captures the feelings I had on that day with this passage on its Black And African American Communities And Mental Health page:

Processing and dealing with layers of individual trauma on top of new mass traumas from Covid-19, police brutality and its fetishization in the news media adds compounding layers of complexity for individuals to responsibly manage.

While sitting with these very raw emotions, it is also important to underscore how research and data affect how we understand issues of race and racism. My Columbia School of Social Work colleague Dr. Courtney Cogburn suggests that “so much of what we understand about mental health and human behavior is based on White people.”

How did we get here?

In May 2020, I connected with Wendy Chisholm, a principal accessibility architect for Microsoft’s AI for Accessibility program, and learned that the program was deeply interested in the need for breakthrough innovation in the mental health space, especially in the midst of Covid-19. The team was eager to incorporate and focus on intersectionality within their mental health projects, but was surprised not to have received grant proposals from underrepresented groups addressing mental health and racism.

After that initial meeting, Wendy and I reconnected and talked about Microsoft AI’s particular interest in ensuring that traditionally underrepresented groups show up in mental health data sets that might be analyzed using artificial intelligence. I was then invited to a larger meeting with Microsoft employees to discuss strategies for working with HBCUs and nonprofit organizations to confront the challenge of representation head-on. I was impressed that Wendy had assembled a diverse, interdisciplinary team from across Microsoft to meet with me, a social worker and professor from Columbia University. At the time, I was also a visiting researcher with the Social Media Collective at Microsoft Research New England and at the very beginning stages of writing a book about Black youth, social media, mental health and AI. These topics were top of mind for me.

I appreciated the Microsoft team’s thoughtful and frankly vulnerable understanding of the problem. I took the time to highlight the vast amount of rigorous research happening among Black scholars in the mental health and AI space, research that is often overlooked because those scholars work in separate networks: social workers talking to social workers, psychiatrists talking to psychiatrists, and so forth. We had quite a reflective conversation about what research was valued and who might be deemed an “expert” on these topics. We all agreed on the need for a convening to disrupt the traditional way of building networks. It was time to bring together people who may not usually be in the same room to have critical and robust conversations about Black mental health, AI, racism and data.

Subsequently, we worked together to design a two-day workshop that would frame the problem as one of structural racism and then amplify relevant anti-racist work happening in mental health, further grounding how racism may suffuse the application of AI in mental health research. Before the conference, I co-designed a pre-survey to identify themes we might focus on for deeper discussions in breakout rooms. We created three breakout spaces focusing on:

  1. Understanding the landscape of mental health in Black communities.
  2. Designing AI for mental health with attention to opportunities and challenges.
  3. Designing interventions and interdisciplinary collaborations.

The workshop took place on October 27 and 29, 2020, with about 25 attendees. It was structured so that there would be a day in between the core content days to quell Zoom fatigue and create a relaxed networking opportunity.

READ: Highlights from the workshop keynotes and breakout discussions

What should Microsoft do next?

I was most excited about the conversations that focused on what Microsoft should do next. Where should Microsoft invest their vast resources? A few ideas:

  • Develop an AI course that helps researchers and practitioners understand the potential and limitations of AI.
  • Engage in interdisciplinary praxis, leveraging the voices of social workers, clinicians, researchers, and community members.
  • Seek funding. Invest in supporting people’s time so that they can step away from their everyday work and examine issues of AI and mental health in the Black community more deeply.
  • Co-create. Develop a fellowship that brings diverse voices together to engage in rigorous study, learning, and outputs that lead to innovative research and practice.

The workshop resulted in a report that summarizes the need to tackle racism in AI and mental health research and uncovers gaps in skill sets, inclusion and collaboration; addressing those gaps can strengthen these critical areas of research.

There is much work to be done and we can only make progress by working together. We must center the voices of marginalized communities, advocate for resources and bring diverse voices to the table. This journey must continue because our lives depend on it. We will be limited in AI innovation and disruption if we do not act now. I am hopeful we can create equitable solutions for everyone. Microsoft is committed to continuing this journey with you. Nothing about us, without us.

WATCH: Workshop lectures


Desmond Upton Patton conducts research at the intersection of artificial intelligence, social media, empathy, race and society. He is an associate professor at the Columbia School of Social Work, where he also serves as the Associate Dean for Innovation and Academic Affairs. He is the founding director of SAFE Lab and the co-director of the Justice, Equity and Technology Lab. He is a member of Columbia University’s Data Science Institute and was recently appointed as its Associate Director of Diversity, Equity and Inclusion.


Related links:

Desmond Upton Patton

PIT-UN Grant Will Further the Aims of the School’s Justice, Equity & Technology Lab