Desmond Patton Assumes Diversity Role at Columbia’s Data Science Institute

June 2, 2021
By Communications Office

A leader in combining social work and technology, Patton will bring an equity orientation to cross-university collaborative projects involving the data sciences.

Desmond Upton Patton, Associate Professor and Senior Associate Dean of Curriculum Innovation and Academic Affairs, has been named associate director of diversity, equity, and inclusion for Columbia University’s Data Science Institute (DSI).

A self-described “social worker with an interest in technology,” Patton has partnered for several years with DSI on projects that address issues of racial equity and fairness in data science, such as those that manifest in training data, machine learning algorithms and models, and automated decision making. In his new role, he envisions a multistep process through which DSI will factor considerations of race and bias into all of its academic work, and develop the guidelines necessary to help practitioners with this task.

Dean Melissa Begg said, “Dr. Patton brings an innovative and essential new lens to the impact of systemic racism and inequities in data science and in the academy. His unique approach to research emphasizes diverse voices, forthright analysis of racial bias and its implications, and a new vision for achieving racial justice in an increasingly data-driven world. There are few experts in this ground-breaking research area as productive and original as Dr. Patton, and even fewer with such a strong commitment to diversity and racial equity. I am eager to see all he will achieve in this role.”

Professor Patton kindly agreed to answer a few questions about his partnership with DSI and the new appointment.


You have collaborated for several years with Kathleen McKeown, Henry and Gertrude Rothschild Professor of Computer Science and founding director of DSI. How has that collaboration enriched both of your perspectives?
Kathy and her colleagues have supported my work as the founder of SAFElab, a research initiative focused on examining the ways in which youth of color navigate violence online and offline. My team had been collecting social media posts by gang-involved Chicago youth when we approached Kathy and her team for help in developing tools and AI techniques to study the language and images in these posts. Working together, we were able to create algorithms that detect harmful language in these posts before online conflict escalates into real-life violence. One breakthrough finding came in 2018, when our analysis revealed a recurring pattern in which an expression of grief precedes an expression of aggression, with powerful implications for intervention.

While working together on this project, we also came to realize that existing data science techniques are not equipped to analyze the cultural nuances of language used among predominantly Black and Hispanic youth, a finding that has informed the perspective of Kathy and her team. Because off-the-shelf tools couldn’t handle the vernacular in the social media posts, we had to create new ontologies and dictionaries. We also brought in young people as experiential experts to help interpret the data and give feedback.
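To make the grief-then-aggression pattern concrete, here is a minimal sketch of that kind of sequence detection. It is not SAFElab’s actual pipeline: the lexicons, the time window, and the crude substring matching are all invented for illustration, and the real work relied on community-informed ontologies rather than keyword lists.

```python
from datetime import datetime, timedelta

# Hypothetical seed lexicons, invented for this sketch. Standard tools
# missed local vernacular, which is why the team built custom
# dictionaries and ontologies with community input.
GRIEF_TERMS = {"rip", "gone too soon", "miss you"}
AGGRESSION_TERMS = {"slide", "opps"}

def code_post(text):
    """Assign a coarse psychosocial code to a post via lexicon matching.
    Substring matching is deliberately crude here (e.g. 'rip' matches
    'trip'); it only illustrates the idea of coding posts."""
    lowered = text.lower()
    if any(term in lowered for term in GRIEF_TERMS):
        return "loss"
    if any(term in lowered for term in AGGRESSION_TERMS):
        return "aggression"
    return "other"

def grief_to_aggression(posts, window=timedelta(days=2)):
    """Flag a timeline when a post coded 'loss' is followed within the
    window by a post coded 'aggression' -- the escalation pattern the
    2018 finding describes, which opens a window for intervention."""
    flags = []
    coded = [(ts, code_post(text)) for ts, text in posts]
    for i, (ts_i, code_i) in enumerate(coded):
        if code_i != "loss":
            continue
        for ts_j, code_j in coded[i + 1:]:
            if code_j == "aggression" and ts_j - ts_i <= window:
                flags.append((ts_i, ts_j))
                break
    return flags

# Invented example timeline: a grief post followed by an aggressive one.
timeline = [
    (datetime(2018, 3, 1, 14, 0), "RIP lil bro, gone too soon"),
    (datetime(2018, 3, 2, 9, 30), "bout to slide on the opps"),
]
print(grief_to_aggression(timeline))  # one flagged (grief, aggression) pair
```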

It is interesting that as a social work researcher, you were attuned to the limitations of automated computational tools. About a year ago, you and your DSI colleagues published an article about a systematic way to address these limitations. Can you tell us more about that?
One of the hardest things about being a social worker and partnering with computer scientists is the need for binary classifications. There are lots of other behaviors and dynamics unfolding on social media, but we can only study them through narrowly defined codes. To address this problem, the team recently developed a multimodal social media analysis for gang violence prevention that includes the psychosocial codes of aggression, loss, and substance abuse. These additional codes were informed by the qualitative work I did in Chicago communities before coming to Columbia, where young people consistently raised these three themes. The approach we developed, called the Contextual Analysis of Social Media, is a seven-step process designed to address the inherent biases in the labeling and interpretation of social media content, with the goal of producing labels that are more robust, accurate, and useful.
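As a rough sketch of what such a labeling scheme might look like in practice, the record below pairs a post with one of the three psychosocial codes plus the contextual notes that community experts contribute. The field names and the example post are illustrative assumptions, not the schema from the published article.

```python
from dataclasses import dataclass, field
from enum import Enum

class PsychosocialCode(Enum):
    """The three codes informed by the qualitative work in Chicago,
    plus a catch-all for everything else."""
    AGGRESSION = "aggression"
    LOSS = "loss"
    SUBSTANCE_ABUSE = "substance abuse"
    OTHER = "other"

@dataclass
class AnnotatedPost:
    """One labeled post. The context fields capture the interpretive
    work that a contextual analysis process builds into labeling,
    rather than reducing each post to a bare binary class."""
    post_id: str
    text: str
    image_refs: list = field(default_factory=list)   # multimodal input
    code: PsychosocialCode = PsychosocialCode.OTHER
    expert_notes: str = ""      # interpretation from community experts
    local_terms: dict = field(default_factory=dict)  # vernacular -> gloss

# Hypothetical usage: the expert note and glossary record why the label
# was chosen, so downstream models inherit the context, not just the tag.
post = AnnotatedPost(
    post_id="p-001",
    text="they took my mans",
    code=PsychosocialCode.LOSS,
    expert_notes="Grieving the loss of a close friend.",
    local_terms={"my mans": "my close friend"},
)
print(post.code.value)  # "loss"
```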

Can you elaborate on the kind of work you expect to be doing in your new DEI role for the Data Science Institute?
I would like to see DSI researchers factor considerations of race and bias into all of their academic work, and will be working on developing the guidelines necessary to help practitioners with this task. I also have plans for running a lecture series and fellowship focused on race and data science, as well as a mentorship program to advance a more inclusive student body and faculty. Ultimately, I would like to see Columbia University become a bastion of innovative critical discussions around race and data science. Right now that’s a tall order—we are nowhere near that—but we have a lot of talented folks, and we’re in one of the most diverse cities in the world. I believe we can do it if we choose to do so.


Related links:

Reimagining Collaboration in Mental Health, Societal Bias and Black Communities With Microsoft

PIT-UN Grant Will Further the Aims of the School’s Justice, Equity & Technology Lab