To give AI-focused women academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution.
Anika Collier Navaroli is a Senior Fellow at Columbia University's Tow Center for Digital Journalism and a Technology Public Voices Fellow with the OpEd Project, organized in collaboration with the MacArthur Foundation.
She is known for her research and advocacy work within technology. Previously, she served as a Race and Technology Practitioner Fellow at the Stanford Center on Philanthropy and Civil Society. Before that, she led Trust and Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that preceded the January 6 Capitol attack.
Briefly, how did you get your start in AI? What drew you to the field?
About 20 years ago, I was working as a copy clerk in the newsroom of my hometown newspaper during the summer it went digital. At the time, I was studying journalism. Social media sites like Facebook were sweeping across my campus, and I became obsessed with trying to understand how rules built for the printing press would evolve with emerging technologies. That curiosity led me to law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements unfold. I put it all together and wrote my master's thesis on how new technology was changing the way information flowed and how society exercised freedom of expression.
I worked at a couple of law firms after graduation, and then found my way to the Data and Society Research Institute, leading research for a new think tank on what was then called “big data,” civil rights, and fairness. My work there looked at how early AI systems such as facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and producing unintended consequences that affected marginalized communities. I then worked at Color of Change, where I led the first civil rights audit of a tech company, developed the organization's playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy official on the Trust and Safety teams at Twitter and Twitch.
What work in the AI field are you most proud of?
I am most proud of my work inside technology companies, using policy to correct bias within the algorithmic systems that create culture and knowledge and to practically shift the balance of power. At Twitter, I ran a couple of campaigns to verify people who had shockingly been excluded from the exclusive verification process before, including Black women, people of color, and queer folks. This also included leading AI scholars such as Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. At the time, verification meant that your name and content became part of Twitter's core algorithm, because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed to the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some really critical moments.
I'm also very proud of the research I did at Stanford that came together as Black in Moderation. When I was working inside tech companies, I also noticed that as a black person working in trust and safety, no one was writing about the experiences I had every day, or was talking So when I left the industry and went back to academia, I decided to talk to black tech workers and get their stories out there. The research ended up being the first of its kind and spurred many new and important conversations about the experiences of tech employees with marginalized identities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a Black woman, navigating male-dominated spaces and spaces where I am othered has been part of my life's journey. Within tech and AI, I think the most challenging aspect has been what I call “compelled identity labor” in my research. I coined the term to describe the frequent situations in which employees with marginalized identities are expected to serve as the voices and/or representatives of entire communities who share their identities.
Because of the high stakes that come with the development of new technologies like AI, this labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about what issues I wanted to engage with and when.
What are the most pressing issues facing AI as it evolves?
According to investigative reporting, current generative AI models have gobbled up all the available data on the internet and will soon run out of data to train on. So the world's biggest AI companies are turning to synthetic data, or information generated by AI itself rather than by humans, to train their systems.
The thought sent me down a rabbit hole. So I recently wrote an op-ed arguing that I think this use of synthetic data as training data is one of the most important ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So training new systems with synthetic data would mean continually feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a feedback loop to hell.
Since I wrote the piece, Mark Zuckerberg has praised Meta's updated Llama 3 chatbot, powered in part by synthetic data, as the “most intelligent” generative AI product on the market.
What issues should AI users be aware of?
AI is such a ubiquitous part of our lives today, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for experiments with this new, untested technology. But AI users shouldn't feel powerless.
I'm arguing that technology advocates should come together and AI users should organize to demand that people stop AI. I think the Writers Guild of America has shown that with organization, collective action, and patience, people can come together to set meaningful boundaries for the use of AI technologies. I also believe that if we pause now to correct the mistakes of the past and create new ethical guidelines and regulations, AI should not threaten our future.
What is the best way to build AI responsibly?
My experience working inside tech companies showed me how much it matters who is in the room writing the policies, making the arguments, and making the decisions. My path also showed me that I developed the skills I needed to succeed in the technology industry by starting in journalism school. I'm now back at Columbia Journalism School, and I'm interested in training the next generation of people who will do technology accountability work and develop AI responsibly, both as watchdogs inside tech companies and from the outside.
I think [journalism] school uniquely trains people to interrogate information, seek truth, consider multiple viewpoints, make logical arguments, and separate fact and reality from opinion and misinformation. I believe that's a solid foundation for the people who will be responsible for writing the rules of what the next iterations of AI can and cannot do. And I'm looking forward to creating a more paved path for those who come next.
I also believe that in addition to skilled trust and safety workers, the AI industry needs external regulation. In the US, I argue this should come in the form of a new agency to regulate US technology companies with the power to set and enforce basic security and privacy standards. I'd also like to continue working to connect current and future regulators with former tech workers who can help those in power ask the right questions and develop new insights and practical solutions.