Women in AI: Sandra Wachter, Professor of Data Ethics at Oxford


Image credit: TechCrunch

To give women academics and others focused on AI their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews focusing on the remarkable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.

Sandra Wachter is Professor and Senior Researcher in Data Ethics, AI, Robotics, Algorithms and Regulation at the Oxford Internet Institute. She is also a former fellow of The Alan Turing Institute, the UK’s national institute for data science and AI.

While at the Turing Institute, Wachter examined the ethical and legal aspects of data science, highlighting cases where opaque algorithms have produced racist and sexist outcomes. She also looked at ways to audit AI to combat misinformation and promote fairness.

Question and Answer

Briefly, how did you get your start in AI? What drew you to the field?

I can’t remember a time in my life when I didn’t think that innovation and technology have incredible potential to improve people’s lives. Yet I also know that technology can have devastating consequences for people’s lives. And so, not least because of my strong sense of justice, I was always looking for a way to guarantee that perfect middle ground: enabling innovation while protecting human rights.

I have always felt that law has a very important role to play. Law can be an enabling middle ground that both protects people and enables innovation. Law as a discipline came very naturally to me. I like challenges, and I want to understand how a system works, to see how I can game it, find loopholes and then close them.

AI is an incredibly transformative force. It is being applied in finance, employment, criminal justice, immigration, health and the arts. That can be good or bad, and whether it is good or bad is a matter of design and policy. I was naturally drawn to it because I felt that law could make a meaningful contribution to ensuring that innovation benefits as many people as possible.

What work are you most proud of (in the AI field)?

I think the work I’m most proud of right now is co-authored by Brent Mittelstadt (a philosopher), Chris Russell (a computer scientist) and myself as a lawyer.

Our latest work on bias and fairness, “The Unfairness of Fair Machine Learning,” revealed the detrimental effects of implementing many “group fairness” measures in practice. In particular, fairness is achieved by “leveling down,” or making everyone worse off, rather than helping disadvantaged groups. This approach is morally troubling in the context of EU and UK non-discrimination law. In a piece in Wired we discussed how harmful leveling down can be in practice: in health care, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary while also making the system less accurate overall.

For us this was terrifying, and something that people in tech, in policy and really everyone needs to know. We have in fact discussed our alarming findings with UK and EU regulators. I sincerely hope that this will give policymakers the leverage they need to implement new policies that prevent AI from causing such serious harm.

How do you navigate the challenges of the male-dominated tech industry, and by extension, the male-dominated AI industry?

Interestingly, I never saw technology as something that belonged to men. It was only when I started school that society told me there was no place for people like me in tech. I still remember when I was 10, the curriculum said that girls had to knit and sew while boys built birdhouses. I also wanted to build a birdhouse and requested to be transferred to the boys’ class, but was told by my teachers that “girls don’t do that.” I even went to the headmaster of the school to try to have the decision overturned, but unfortunately failed at the time.

It’s very hard to fight against a stereotype that says you shouldn’t be part of this community. I wish I could say that such things don’t happen anymore, but unfortunately that’s not true.

However, I have been incredibly fortunate to work with allies like Brent Mittelstadt and Chris Russell. I was blessed with incredible mentors, like my Ph.D. supervisor, and I have a growing network of like-minded people of all genders who are doing their best to improve the situation for everyone interested in tech.

What advice would you give to women aspiring to enter the AI field?

Above all, try to find like-minded people and allies. Finding your people and supporting each other is very important. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines to solve common problems we face. Accepted wisdom alone can’t solve new problems, so women and other groups that have historically faced barriers to entry in AI and other tech fields have a real opportunity to innovate and create something new.

What are the most pressing issues facing AI as it evolves?

I think there are many issues that require serious legal and policy consideration. To name a few, AI suffers from biased data that leads to discriminatory and unfair results. AI is inherently ambiguous and difficult to understand, yet it decides who gets a loan, who gets a job, who goes to jail and who is allowed to go to university.

Generative AI brings its own problems: it contributes to misinformation, enables fraud, violates data protection and intellectual property rights, puts people’s jobs at risk and contributes to climate change.

We have no time to lose. These problems needed solving yesterday.

What issues should AI users be aware of?

I think there’s a tendency to believe a certain narrative along the lines of “AI is here and here to stay, get on board or be left behind”. I think it’s important to think about who is pushing this narrative and who is benefiting from it. It is important to remember where the real power lies. The power does not lie with those who innovate, it lies with those who buy and implement AI.

So consumers and businesses must ask themselves, “Does this technology actually help me, and in what way?” Electric toothbrushes now come with “AI.” Who is this for? Who needs it? What is being improved here?

In other words, ask yourself what is broken and what needs to be fixed and whether AI can actually fix it.

This kind of thinking will shift power in the market, and hopefully steer innovation in a direction that focuses on utility for the community rather than just profit.

What is the best way to build AI responsibly?

Having laws that demand responsible AI. Here, too, a very unhelpful and false narrative prevails: that regulation stifles innovation. This is not true. Regulation stifles harmful innovation. Good laws promote and nurture ethical innovation. This is why we have safe cars, planes, trains and bridges. Society loses nothing if regulation prevents the creation of AI that violates human rights.

Traffic and safety regulations for vehicles were also once said to “stifle innovation” and “restrict autonomy.” These laws prevent people from driving without a license, keep cars without seat belts and airbags off the market, and penalize those who exceed the speed limit. Imagine what the safety record of the automotive industry would be like if we didn’t have laws to regulate vehicles and drivers. AI is currently at a similar inflection point, and heavy industry lobbying and political pressure mean it’s still unclear which path it will take.

How can investors best push for responsible AI?

I wrote a paper a few years ago called “How Fair AI Can Make Us Rich.” I strongly believe that AI that respects human rights and is unbiased, explainable and sustainable is not only legal, ethical and morally sound, but can also be profitable.

I really hope that investors understand that if they are pushing for responsible research and innovation, they will get better products. Bad data, bad algorithms and bad design choices lead to bad products. Even if I can’t convince you that you should do the moral thing because it’s the right thing to do, I hope you’ll see that the moral thing is also more profitable. Ethics should be seen as an investment, not an obstacle to be overcome.



