Women in AI: Miriam Vogel emphasizes the need for responsible AI.


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is publishing a series of interviews focused on remarkable women who've contributed to the AI revolution. We're publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Miriam Vogel is the CEO of EqualAI, a nonprofit created to reduce unconscious bias in AI and promote responsible AI governance. She also serves as chair of the recently launched National AI Advisory Committee, mandated by Congress to advise President Joe Biden and the White House on AI policy, and teaches technology law and policy at Georgetown University Law Center.

Vogel previously served as Associate Deputy Attorney General at the Department of Justice, advising the Attorney General and Deputy Attorney General on a broad range of legal, policy and operational issues. As a board member of the Responsible AI Institute and senior adviser to the Center for Democracy and Technology, Vogel has advised White House leadership on initiatives ranging from women's, economic, regulatory and food safety policy to matters of criminal justice.

Briefly, how did you get your start in AI? What drew you to the field?

I began my career working in government, initially as a Senate intern the summer before eleventh grade. I caught the policy bug and spent the next several summers working on the Hill and then at the White House. At the time I was focused on civil rights, which is not a traditional path to AI, but in retrospect it makes perfect sense.

After law school, my career progressed from an entertainment attorney specializing in intellectual property to civil rights and social impact work in the executive branch. I had the honor of leading the Equal Pay Task Force during my time at the White House, and while serving as Associate Deputy Attorney General under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.

I was asked to lead EqualAI based on my experience as a lawyer in tech and my background in policy addressing bias and systemic harms. I was drawn to the organization because I realized AI presented the next frontier for civil rights. Without vigilance, decades of progress could be undone.

I've always been excited about the possibilities created by innovation, and I still believe AI can offer amazing new opportunities for more populations to thrive — but only if we are careful, at this critical juncture, to ensure that as many people as possible can meaningfully participate in its creation and development.

How do you navigate the challenges of the male-dominated tech industry, and by extension, the male-dominated AI industry?

I fundamentally believe that we all have a role to play in ensuring that our AI is as effective, efficient and beneficial as possible. That means doing more to support women's voices in its development (women, by the way, account for more than 85% of consumer purchases in the U.S., so incorporating their interests and safety is a smart business move), as well as the voices of other underrepresented populations of varied ages, regions, races and nationalities who are not sufficiently participating.

As we work toward gender parity, we must ensure that more voices and perspectives are considered in order to develop AI that works for all users — not just AI that works for its developers.

What advice would you give to women aspiring to enter the AI field?

First, it's never too late to start. Nevertheless, I encourage all grandparents to try OpenAI's ChatGPT, Microsoft's Copilot or Google's Gemini. As we become an AI-driven economy, we will all need to be AI-literate. And it's exciting! We all have a role to play. Whether you're starting a career in AI or using AI to support your work, women should be trying out AI tools, seeing what these tools can and cannot do, seeing whether they work for them, and generally becoming aware of AI.

Second, responsible AI development requires more than just ethical computer scientists. Many people think that the AI field requires a computer science or another STEM degree when, in reality, AI needs the perspectives and skills of women and men from all backgrounds. Jump in! Your voice and perspective are needed. Your engagement is crucial.

What are the most pressing issues facing AI as it evolves?

First, we need greater AI literacy. At EqualAI, we are "AI net positive," meaning we believe AI will create unprecedented opportunities for our economy and improve our daily lives — but only if those opportunities are equally available to, and beneficial for, a greater cross-section of our population. We need our current workforce, the next generation, our grandparents — all of us — to be equipped with the knowledge and skills to benefit from AI.

Second, we must develop standardized measures and metrics for evaluating AI systems. Standardized evaluations will be crucial for building trust in our AI systems and allowing consumers, regulators and downstream users to understand the limits of the AI systems they engage with and determine whether those systems are worthy of our trust. Understanding who a system is built to serve and the envisioned use cases will help us answer the key question: who might it fail for?

What issues should AI users be aware of?

Artificial intelligence is just that: artificial. It was created by humans to "mimic" human cognition and empower humans in their pursuits. We must maintain an appropriate amount of skepticism and exercise due diligence when using this technology to ensure that we are placing our faith in systems that deserve our trust. AI can augment humanity, but it cannot replace it.

We should keep a clear-eyed view of the fact that AI consists of two main ingredients: algorithms (created by humans) and data (reflecting human conversations and interactions). As a result, AI reflects and adopts our human flaws. Bias and harms can be embedded throughout the AI lifecycle, whether through the algorithms written by humans or through the data that is a snapshot of human lives. However, every human touchpoint is an opportunity to identify and mitigate the potential harm.

Since one can only imagine as broadly as one's own experience allows, and AI programs are limited by the constructs under which they are built, the more people with varied perspectives and experiences on a team, the more likely they are to catch biases and other safety concerns embedded in their AI.

What is the best way to build AI responsibly?

Building AI that is worthy of our trust is all of our responsibility. We can't expect someone else to do it for us. We must start by asking three basic questions: (1) what is this AI system designed for, (2) what are the envisioned use cases and (3) who might it fail for? Even with these questions in mind, pitfalls are inevitable. To mitigate these risks, designers, developers and deployers must follow best practices.

At EqualAI, we promote good "AI hygiene," which involves planning your framework and ensuring accountability, standardized testing, documentation and routine auditing. We also recently published a guide to designing and operationalizing a responsible AI governance framework, which sets out the values, principles and framework for implementing AI responsibly in an organization. The paper serves as a resource for organizations of any size, sector or maturity that are adopting, developing, using and implementing AI systems with an internal and public commitment to do so responsibly.

How can investors best push for responsible AI?

Investors have a big role to play in ensuring our AI is safe, effective and responsible. Investors can ensure that companies receiving funding are aware of and thinking about mitigating potential risks and liabilities in their AI systems. Even asking the question, “How have you established AI governance practices?” is a meaningful first step to ensure better results.

This effort is not only in the public's interest. It is also in the best interest of investors, who will want to ensure that the companies they invest in and are associated with are not tied to bad headlines or burdened by litigation. Trust is one of the few non-negotiables for a company's success, and a commitment to responsible AI governance is the best way to build and sustain public trust. Robust, trustworthy AI makes good business sense.

