Follow these 5 principles to make AI more inclusive for everyone.

Opinions expressed by contributors are their own.

From creating fun pictures of the pope to algorithms that sort job applications and ease the burden on hiring managers, artificial intelligence programs have taken the public consciousness and the business world by storm. However, it is important not to overlook the potentially deep-rooted ethical issues that come with them.

These breakthrough tech tools generate content from existing data and other content, but if those sources are partly the product of racial or gender bias, for example, AI will likely replicate it. For those of us who want to live in a world where diversity, equity and inclusion (DEI) is at the forefront of emerging technology, we should all be concerned with how AI systems produce content and what impact their output has on society.

So, whether you’re a developer, an entrepreneur at an AI startup or just a concerned citizen like me, consider the following principles, which can be integrated into AI apps and programs to ensure they are more ethical and produce more equitable results.

Related: What would it take to create a truly ethical AI? These 3 tips may help.

1. Create user-centered design

User-centered design ensures that the program you’re developing accounts for everyone who will use it. This may mean features such as voice interaction and screen-reader support to assist visually impaired users. Speech recognition models, meanwhile, can be made more inclusive of different kinds of voices, such as women’s voices or accents from around the world.

Simply put, developers must pay close attention to who their AI systems are serving, making it a point to think beyond the group of engineers who created them. This is especially important if they and/or the company’s entrepreneurs hope to scale the product globally.
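To make this concrete, here is a minimal sketch (my own illustration, not from any particular product) of how a team might check whether a speech recognition model serves different speaker groups equally well. The transcribe function, file names and group labels are hypothetical placeholders; jiwer is an open-source Python library for computing word error rate.

```python
# A minimal sketch of auditing ASR accuracy across speaker groups.
# Assumptions: `transcribe` stands in for your model's inference call,
# and the clips and group labels below are illustrative placeholders.
from collections import defaultdict

import jiwer  # open-source library for word error rate (WER)


def transcribe(audio_path: str) -> str:
    # Hypothetical stand-in: swap in your ASR model's inference here.
    # Returning a fixed string just lets the sketch run end to end.
    return "turn on the lights"


# Each sample: (audio file, reference transcript, speaker group label)
samples = [
    ("clip_001.wav", "turn on the lights", "us_english_female"),
    ("clip_002.wav", "turn on the lights", "nigerian_english_male"),
]

error_rates = defaultdict(list)
for audio, reference, group in samples:
    hypothesis = transcribe(audio)
    error_rates[group].append(jiwer.wer(reference, hypothesis))

# A large gap between groups signals the model underserves some users.
for group, rates in error_rates.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2%}")
```

The exact metric matters less than the habit: measure how the system performs for different groups of users before the product ships, not after.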

2. Build a diverse team of evaluators and decision makers

The team behind an AI app or program matters not only during its creation but also when it comes to evaluation and decision-making. A 2023 report published by New York University’s AI Now Institute describes a lack of diversity at multiple levels of AI development, including the remarkable statistics that at least 80% of AI professors are men and that less than 20% of AI researchers at the world’s top tech companies are women. Without proper checks, balances and representation in development, we run the serious risk of feeding AI programs historical and/or biased data that perpetuates unfair attitudes toward certain groups.

3. Audit datasets and create accountability structures

It is not necessarily anyone’s direct fault if outdated data perpetuates biases, but it is someone’s fault if that data isn’t audited regularly. To ensure that AI is producing the highest-quality output with DEI in mind, developers need to carefully review and analyze the information their systems consume. They should be asking: How old is it? Where does it come from? What’s in it? Is it ethical and accurate today? Perhaps most importantly, the datasets must ensure that AI sustains a positive future for DEI, not a negative one inherited from the past.
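To show what such an audit might look like in practice, here is a hedged Python sketch, assuming a simple tabular dataset with a demographic label and a collection date. The file name, column names and thresholds are illustrative assumptions, not a standard.

```python
# A hedged audit sketch. Assumptions: tabular training data with a
# demographic column and a collection date; all names are illustrative.
import pandas as pd

df = pd.read_csv("training_data.csv", parse_dates=["collected_at"])

# What's in it, and where does it come from? Check group representation.
representation = df["demographic_group"].value_counts(normalize=True)
print(representation)

# How old is it? Flag records older than a chosen cutoff for re-review.
cutoff = pd.Timestamp.now() - pd.DateOffset(years=5)
stale = df[df["collected_at"] < cutoff]
print(f"{len(stale)} of {len(df)} records predate the 5-year cutoff")

# Accountability: fail loudly if any group falls below a floor you set.
FLOOR = 0.05
low = representation[representation < FLOOR]
assert low.empty, f"Underrepresented groups: {list(low.index)}"
```

The assertion at the end is an accountability structure in miniature: if representation drops below the floor your team has agreed on, the pipeline stops and someone has to act.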

Related: These entrepreneurs are taking on bias in artificial intelligence.

4. Collect and validate diverse data

If, after auditing the information an AI program is using, you realize there are inconsistencies, inaccuracies and/or biases, work to gather better content. This is easier said than done: collecting data takes months, even years, but it’s worth the effort.

To fuel this process, if you’re an entrepreneur running an AI startup and have the resources for research and development, create projects in which team members generate new data that represents diverse voices, faces and attributes. This will result in more representative source material for apps and programs that we can all benefit from, essentially creating a brighter future that portrays different people as multidimensional rather than one-sided or otherwise simplistic.
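If you do invest in new data collection, a quick before-and-after check like the sketch below (again, the file and column names are illustrative assumptions) can confirm that the new material actually improves balance rather than just adding volume.

```python
# A hedged before/after balance check for newly collected data.
# File and column names are illustrative assumptions.
import pandas as pd

existing = pd.read_csv("existing_data.csv")
collected = pd.read_csv("newly_collected_data.csv")
combined = pd.concat([existing, collected], ignore_index=True)

before = existing["demographic_group"].value_counts(normalize=True)
after = combined["demographic_group"].value_counts(normalize=True)

# The new data earns its place if it narrows the gap between groups.
print("Largest-smallest group share before:", round(before.max() - before.min(), 3))
print("Largest-smallest group share after: ", round(after.max() - after.min(), 3))
```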

Related: Artificial intelligence can be racist, sexist and scary. Here are 5 ways you can combat this in your enterprise.

5. Engage in AI ethics training on bias and inclusion

As a DEI consultant and proud creator of the LinkedIn course, Navigating AI through an Intersectional DEI Lens, I have learned the power and positive impact of centering DEI in AI development.

If you or your team are struggling to put together a relevant to-do list for developers, reviewers and others, I recommend hosting targeted ethics training; an online course that gives you real-time feedback can help solve problems as they arise.

Sometimes you just need a trainer to walk you through the process and address each issue one by one, creating a lasting result: more inclusive, diverse and ethical AI data and programs.

Related: 6 Traits You Need to Succeed in an AI-Enhanced Workplace

Developers, entrepreneurs and others who care about reducing bias in AI should use our collective energy to educate ourselves, build diverse teams of reviewers who can check and audit data, and focus on designs that make programs more inclusive and accessible. The result will be a landscape that represents a wider range of users, as well as better content.
