Laws, school bans and the Sam Altman drama: Major breakthroughs in AI in 2023

The artificial intelligence (AI) industry began 2023 with schools and universities grappling with students using OpenAI’s ChatGPT to help them with homework and essay writing.

Less than a week into the year, New York City Public Schools banned ChatGPT, released just weeks earlier to great fanfare, in a move that would set the stage for much of the debate around generative AI in 2023.

As the buzz grew around Microsoft-backed ChatGPT and rivals such as Google’s Bard AI, Baidu’s Ernie Bot and Meta’s LLaMA, so did questions about how to handle a powerful new technology that had become accessible to the masses overnight.

While platforms for AI-generated images, music, videos and computer code, such as Stability AI’s Stable Diffusion or OpenAI’s DALL-E, opened up exciting new possibilities, they also fueled concerns about misinformation, targeted harassment and copyright infringement.

In March, a group of more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, called for a pause on the development of more advanced AI, citing its “profound risks to society and humanity”.

While no pause took place, governments and regulatory authorities began rolling out new laws and regulations to set guardrails for the development and use of AI.

While many issues surrounding AI remain unresolved heading into the new year, 2023 is likely to be remembered as an important milestone in the field’s history.

Drama at OpenAI

After ChatGPT amassed more than 100 million users in 2023, developer OpenAI returned to the headlines in November when its board of directors abruptly fired CEO Sam Altman, alleging that he was “not consistently candid in his communications” with the board.

Although the Silicon Valley startup did not specify the reasons for Altman’s dismissal, his departure was attributed to an ideological struggle within the company between safety and commercial concerns.

Altman’s dismissal triggered a highly public, five-day drama in which OpenAI staff threatened a mass walkout and Altman was briefly hired by Microsoft, until he was reinstated and the board replaced.

While OpenAI has tried to move on from the drama, the questions raised during the upheaval apply to the industry at large, including how to weigh the drive for profit and new product launches against fears that AI could become too powerful too quickly, or fall into the wrong hands.

Sam Altman was summarily fired from OpenAI. [File: Lucy Nicholson/Reuters]

In a survey of 305 developers, policymakers and academics conducted by the Pew Research Center in July, 79 percent of respondents said they were either more concerned than excited about the future of AI, or equally concerned and excited.

Despite AI’s potential to transform fields from medicine to education and mass communication, respondents expressed concern about risks such as mass surveillance, government and police harassment, job displacement and social isolation.

Sean McGregor, founder of the Responsible AI Collaborative, said 2023 revealed the hopes and fears that exist around generative AI, as well as deep philosophical divisions within the field.

“The most hopeful thing is that the spotlight is now shining on the social judgments of technologists, although it is concerning that many of my colleagues in the tech sector view such attention negatively,” McGregor told Al Jazeera.

“I still feel pretty positive, but these next few decades will be challenging because the conversation we have about AI safety is a fancy technical version of old social challenges,” he said.

Legislating the future

In December, EU policymakers agreed on broad legislation to regulate the future of AI, capping a year of efforts by national governments and international bodies such as the United Nations and the G7.

Key concerns include the sources of information used to train AI algorithms, much of which is scraped from the internet without regard for privacy, bias, accuracy or copyright.

The EU’s draft legislation requires developers to disclose their training data and comply with the bloc’s rules, with restrictions on certain types of use and a channel for user complaints.

Similar legislative efforts are underway in the US, where President Joe Biden issued a sweeping executive order on AI standards in October, and in the UK, which hosted an AI Safety Summit in November that brought together 27 countries and industry stakeholders.

China has also taken steps to regulate the future of AI, issuing interim rules that require developers to submit to a “security assessment” before releasing products to the public.

The guidelines also restrict AI training data and ban content that is seen to “advocate terrorism”, “undermine social stability”, “overthrow the socialist system” or “harm the country’s image”.

Globally, 2023 also saw the first interim international agreement on AI safety, signed by 20 countries, including the US, the UK, Germany, Italy, Poland, Estonia, the Czech Republic, Singapore, Nigeria, Israel and Chile.

AI and the future of work

Questions about the future of AI also extend to the private sector, where its use has already prompted class-action lawsuits in the US from authors, artists and news outlets alleging copyright infringement.

Fears of jobs being replaced by AI were a driving factor behind the months-long strikes in Hollywood by the Screen Actors Guild and the Writers Guild of America.

In March, Goldman Sachs predicted that generative AI could replace 300 million jobs through automation and affect at least two-thirds of current jobs in Europe and the US, making work more productive but also more automated.

Others have sought to temper the gloomier predictions.

In August, the International Labour Organization, the United Nations’ labour agency, said that generative AI is more likely to augment most jobs than replace them, listing clerical work as the occupation most at risk.

The year of the ‘deepfake’?

The year 2024 will be a major test for generative AI, as new apps hit the market and new legislation takes effect against a backdrop of global political upheaval.

Over the next 12 months, more than two billion people are set to vote in elections in a record 40 countries, including geopolitical hotspots such as the United States, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.

While online disinformation campaigns are already a regular feature of many election cycles, AI-generated content is expected to make matters worse as false information becomes increasingly difficult to distinguish from the real thing and easier to produce at scale.

AI-generated content, including “deepfake” images, has already been used to stoke anger and confusion in conflict zones such as Ukraine and Gaza, and has featured in hotly contested election races such as the US presidential election.

Meta told advertisers last month that it would block political ads on Facebook and Instagram created with generative AI, while YouTube announced that it would require creators to label realistic AI-generated content.
