Texas Legislature explores artificial intelligence rules – NBC 5 Dallas-Fort Worth


Since its emergence in the mid-2000s, social media has changed politics forever. Artificial intelligence could have similar far-reaching effects. Researchers and lawmakers are now studying the latest AI, worried about the technology running wild online without safety or ethical regulations.

NBC 5 traveled to Austin to understand its future implications and spoke with the new chair of the Texas House Select Committee on Artificial Intelligence about the coming months.

In short, the future is here, ready or not. Fake videos and photos of Tom Cruise, former President Obama, former President Trump and current President Biden are already circulating on the Internet. Manipulated video and audio of these leaders can confuse voters with convincing clips of things they never said.

“Without doing a lot of work, almost anyone can craft a piece of political propaganda that will persuade voters,” said Zelly Martin, a PhD researcher at the University of Texas’ Propaganda Research Lab.

Martin is one of about thirty people working with the lab in Austin to collect, analyze and publicize examples of AI influencing elections.

Texas House Speaker Dade Phelan has created a select committee on artificial intelligence to come up with changes to the law when the next legislative session begins in January 2025. That session will come after the 2024 presidential election, in which AI could play a key role in the outcome.

Speaker Phelan appointed Rep. Giovanni Capriglione, R – Southlake, to chair the committee. Capriglione also co-chairs a new state advisory board on artificial intelligence with a fellow North Texas lawmaker, Sen. Tan Parker, R-Flower Mound.

Rep. Capriglione told NBC 5 that making a fake video of him, a Republican, saying he hates former President Donald Trump and is voting for President Joe Biden, then spreading it online through algorithms, is not currently against the law. He believes it should be.

“Anytime there’s a big technological advance, there’s a risk with it. Obviously with elections and other things we’re concerned with, deep fakes, changing people’s audio, and just creating new tweets and so on. This is a potential threat not only to the candidate but also to the voters themselves,” Capriglione said.

The chair says the new committee will propose civil fines, criminal penalties and guidelines to the full legislature next year. Its first interim report is due in May.

“Whether it’s social media companies or email providers, they need to know that there are things that should not be allowed to be transmitted or distributed,” Capriglione said.

The UT Austin researchers say the issue should have bipartisan support.

“The bottom line is that no one wants to be manipulated, right? So it’s like, we can find common ground,” said lab director Dr. Inga Trauthig.

Trauthig and the team led by Dr. Sam Woolley are trying to avoid what happened twenty years ago with the emergence of social media, which was largely taken for granted until its world-changing effects had already become reality.

“There was a lot of excitement about its democratic potential and not enough thought that it could be used by authoritarian regimes, by people who wanted to manipulate public opinion or were working to suppress freedom of expression.”

In short, not enough thought about the downsides. “Which are big,” Woolley said.

His team is monitoring how big companies like OpenAI, Microsoft and Meta roll out their technology, hoping to publicize abuses and hold bad actors accountable.

“I personally think it’s important to catch the responsible actors who are involved,” Trauthig said, “just by sitting down and explaining how information is being manipulated, on which platforms and with which tools. Just providing that information is really helpful.”

The beginnings of this idea are already in the works. Last month the United States Department of Commerce released a report calling on companies, local, state and national lawmakers to “expose problems and potential risks, and hold responsible entities accountable.”

Federal Commerce Department staff hope to provide guidance on best practices, require people to disclose when they use AI, and maintain legal liability so people can sue bad actors.

“You can’t steal people’s identities in this country. You can’t defame people. Those laws are there. In some ways, people think you need to reinvent the wheel and make these new laws. I think we have to hold people accountable to the same laws we’ve always had, but online,” Woolley said.

A recent example that haunts Woolley and Trauthig runs through North Texas. New Hampshire’s attorney general named an Arlington man and his company as being behind AI-generated robocalls impersonating President Biden that falsely urged voters to stay home during that state’s primary election.

In a statement to NBC 5, the New Hampshire Attorney General’s office declined to comment because the investigation is ongoing.

“Some of this innovation is happening in our state for better or worse. So we need to think about what that means for both Texas and American democracy,” Woolley said.

This technology can also greatly improve the way government works. The Texas Advisory Council on AI will focus on how AI can be used to improve services in the state. At the first council meeting, the Texas Department of Transportation told members about a pilot program that uses AI to monitor traffic cameras and automatically dispatch emergency crews when an accident is detected. The department has also reduced the time to generate receipts from weeks to seconds.

“I think we all want this technology to be successful. It’s incredibly innovative. We want it to be in Texas but at the same time we want to minimize those risks,” Capriglione said.

The overall objective of the Council and the Committee is to encourage positive usage and make laws to punish bad actors.

“We have an opportunity now to look at that and start making those rules now,” he said.

