Safeguards to protect residents from artificial intelligence

We’ve heard about the promise of artificial intelligence in areas like medical research, streamlining systems in finance and insurance, and many others.

But if it’s as powerful and fast as experts say, what’s to stop people from using it for illegal things, like stealing your identity or your life savings, or commandeering the electrical grid or air traffic control systems?

Lawmakers are now trying to create some safeguards. NBC Connecticut’s Mike Hydeck spoke with state Senator Tony Hwang about what’s in the works.

Mike Hydeck: We appreciate you coming in. So when technology changes as fast as AI has, literally in just the last 12 months or so, how do you begin to regulate it? We’ve seen leaders in Washington struggle with this, and they admit as much.

Tony Hwang: Well, technology has always advanced in our society, but AI has been profound in its impact, not just in terms of how we live now, but in how it’s going to affect us in the future. And part of that impact is not just practical but promotional; the hyperbole about what it could become creates a lot of fear and uncertainty. Simply put, AI is high-level data points fed into an algorithm on a supercomputer. But the fact remains that its growth and its effects are profound. It is truly changing and influencing how we live as a society.

Mike Hydeck: It could also streamline state systems, whether it’s getting your driver’s license or applying for state benefits; it seems like it could make them faster and more efficient. But how do you draft a law that says the government, or the leaders of our government, can only use AI in certain ways? How do you even draft a law like that?

Tony Hwang: Well, there’s the sensitivity of how we regulate around First Amendment rights and around the technology companies that have had such a profound impact on how we manage our technology. But I think the other part is the social effect we’re experiencing. One of the biggest concerns is deepfakes. A deepfake is a manipulation, a theft, a false representation that uses technology to misrepresent, and sometimes to terrorize, and it affects the way we live. We’ve talked about how technology and social media have affected our world. This is another step, where voices can be imitated, images can be manipulated, and audio and video can be altered. That is misrepresentation, if not outright fraud.

Mike Hydeck: On that point, we have an example. Take a listen.

AI-generated voice: This technology can be used to make it appear as if someone is saying things they never said. Just like right now. I never actually said any of that. This is a deepfake of my voice.

Mike Hydeck: So a staff member made that of you. How quickly did it come together? It must have been unsettling for you.

Tony Hwang: Mike, this happened after a public hearing on this legislation. A staff member took about three to five minutes of my audio that is in the public domain, typed in a script, and within 15 minutes he had created it without any input from me. And Mike, I have to tell you, when I first heard it, it was a terrifying feeling of invasion of privacy, identity theft, and ultimately misrepresentation that could be used to harmful effect. We are talking about a society in which information can spread without any control, and if something is wrong or misrepresented like this, there is no check on it. We’ve seen this issue with President Biden in New Hampshire, with the Taylor Swift deepfake pornography, with our high school students being intimidated and misrepresented. It has serious social and societal implications. But more profoundly, it could have consequences come election time.

Mike Hydeck: So where do we go from here? Are you working with Democrats? What’s going on at the statehouse to try to put in place a watchdog, for lack of a better term, or some kind of regulation?

Tony Hwang: I think on a bipartisan basis in the House and the Senate, along with the administration, we know that something needs to be done. There are two task forces I’ve served on with Senator James Maroney: the Connecticut task force on artificial intelligence and a multi-state task force. We are looking at a wide array of technologies. But what the bill did make clear is that First Amendment rights are to be protected.

Mike Hydeck: On those First Amendment rights, could this work almost like copyright? You know how it works if somebody steals a song; if they’re stealing your voice or stealing your likeness, I wonder if that could work.

Tony Hwang: And that’s one of the questions, because the technology companies, which have provided a lot of advice and a lot of technical awareness, are saying, ‘Hey, we need to be protected from unsavory and unscrupulous people bringing legal proceedings against us.’ They need some degree of safety. So the legislation, and I was careful in my commentary on this, has to weigh that protection against the right to sue, the right of people who feel their freedom or their personal image has been violated by fraudulent actions to litigate. So there is a balance. The technology itself is one of the challenges we face, but how it affects people’s lives is something all stakeholders still need to understand and act upon.

Mike Hydeck: Here’s a different term. It’s used for athletes: name, image and likeness is how athletes get paid now. But could it also be a legal shield to protect your identity?

Tony Hwang: But the problem is that with social media, our society has willingly given up some of that right, and that escalates into these kinds of privacy violations, breaches of trust, and fraudulent, dangerous behavior. When you jeopardize and damage people’s reputations, it creates not only the economic and legal effects I mentioned, but also profound psychological effects from which it may be too late to recover. I am very sensitive to that, and I worry about it.
