‘Everything is AI now’: Amid AI boom, agencies navigate data security, stability and fairness

The future of the generative AI hype cycle is up in the air, especially after a Goldman Sachs report called into question the true value of AI tools. Still, these tools and platforms, whether built on generative AI or glorified machine learning, have flooded the market. In response, agencies are navigating them with sandboxes (secure, isolated and controlled spaces for testing) as well as internal AI task forces and client contracts.

Although artificial intelligence itself has been around for a while, the industry’s generative AI arms race began last year with the promise that the technology would make marketers’ jobs easier and more efficient. But the jury is still out on that promise because generative AI is in its infancy and faces challenges with things like hallucination, bias and data security. (And that’s to say nothing of the energy costs associated with AI.) Plus, AI companies sit on reams of data, which can make hacking even more of a concern.

“There are so many advertising platforms out there. Everything is AI now. Is it really? Trying to stay ahead of it and thinking about it, is it really? It’s something we spend a lot of time on,” said Tim Lippa, global chief product officer at marketing agency Assembly.

At this point, generative AI has gone beyond large language models like OpenAI’s ChatGPT, touching everything from search functionality to image generation on Google and social media platforms. Agencies, too, have introduced their own AI offerings for internal use as well as for client-facing operations. For example, in April, Digitas introduced Digitas AI, its generative AI operating system for clients. (Find a comprehensive timeline of generative AI’s breakout year here.)

For all the hullabaloo surrounding generative AI, everything is still in testing mode, according to agency executives. It’s especially important to consider that some AI efforts are geared toward generating quick headlines or pleasing the C-suite by allaying concerns about missing the boat on generative AI.

“Some of these solutions are still struggling when it comes to [intellectual property] and copyrights and how they protect that, and if they can disclose the datasets they’re using or training on,” said Elav Horwitz, evp and global head of applied innovation at McCann Worldgroup. Recall, for example, that OpenAI’s chief technology officer Mira Murati made headlines in March when she declined to provide details about what data was used to train OpenAI’s text-to-video generator Sora.

According to Horwitz, a major problem with generative AI is hallucination. It’s something McCann has been discussing with OpenAI, she said, trying to find out what the tech company is doing to solve the problem because the issue keeps coming up.

McCann has enterprise-level contracts with major players in the space, including ChatGPT, Microsoft Copilot, Claude.ai and Perplexity AI, all of which have been vetted as secure environments by the agency’s legal, IT and finance teams. (Financial details of these deals were not disclosed.) Only after platforms are deemed secure are the tools offered to internal stakeholders. Before signing any contracts with AI partners, Horwitz added, the agency creates its own sandbox environment on its servers to keep sensitive information safe.

McCann is currently testing Adobe Custom Models, Adobe’s content production tool. “We can actually use our visual assets as part of it. We know it’s safe and secure because it’s trained on our own data. That’s when we know it can also be used commercially,” Horwitz said, adding that the data is the agency’s own, gathered through research or from client information.

It’s a similar story at Razorfish, where the agency has contracts with major platforms that sandbox its own and its clients’ data. According to Christina Lawrence, evp of consumer and content experience at Razorfish, an approved vendor list exists to ensure that the AI platforms the agency partners with have been trained on licensed or royalty-free assets.

“Or we need to make sure that confidential data that’s used for the tools isn’t used as training fodder for LLMs, which we all know they do,” she added.

Taking it beyond sandboxes, Razorfish has legal protections that require clients to sign off that they are aware generative AI is being used for client work. “You have to understand that we have multiple levels of checks because this is so new, and we want to be completely open and transparent,” Lawrence said.

Again, generative AI is still a new space for marketers. Tools like ChatGPT were only recently released to the general public, Lawrence said, with platforms learning as the technology evolves and changes. There has yet to be a societal consensus on how AI should be regulated, though lawmakers have been mulling it of late, concerned about privacy, transparency and copyright protection.

Until that consensus is reached, the onus is on brands and their agency partners to put in place guardrails and parameters to ensure data security and stability, and to navigate AI’s inherent biases.

“My favorite is always making sure that the images and what’s going to come out of the creative side have the right number of fingers and toes, and the core of all that stuff,” Lippa said. “Everyone has slapped the AI label on everything they do over the last year. In some cases, it really is. In some cases, it really isn’t.”
