This is what it looks like when AI eats the world.


Tech evangelists like to say that AI will eat the world, a reference to a famous line about software from the venture capitalist Marc Andreessen. Over the past few weeks, we've finally figured out what they mean.

This spring, tech companies made it clear that AI will be a defining feature of online life, whether people want it to be or not. First, Meta surprised users with an AI chatbot that lives in the search bar on Instagram and Facebook. It has since notified European users that their data are being used to train its AI. OpenAI released GPT-4o, billed as a newer, more powerful, and more conversational version of its large language model. (Its launch demo featured an AI voice named Sky that Scarlett Johansson alleged was based on her own voice without her permission, an allegation that OpenAI's CEO, Sam Altman, has denied.) Around the same time, Google rolled out "AI Overviews" in its search engine. OpenAI has struck deals with a number of media organizations (including The Atlantic) and platforms such as Reddit, signaling that AI products will soon be a primary source of information on the internet. (The Atlantic's agreement with OpenAI is a corporate partnership; The Atlantic's editorial division operates with complete independence from the business division.) Nvidia, which makes the microchips used to power AI applications, reported record earnings in late May, and its market capitalization soon exceeded $3 trillion. Summing up the moment, Nvidia CEO Jensen Huang got the rock-star treatment at an AI conference in Taipei this week and, uh, signed a woman's chest like a member of Mötley Crüe.

The pace of deployment is dizzying, even alarming, including to some of the people who understand the technology best. Earlier this week, current and former employees of OpenAI and Google published a letter declaring that "strong financial incentives" have led the industry to avoid meaningful oversight. The same incentives have apparently led companies to produce a lot of junk. Chatbot hardware products from companies such as Humane and Rabbit were pitched as attempts to displace the smartphone, but they barely shipped in a functional state. Google's rush to launch AI Overviews, in a bid to compete with Microsoft, Perplexity, and OpenAI, resulted in hilariously poor and potentially dangerous search results.

Technology companies, in other words, are racing to capture money and market share before their competitors do, and are making unforced errors as a result. But while the tech corporations may have built the hype train, others are happy to get on board. Leaders across industries, fearful of missing the next big thing, are writing checks and signing deals, perhaps without knowing what they're getting into, or whether they're unwittingly helping the very companies that will ultimately destroy them. The Washington Post's chief technology officer, Vineet Khosla, has reportedly told staff that the company plans to have "ubiquitous AI" within its newsroom, even though its value to journalism is, in my view, unproven and ornamental. Everywhere, we see the plane being assembled, haphazardly, in midair.

As an employee of one of the publications that recently signed an agreement with OpenAI, I have a little insight into what it's like when generative AI turns its hungry eyes on your little corner of an industry. How does it feel when AI eats the world? It feels like being stuck.

There is an element of these media partnerships that feels like a trap. Tech companies have trained their large language models with impunity, claiming that it's fair use to scrape internet content to develop their programs. This is the logical conclusion of Silicon Valley's classic "ask for forgiveness, not permission" growth strategy. The cynical way to read these deals is that media companies have two choices: take the money offered, or watch OpenAI scrape their data anyway. These situations resemble hostage negotiations more than they do mutually agreeable business partnerships, an observation that media executives have been making to one another privately and, occasionally, in public.

Publications can, of course, reject these deals. They have other options, but those options are, to use the technical term, not great. They can sue OpenAI and Microsoft for copyright infringement, as The New York Times has done, and hope to set a legal precedent under which extractive generative-AI companies must pay appropriately for any work used to train their models. Litigation is prohibitively expensive for many organizations, and if they lose, they're left with nothing but legal bills. Which leaves a third option: sit out the generative-AI revolution on principle, block the web-crawling bots of companies like OpenAI, and take a reasonable moral stand while your competitors cave and take the money. This third path requires betting that the generative-AI era is overhyped, that the Times wins its case, or that the government steps in to regulate this extractive business model, none of which is certain.
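For what it's worth, blocking those crawlers is mechanically simple: the major AI companies publish the user-agent names of their bots and say they honor the robots.txt convention, though compliance is entirely voluntary. An illustrative robots.txt for a publisher opting out might look like this:

```text
# Block OpenAI's crawler, which gathers data for model training
User-agent: GPTBot
Disallow: /

# Block Google's AI-training crawler token (ordinary Search indexing is unaffected)
User-agent: Google-Extended
Disallow: /

# Block Common Crawl, whose archives are widely used as training data
User-agent: CCBot
Disallow: /
```

Of course, robots.txt is an honor system: it keeps out only the crawlers that choose to obey it.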

The situation publishers face perfectly illustrates a broader dynamic: No one knows what to do. This is hardly surprising, given that generative AI is a technology that has so far been defined by ambiguity and inconsistency. Google users encountering AI Overviews for the first time may not understand what they're there for, or whether they're more useful than normal search results. There is also a gap between the tools that exist and the future we are being sold. The innovation curve, we are told, will be exponential. The paradigm, we are warned, is about to shift. In the meantime, regular people have little say in the matter: As the models get bigger and more powerful, all we can do is suspend our disbelief and shadowbox with visions of the promised future. Meanwhile, the ChatGPTs of the world are already here, foisted upon us by tech companies that insist these tools must be useful for something.

But there is an alternative framing of these media partnerships, one that suggests a moment of cautious opportunity for beleaguered media organizations. Publishers, after all, have been getting a raw deal from the algorithm providers for decades, allowing platforms like Google to index their sites in exchange for nothing more than referral traffic. Signing an agreement with OpenAI, under this logic, isn't capitulation; it's good business and a way to fight back against the platforms by setting ground rules: You have to pay us for our content, and if you don't, we're going to sue you.

Over the past week, after speaking with several executives at companies that have struck deals with OpenAI, I've come to understand that the tech company is less interested in publisher data for training its models and more interested in real-time access to news sites for OpenAI's forthcoming search tools. (I agreed to keep these executives anonymous so that they could speak freely about their companies' dealings.) Access to publisher-partner data helps the tech company in two ways: First, it allows OpenAI to cite third-party organizations when a user asks a question about a sensitive issue, which means that OpenAI can claim it is not making editorial decisions in its products. Second, if the company has ambitions to dethrone Google as the dominant search engine, it needs up-to-date information.

Here, I'm told, is where media organizations may have leverage in ongoing negotiations: OpenAI will, in theory, continue to need up-to-date news information. Other search-engine and AI companies that want to compete will need that information too, only now there is a precedent that they must pay for it. That could create a steady licensing-revenue stream for publishers. This is not unprecedented: Record labels have fought platforms like YouTube over copyright issues and found ways to get paid for their content; then again, news outlets aren't selling Taylor Swift songs. (Spokespeople for both OpenAI and The Atlantic confirmed to me that The Atlantic's contract, which runs for two years, allows the tech company to train its products on The Atlantic's data, but that when the agreement ends, unless it is renewed, OpenAI will not be permitted to use The Atlantic's data for training new foundation models.)

Zoom out, and even this optimistic line of thinking is fraught. Do we really want to live in a world where generative-AI companies have more control over the flow of information online? The transition from search engines to chatbots will be extremely disruptive. Google is imperfect, its product arguably degraded, but it has provided a foundational business model for creative work online by allowing the best content to reach audiences. Perhaps the search paradigm needs to change, and it's natural for the web page to become a relic. Still, the magnitude of the disruption, and tech companies' insistence that everyone get on board, suggests that the AI developers aren't much concerned with finding a sustainable model for creative work to flourish. As Judith Donath and Bruce Schneier recently wrote in this publication, AI "threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences." Follow this logic and a stark question quickly becomes apparent: What incentive do people have to create work if they can't earn a living doing it?

If you feel like your brain is starting to swell inside your skull, you're getting the full experience of the AI revolution arriving in your industry. That's what it actually feels like. It's chaotic. It's early. You're told it's an exciting moment, full of opportunity, even if what that means in practice isn't entirely clear.

No one knows what's going to happen next. Generative-AI companies have created tools that, while popular and nominally useful, are a dim shadow of their ultimate goal of building human-level intelligence. And yet they're extremely well funded, aggressive, and able to capitalize on a breathless hype cycle to charge into any industry with the express purpose of amassing power and making themselves major players. Will the technological gains of the moment be worth the disruption, or will the hype slowly deflate, leaving the internet even more broken than it already is? Nearly two years into the latest wave of AI hype, what's clear is that these companies don't need to build Skynet to be destructive.

AI is eating the world. Technology's champions mean it as a triumphant, exciting phrase. But that is not the only way to interpret it. One could read it menacingly, as a swift, rapacious war of conquest. Lately, I've been hearing it with a note of resignation, the kind that accompanies a shrug and upturned hands. What happens to the raw material, the food, after it is consumed and digested, its nutrients stripped out? We don't say it out loud, but we know what it becomes.
