Careers are one AI adaptation away from extinction.


Many people who experiment with generative artificial intelligence come to two immediate conclusions. First: It’s amazing, because it writes and reasons better than many of my colleagues! Second: It’s scary, because it writes and reasons better than many of my colleagues! There has been one saving grace for the fretters: AI messes up.

How badly? Hilariously. People who couldn’t write a short story to save their lives ask ChatGPT to review their latest novel, and the bot is happy to oblige. My work is somewhat protected, they think. But if they’re smart, a little voice might add: So far.

This inability to trust the results of queries (even though they are delivered with what reads like high confidence) has been at the heart of several panel discussions I’ve been on about AI and journalism. For now, the industry is very cautious with AI. But an obvious question hangs in the air: What if they solve the problem? After all, if there’s one thing that applies to all technology besides Microsoft updates, it’s that it keeps getting better.

A photo taken on February 26 shows the logo of the ChatGPT application developed by American artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen…
KIRILL KUDRYAVTSEV/AFP via Getty Images

I raised this concern with ChatGPT, as anyone would, and this is what I received: “I believe that as AI advances, its ability to verify the results of inquiries will improve significantly. … Currently, AI systems cannot independently verify the accuracy of the information they produce. Future generations are expected to … cross-reference information, assess the trustworthiness of sources, and distinguish reliable sources from unreliable ones.”

“These developments could lead to AI systems that … provide more accurate and reliable answers,” it added. “Techniques such as fact-checking algorithms, knowledge graph integration, and probabilistic reasoning can enable AI systems to more effectively evaluate the credibility of information.”

Of course, “as an AI language model,” as it tirelessly reminds you, the bot has no agenda beyond pure analysis (though that may be a fine line, as anyone who has ever read “news analysis” can confirm). But the sheer detail of that last sentence certainly seemed to contain a flash of brilliance. Perhaps a twinkle in the eye of the paymaster.

The claim is convincing, but, as we know, it could be wrong. ChatGPT may have found a satirical article about probabilistic reasoning in The Onion. So I consulted Alec Eisenberg, a successful London-based serial entrepreneur whose startup Scroll.AI develops tools relevant to this discussion.

“As we navigate the way forward, trust is becoming a big concern for AI,” confirms Alec, who doesn’t deny being a human with opinions. “For journalists and other content creators, AI can be great if you use it right, and potentially catastrophic if you use it wrong. I agree, and I think future iterations will address this in deeper ways.”

If it’s true, if it happens (and it seems highly likely) that the output of AI queries becomes rock solid and unassailable, the results will be dramatic. It would mean the little voice was right.

Imagine a scenario where a generative AI system not only produces articles, reports, or marketing materials, but also includes footnotes, hyperlinks, and references to back up its claims. Imagine if editors started realizing that the chances of the AI making things up, or getting the results wrong, are slim to none, short of deliberate sabotage by a rogue reporter, which has occurred here and there, even in calmer days.

Consider the consequences if key stakeholders conclude that, from news articles and press releases to marketing campaigns and consulting reports, AI-generated content could become indistinguishable from human-generated work in terms of accuracy and credibility.

Basically, that would mean AI could write this column instead of me, and there would be no reason to suspect it was bullshit. I could argue that it wouldn’t involve the same degree of humor, “lived experience,” and overall je ne sais quoi. But would you believe me? I am a person with an agenda.

(Or am I?)

This would mark a seismic shift for content-creation industries such as journalism, marketing, advertising, and strategic consulting. And it will.

Distressed businesses will have the choice of hiring consultants from McKinsey & Company and spending months dealing with a squad of arrogant teenagers, or uploading a company profile, annual report, mission statement, and financials and waiting about three seconds. Then asking for an adjustment, and waiting about three seconds more, until it’s right.

It will be the inflection point that forces mass adoption of AI-generated content, which could lead to mass layoffs, job displacement, and the erosion of traditional roles across a wide array of industries.

So, how much time do we have?

OpenAI, the creator of ChatGPT, has abandoned the social-conscience caution of its initial messaging and marches fearlessly into the night.

At a recent event it introduced a series of improvements, including allowing each user to create their own customized version of the bot, which they can also sell in a dedicated app store. And the most hyped announcement, rightly so, concerns the issue of accuracy. A new tool, called GPT-4 Turbo, is expected to be “much more accurate in long contexts,” according to CEO Sam Altman, as it can take in far more text to use as instructions: up to 300 pages, compared with a few thousand characters today.

Dealing with this will be a major social challenge. It may finally flip the narrative on the idea that Luddites are meant to be mocked. Yes, humanity was not only sustained but strengthened by the Industrial Revolution and the automation of the textile industry, which discredited the original Luddites. But the challenge facing the new Luddites is of a different order.

We will need to find useful ways to retrain the workers of the future and deploy them in roles and fields that generative AI cannot replicate. By and large, these will be areas of endeavor that have to do with true personality, true inspiration, and unique brilliance. That could mean huge demand for some forms of manual labor. Politics and prostitution also seem safe. Maybe even poetry. That one is not so clear.

Journalists may argue that while AI-powered content may be cost-effective and scalable, it lacks the human touch, investigative rigor, and ethical judgment that define quality journalism. If media organizations prioritize AI-generated content over original reporting, there is a risk of a decline in journalistic standards and a loss of public trust. In the end, it will come down to what the public wants. Current indications are that only a minority is willing to pay top dollar for true quality.

In public relations and marketing, one can argue that without human creativity, insight, and emotional intelligence, AI-generated communications may fail to resonate with target audiences, undermining the effectiveness of marketing efforts and weakening brand identity. Here, too, the evidence will be found in the market.

A growing industry could be AI regulation, especially if the new Luddites gain political traction fast enough. They may argue not only that ethical guidelines, regulations, and accountability mechanisms are urgently needed to ensure that AI-generated content maintains integrity, consumer protection, and public safety, but also that its use should be limited through hyper-regulation and punitive taxation.

Either way, society may be facing the end of tenure. Yet instead of letting this issue dominate the conversation, we move on to discussions about critical race theory. So perhaps this is the point: humanity will get what it deserves.

Dan Perry, a retired computer programmer, is the former Cairo-based Middle East editor and London-based Europe/Africa editor for the Associated Press. Follow him at danperry.substack.com.

The views expressed in this article are the author’s own.