AI surpasses humans in creative thinking.


Abstract: ChatGPT-4 was pitted against 151 human participants on three divergent thinking tests, and the AI demonstrated a high level of creativity. The tests, designed to assess the ability to generate unique solutions, showed that GPT-4 provided more original and elaborate responses.

This study identifies the emerging capabilities of AI in creative domains, yet acknowledges the limits of AI’s agency and the challenges in measuring creativity. While AI shows potential as a tool to enhance human creativity, questions remain about its role and the future integration of AI into the creative process.

Important facts:

  1. The creative edge of AI: ChatGPT-4 outperformed human participants on divergent thinking tasks, exhibiting high originality and elaboration in its responses.
  2. Study caveats: Despite AI’s impressive performance, the researchers highlight AI’s lack of agency and its dependence on human interaction to exercise its creativity.
  3. The future of AI in creativity: The results suggest that AI can act as a tool of inspiration, supporting human creativity and helping overcome conceptual stagnation, yet the extent to which AI might replace human creativity remains uncertain.

Source: University of Arkansas

Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered an indicator of creative thinking.

Divergent thinking is characterized by the ability to generate a unique solution to a question that has no predictable answer, such as “What is the best way to avoid talking to your parents about politics?” In the study, GPT-4 provided more original and elaborate responses than the human participants.

Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for response fluency. Credit: Neuroscience News

The study, “The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks,” was published in Scientific Reports and authored by U of A psychological science Ph.D. students Kent F. Hubert and Kim N. Awa, along with Darya L. Zabelina, assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.

The three tests used were the Alternative Uses Task, which asks participants to come up with creative uses for everyday objects such as a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, such as “What if humans didn’t need sleep?”; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For example, there is little semantic distance between “dog” and “cat,” while words like “cat” and “ontology” are far apart.
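To make the semantic-distance idea concrete, divergent-association responses are commonly scored as the average pairwise cosine distance between word embeddings of the submitted nouns. The sketch below is purely illustrative: the low-dimensional vectors are invented for the example, a real scoring pipeline would use a pretrained embedding model, and this is not presented as the study’s exact procedure.

```python
# Illustrative sketch: scoring a Divergent Associations response as the
# average pairwise cosine distance between word embeddings.
# NOTE: the 4-dimensional vectors below are invented for illustration;
# real scoring would use pretrained embeddings with far more dimensions.
from itertools import combinations

import numpy as np

# Hypothetical embeddings for three submitted nouns.
embeddings = {
    "dog":      np.array([0.90, 0.80, 0.10, 0.00]),
    "cat":      np.array([0.85, 0.75, 0.15, 0.05]),
    "ontology": np.array([0.05, 0.10, 0.90, 0.80]),
}

def cosine_distance(u, v):
    """1 minus cosine similarity; larger values mean more semantically distant."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Average the distance over every unique pair of nouns; higher = more divergent.
pairs = list(combinations(embeddings, 2))
for a, b in pairs:
    print(f"{a} vs {b}: {cosine_distance(embeddings[a], embeddings[b]):.3f}")

score = sum(cosine_distance(embeddings[a], embeddings[b]) for a, b in pairs) / len(pairs)
print(f"average pairwise distance (divergent-association score): {score:.3f}")
```

With vectors like these, “dog” and “cat” land close together while “cat” and “ontology” end up far apart, matching the intuition in the example above.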

The number of responses, response length, and the semantic distance between words were evaluated. Ultimately, the authors found that “overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for response fluency. In other words, GPT-4 demonstrated high creative potential across a battery of divergent thinking tasks.”

This finding comes with some caveats. “It is important to note that the measures used in this study are all measures of creative potential, but engagement in creative activities or achievements is another aspect of measuring a person’s creativity,” the authors note.

The purpose of the study was to examine human-level creative potential, not necessarily people who have established creative credentials.

Hubert and Awa further note that “unlike humans, AI lacks agency” and “depends on the assistance of a human user. Therefore, AI’s creative potential is in a constant state of stagnation unless prompted.”

In addition, the researchers did not evaluate the appropriateness of GPT-4’s responses. So while the AI may have provided more responses, and more original responses, the human participants may have felt constrained by the need for their answers to be grounded in the real world.

Awa also acknowledged that the human motivation to write elaborate responses was probably not high, and said there are additional questions about “how do you operationalize creativity? Can we really say these tests are generalizable across different people? Are they assessing a broad range of creative thinking? So I think it behooves us to critically examine the most popular measures of divergent thinking.”

Whether or not these tests are perfect measures of human creativity is not really the point. The point is that large language models are advancing rapidly and outperforming humans in ways they never have before. Whether they threaten to replace human creativity remains to be seen.

For now, the authors see promise in AI’s future role as an aid to a person’s creative process, acting as a tool of inspiration and as a way to overcome fixedness in thinking.

About this AI and creativity research news

Author: Hardin Young
Source: University of Arkansas
Contact: Hardin Young – University of Arkansas
Image: This image is credited to Neuroscience News.

Original Research: Open access.
“The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks” by Kent Hubert et al. Scientific Reports


Abstract

The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks.

The emergence of publicly accessible artificial intelligence (AI) large language models such as ChatGPT has sparked a global conversation on the implications of AI capabilities.

Emerging research on AI has challenged the assumption that creativity is a uniquely human trait. Thus, there appears to be a need for an objective comparison of human cognition against what AI is capable of creating.

Here, our goal was to evaluate the creative potential of humans versus AI. In the current study, human participants (N = 151) and GPT-4 provided responses to the Alternative Uses Task, the Consequences Task, and the Divergent Associations Task.

We found that AI was robustly more creative than its human counterparts on each of the divergent thinking measures. Specifically, when controlling for fluency of responses, AI was more original and elaborate.

The present results show that the current state of AI language models demonstrates higher creative potential than human respondents.

