Inside Barnard's pyramid approach to AI literacy

Barnard College's workshops, covering topics from neurodiversity to academic integrity to environmental justice, gained a new offering in the spring of 2023: a session called “Who's Afraid of ChatGPT?”

The workshop debuted shortly after the release of the generative AI tool, which upended classrooms and raised concerns among faculty and students alike. An internal Barnard survey found that more than half of faculty and more than a quarter of students had never used generative artificial intelligence (AI).

Melanie Hibbert, Barnard's director of instructional media and academic technology services, knew that needed to change.

“It just became clear to me that a conceptual framework is needed for how we're approaching this, because it can feel [like] there are all these separate things going on,” she said.

The college turned to the pyramid-style literacy framework first used by the University of Hong Kong and adapted it for Barnard students, faculty and staff. Rather than diving headfirst into AI, as some institutions have done, the pyramid approach follows a gradual incline into the technology, ensuring a solid foundation before moving on to the next step.

Barnard's current focus is on levels one and two at the base of the pyramid, building a solid understanding of AI.

Barnard College's pyramid approach to AI literacy. (Image: Melanie Hibbert, Barnard College)

Level one is as simple as understanding the definitions of “artificial intelligence” and “machine learning” and recognizing the advantages and disadvantages of the technology. Level two builds on this, moving beyond the basics: trying out generative AI tools, refining AI prompts, and recognizing bias, hallucinations and inaccurate answers.

To tackle these first two levels, the college hosts workshops and sessions that let students and faculty tinker with AI, and it brings in guest speakers to educate the campus community.

“I think we have to be proactive and have a judgment-free zone to examine this and talk about it,” Hibbert said, noting that the ultimate goal is to move up the pyramid. “In a couple of years, we can say almost everyone [at our institution] has used some AI tool, has some familiarity with it.

“Then we can be leaders with AI,” she said, “being creative about how to use it in liberal arts institutions.”

That step will come at levels three and four of the pyramid. At level three, students and faculty are encouraged to look at AI tools in a broader context, including ethical considerations. Level four is the pinnacle of AI adoption, with students and faculty building their own AI-powered software and devising new uses for the technology.

While Barnard has its own pyramid, many institutions are still working to create policies and implement AI frameworks.

Lev Gonick, CIO at Arizona State University, advises institutions not to zero in on AI but instead to treat it like any other topic.

“I think the policy conversation, we have to get over that,” Gonick said. “Engaging faculty is something we all can and should do.”

“Whether it's questions about shortcutting or cheating, it's called academic integrity,” he said last month during the Digital Universities conference, co-hosted by Inside Higher Ed. “We don't need a new policy for AI when we systematically look at all the ‘gotchas' out there.”

Hal Abelson, a professor of computer science and engineering at the Massachusetts Institute of Technology, said institutions should handle AI frameworks much as, for example, English or history departments handle consistency in their policies.

“You want something coherent and consistent within the university,” he said. “Take English composition: some [institutions] would say the rules are left up to individual faculty, and some would say there is a central policy.”

He added that universities' policies will naturally differ but should reflect each institution's guiding principles. MIT, for example, doesn't have an “ease into it” policy for AI; instead, it encourages faculty, students and staff to experiment with the technology in a straightforward, trial-and-error way, staying on top of it by jumping straight to level four of Barnard's pyramid.

“We put a lot of emphasis on creating with AI, but that's where MIT is,” he said. “It's about making things. Other places have a very different view of it.”

A systematic review of AI frameworks, published in June 2024 on ScienceDirect, looked at 47 articles that focused on enhancing AI literacy through frameworks, concepts, implementation and evaluation.

The researchers behind the review, two from George Washington University and one from King Abdulaziz University in Saudi Arabia, found six key concepts of AI literacy: recognize; know and understand; use and apply; evaluate; create; and navigate ethically.

The review found that those six pillars, whether or not they end up arranged in a pyramid, can serve as a blueprint for institutions still grappling with their frameworks. According to the paper, the pillars “can be used to analyze and design AI literacy methods and assessment tools.”

But the most important thing, Hibbert and Abelson agreed, is simply to have an AI policy at all.

Even if a university isn't currently using AI, “at some point, it's a powerful tool that will be used,” Abelson said. “Something has to happen.”
