AI-generated recipes are everywhere, and they have problems.


The modern cook lives in a golden age of recipes. Anyone who wants to make a particular dish, use a particular ingredient or cook with a particular diet in mind has thousands of recipes at their fingertips.

Now, though, they may encounter something that wasn’t created by a professional recipe developer or even an enthusiastic home cook. Recipes created by artificial intelligence are increasingly popping up – and following them, or trying to follow them, can lead to unexpected results, and not necessarily in a good way.

Last month, grocery delivery service Instacart faced a backlash over its AI-generated recipes after a report in tech publication 404 Media. Some included ingredients that didn’t exist (what, people wondered, was “monito sauce”?) and stomach-churning images (a suspicious “hot dog stir-fry” depicted the inside of a sausage that more closely resembled a tomato).

AI and the future of our food

Apps are promising to make AI an indispensable kitchen tool. And food brands like Heinz and Avocados from Mexico are trying to integrate the technology into their websites.

Curious cooks are also tinkering with AI on their own, using systems like OpenAI’s ChatGPT and Google’s Gemini. University of Pennsylvania professor Ethan Mollick, known for his viral experiments with AI systems, said that writing new recipes, along with creating children’s stories and birthday cards, is a common way people start using the technology. “This is both an example of the power of AI and one of its worst use cases,” he said. “That doesn’t mean it doesn’t work. But you’re more likely to get something weird and goofy.”

Experts say that’s because AI isn’t as smart as we expect it to be, especially when it comes to food. You think you’re asking it a question and it’s answering you, said Emily Bender, a computational linguist at the University of Washington who studies AI. But what it is actually doing is spitting out sequences of words that resemble what it has previously “read.”

Thanks to years of blogs and social media posts, the Internet is littered with recipe webpages and food photos, allowing AI models to produce imitations that seem relatively accurate. But because they’re designed to match patterns in words and pixels, they can’t capture the multisensory experiences that make good food great. Lacking noses and taste buds, they can’t even check their own work.

Bender noted that an AI system would often have encountered, for example, chicken and beef in similar contexts. “So when a language model is drawing on the parts of its training data that relate to syntax, it doesn’t have a good way to distinguish between the two,” she said. “You talk about beef being rare or well done, but not chicken, yet it can come up with that.”

Undercooked chicken isn’t the only potential risk. “It’s also very capable of suggesting things that might be toxic, or harmful, or interact with people’s medications, and there’s no way to flag or address that,” said Margaret Mitchell, a computer scientist at Hugging Face, a prominent open-source AI startup. Last year, a New Zealand supermarket made headlines when its chatbot, which offered customers recipes using ingredients they had on hand, produced “dishes” including toxic chlorine gas, “bleach-infused rice surprise” and turpentine-flavored French toast. And AI-generated books on mushroom foraging sold on Amazon were found to contain potentially deadly advice.

While some AI-generated recipes have obvious and potentially fatal problems, others have subtler ones. Many of the recipes on the Instacart website, for example, do not list ingredients in the order they are used, a convention most well-crafted recipes follow to make them easier to follow. They don’t always offer readers visual cues for doneness or timing at various steps, potentially confusing less experienced cooks. And serving sizes can be off: a single serving of “cowboy steak with herb butter,” a two-inch-thick rib-eye, would probably feed at least a couple of cowboys.

One of the problems with these bot-generated recipes is that the people who need the most help, namely novice cooks, are the least able to spot their shortcomings.

Things get even weirder when it comes to food photos. AI generators invent dishes that are physically impossible, with disappearing ingredients and gravity-defying shapes. Food delivery apps DoorDash and Grubhub have used the technology to sometimes laughable effect, with reports of images showing two-tailed shrimp, or pizzas described as “pies” illustrated with images of sweet dessert pies. And that cowboy steak from Instacart? It looked as though some Frankenstein butcher had assembled it, with mismatched grill marks.

Just as with political news, there has always been a hierarchy when it comes to recipes. Mainstream, legacy publications (including The Washington Post) as well as well-known specialty sources, such as Bon Appetit and Cook’s Illustrated, have recipes that are well-tested and carefully crafted. Cooks can also find tested recipes on a handful of reputable sites like SeriousEats or Food52. Cookbook authors are often the go-to source for competently created offerings. Then there are legions of food blogs and influencers, some offering high-quality recipes and others whose content is dubious at best.

The Internet flattens them all, though; a casual user may not know the difference between a reliable source and a slapdash amateur. Anyone who has cooked from a bad recipe may soon discover that just because a human made it doesn’t mean it’s any good. Couldn’t AI at least do as well?

How ChatGPT can help you plan meals, even with dietary restrictions

David Eastwell, a British physicist with a side hustle as a stock photographer, started messing around with generative AI tools during pandemic lockdowns and quickly realized they produced photos nearly as good as the ones he shot himself. To see what more the technology could do, he set out to create an entire website, and chose cooking as his theme just for fun. “I’m actually a pretty average cook,” Eastwell said.

Eastwell said he runs the site as an experiment; he is interested in the technology’s potential as well as its limitations. AI can make workable recipes, he said, but it can also produce bad ones, and readers should be just as careful as they are with recipes from flesh-and-blood people. “You can go online and look for a baked potato recipe, and you might find one from Jane’s Cookery School, and you might not know who Jane is, but because a person made it, there’s an implicit trust written into it,” he said. “It can still be rubbish.”

Mollick evaluates AI-generated recipes and other creations by what he calls the “best available human” standard: AI probably won’t beat the creations of a trained recipe developer, but it may well beat what an amateur cook could come up with, however lovingly made. What can an AI trained on the world’s recipes produce? When Wired magazine ran a test in January in which bar patrons were given two cocktails based on their favorite flavors (one from a bartender, the other from an AI), half the patrons couldn’t tell which was human-made. Some even preferred the AI mixologist’s drink.

Some consumers are turning to AI for highly tailored recipes, either for special diets or for ways to combine specific ingredients they already have. DishGen, a subscription service costing $8 a month, offers AI-generated recipes and promotes itself as a way to reduce food waste. Still, experts say consumers should exercise caution: DishGen’s recipes come with a disclaimer saying the company “has not verified it for accuracy or safety” and that people should use their “best judgment” when making AI-generated dishes.

Heather Joan Fogarty, a food writer and recipe developer who teaches journalism at the University of Southern California, believes that bots will never replace humans when it comes to telling other humans how to prepare food. For one, she noted, there are often factors she considers that a robot won’t. Scaling down a restaurant recipe that serves 40 to suit the home cook isn’t just a matter of simple division, she said. It requires accounting for the differences between commercial and home equipment, and it demands an intuitive understanding of ingredients, something she believes AI simply cannot replicate.

More importantly, she said, there’s something ineffable about the creativity and sensory awareness that go into a recipe. “The question is why do we still buy cookbooks or look to publications like The Washington Post for food content? And the answer is the human factor,” Fogarty said. “You can’t reduce context. There’s a real art in a recipe headnote or a recipe note that I don’t see AI being able to produce.”

AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it

Mitchell sees more novelty than utility in AI-generated recipes. Companies are rushing to adopt the technology, offering features on their websites or developing apps that promise to make AI as much a part of people’s kitchens as their ovens, she noted, when existing tools could instead search pre-existing, human-created recipes and surface the most suitable one rather than generating a new one. “Now there’s a hammer, and so people are saying everything is a nail,” she said. “There are many other ways to generate automated recipes that have much, much better potential. Generative AI should not be your starting point.”
