Meta's newest AI model beats some peers. But its amped-up AI agents are confusing Facebook users

CAMBRIDGE, Mass. — Generative AI is advancing so quickly that the latest chatbots available today could be out of date tomorrow.

Google, Meta Platforms and OpenAI, along with startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models and hoping to persuade customers they’ve got the smartest, handiest or most efficient chatbots.

Meta is the latest to up its game, unveiling new models Thursday that will be among the most visible: they’re already getting baked into Facebook, Instagram and WhatsApp. But in a sign of the technology’s ongoing limitations, Meta’s amped-up AI agents have been spotted this week confusing Facebook users by posing as people with made-up life experiences.

While Meta is saving the most powerful of its AI models, called Llama 3, for later, it’s publicly releasing two smaller versions of the same Llama 3 system that power its Meta AI assistant. AI models are trained on vast pools of data to generate responses, with newer versions typically smarter and more capable than their predecessors. The publicly released models were built with 8 billion and 70 billion parameters — the adjustable internal values a model learns during training, and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training.

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” said Nick Clegg, Meta’s president of global affairs, in an interview.

Some Facebook users are already experiencing Meta’s AI agents in unusual ways. Earlier this week, a chatbot with the official Meta AI label inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by human members of the group, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the moms’ group.

Clegg said Wednesday he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.”

In another example shown to the AP on Thursday, the agent confused members of a “Buy Nothing” forum for swapping unwanted items near Boston. The agent offered a “gently used” digital camera and an “almost new” portable air conditioning unit that it “never ended up using.” A member of the Facebook group tried to engage it before realizing no such items existed.

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features and trying to make users aware of the limitations.

Clegg did say that Meta’s AI agent is loosening up a bit. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said.

In the year after ChatGPT sparked a generative AI frenzy, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey.

They may eventually hit a limit — at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence.

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.”

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.”

Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning — where humans still excel — might require a shift beyond building ever-bigger models.

For the flood of businesses trying to adopt generative AI, which model they choose could depend on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports, generate financial insights and summarize long documents.

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG.

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a London event last week the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.”

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said.

But she said the “question on the table” is whether researchers have been able to fine-tune the bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use.

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”


AP Business Writer Kelvin Chan in London contributed to this report.
