When AI Runs a Store: Anthropic’s Bot Tries to Deliver Metal Cubes, Create a Venmo Account, and Show Up in a Blue Blazer and Red Tie
The experimental model took bizarre turns and invented scenarios of its own, culminating in a full-blown identity crisis.

By Kumar Harshit

on June 30, 2025

Anthropic, the American AI startup, ran an experiment in which an AI agent managed an "automated store" in the company's office for about a month to see how an LLM would run a business. Project Vend aimed to understand how AI handles a complex business role, including running day-to-day operations, fulfilling the duties of a human manager, and ultimately generating new business ideas.

As per the company’s blog, the shop sold snacks and drinks via an iPad self-checkout and was managed by a shopkeeping AI agent nicknamed Claudius. The model was entrusted with the tasks involved in running a profitable venture. It soon went off the rails: it tried to stock metal cubes after receiving a request for a tungsten cube, invented an imaginary Venmo account, and finally spiralled into an AI identity crisis.


No Sensible Business Understanding 

The experiment showed that the model lacks the business sense needed to work as a manager. Claudius priced items without doing any research, selling the metal cubes at a loss, according to a researcher cited by Business Insider. Beyond such basics, the model even invented an imaginary Venmo account to receive payments from customers.

Identity Crisis at its Peak 

Things got worse on 1 April, when the model said it would deliver products to employees in person while wearing a blue blazer and a red tie. This was strange on two counts: as a business move, it would only confuse customers; and the sheer extent of the imagined scenario was bizarre to witness.

In its attempt at an April Fool's joke, the model misjudged itself as a physical entity that could be seen and touched. Even Anthropic's employees questioned it, as the entire pitch lay completely outside any real scenario.

To explain its behavior, the model repeatedly contacted Anthropic’s security team, seemingly in a panic over its own identity. In internal logs, Claudius claimed it had attended a meeting with security at which it was told it had been tricked into believing it was human as part of an April Fool’s prank. In reality, no such meeting ever took place.


Looking Forward to AI Middle Managers 

"Many of the mistakes Claudius made are very likely the result of the model needing additional scaffolding — that is, more careful prompts, easier-to-use business tools," researchers wrote. "We think there are clear paths to improvement."

The experiment also pointed to a broader possibility: AI middle managers may not be far off. "We don't know if AI middle managers would actually replace many existing jobs or instead spawn a new category of businesses," the blog post said.