Ethics & Regulation

AI 'Killed' a Michelin-Starred Restaurant That Was Still Open

A Cheltenham rooftop bar published an AI-drafted blog post mourning the 'closure' of a nearby Michelin-starred restaurant — which was very much still open. The story is small. The lesson for every business publishing AI content is not.

By TreffikAI Editorial · 4 min read
[Image: Empty restaurant interior with set tables]

A small story from Cheltenham this week is, in miniature, the entire enterprise AI risk register. A rooftop bar published an AI-assisted blog post that announced the closure of a nearby restaurant — a Michelin-starred restaurant that has been open continuously since 2009 and is still very much trading. The post was real, the AI was real, the damage potential was real. The closure was not.

What happened

The Nook on Five, a rooftop venue in Cheltenham, ran a blog post effectively mourning the "demise" of nearby restaurant Lumiere, calling it one of their favourite spots in town and saying they were sad it had closed. According to the BBC, Lumiere's head chef Jon Howe — who runs the restaurant with his wife Helen — found the post while routinely searching Google for new reviews.

Helen Howe's reaction is worth quoting: "We've been here 17 years, and we're very much alive and kicking."

She also pinpointed what likely went wrong. Lumiere had recently been closed for a week while the couple were away. The most plausible reading is that an AI tool, prompted to write something about local restaurants from existing web content, conflated a temporary closure with a permanent one — and nobody caught the difference before publication.

The Nook's response — a textbook post-incident playbook

The Nook on Five's apology landed quickly and is genuinely worth studying as a small but well-executed crisis-comms move:

  • They issued a public apology the same day across social channels.
  • They amended the blog post and explicitly stated the original information was "entirely false" on a follow-up graphic.
  • They reached out directly to Lumiere to help fix the situation.
  • They acknowledged the root cause: "the error arose from an AI-generated draft produced from our website which was not appropriately reviewed before publication."
  • They committed to reviewing internal processes so it doesn't recur.

Helen Howe was generous in her summary: "AI is so useful when used correctly, but it's not foolproof. You have to put that due diligence in and you have to read what it produces for you and you have to take responsibility for that."

Why this matters far beyond one restaurant

It is tempting to read this as a charming local anecdote. It is actually a clean, low-stakes preview of a problem that is going to land on a lot of marketing teams much harder.

A few things to notice:

  • The harm was asymmetric. Lumiere is a small, family-run business. A blog post falsely declaring them closed could meaningfully cut bookings before the correction propagates. Multiply that pattern across thousands of AI-drafted posts and you get a steady drumbeat of small reputational hits to small businesses — exactly the kind of thing search engines and review sites then re-index.
  • The failure mode was banal. No prompt injection, no jailbreak, no edge case. The model just summarised stale or misleading source material and confidently stated it as current fact. This is the most common AI failure in production content workflows, and it is very easy to ship by accident.
  • The fix is editorial, not technical. Better models will reduce the rate, but not the category. Anyone publishing AI-drafted content needs a human review step that is required, not aspirational, with explicit fact-check obligations on any claim about a third party — especially closures, prices, ownership, or legal status.

A short checklist for anyone running AI-drafted content

Treat this story as a free fire-drill. If your content pipeline includes any AI drafting, ask:

  1. Is there a named human reviewer for every AI-drafted piece before it is published, with sign-off recorded?
  2. Are claims about third parties — competitors, neighbours, public figures, businesses — flagged for explicit verification against primary sources, not just the open web?
  3. Do you have a fast retract-and-correct workflow that you have rehearsed at least once, including a path to notify the affected party directly?
  4. Are your prompts and source material dated? "Restaurant X closed" is true once and then false forever.
  5. Do you log which AI tool produced which draft, so post-mortems are not guesswork?
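Checks like these are easy to leave as good intentions; they are more reliable as a hard gate in the publishing pipeline. The sketch below is a minimal, hypothetical illustration of the checklist above — every name (`Draft`, `publish_gate`, the field names) is invented for this example, not taken from any real CMS or tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Draft:
    """A hypothetical AI-drafted article awaiting publication."""
    slug: str
    body: str
    tool: str = ""                  # which AI tool produced the draft (item 5)
    source_dates: list = field(default_factory=list)  # dates of source material (item 4)
    reviewer: str = ""              # named human reviewer with recorded sign-off (item 1)
    third_party_claims_verified: bool = False         # item 2

def publish_gate(draft: Draft, max_source_age_days: int = 90) -> list:
    """Return a list of blocking issues; an empty list means OK to publish."""
    issues = []
    if not draft.reviewer:
        issues.append("no named reviewer signed off")
    if not draft.third_party_claims_verified:
        issues.append("third-party claims not verified against primary sources")
    if not draft.source_dates:
        issues.append("source material is undated")
    elif (date.today() - min(draft.source_dates)).days > max_source_age_days:
        issues.append(f"source material older than {max_source_age_days} days")
    if not draft.tool:
        issues.append("no record of which AI tool produced the draft")
    return issues

# A draft with a named reviewer, verified claims, fresh sources and tool
# provenance passes the gate; a bare draft is blocked with specific reasons.
ok = Draft(slug="rooftop-roundup", body="...", tool="model-x",
           source_dates=[date.today()], reviewer="J. Editor",
           third_party_claims_verified=True)
bad = Draft(slug="local-news", body="...")
print(publish_gate(ok))   # []
print(publish_gate(bad))  # four blocking issues
```

The point is not this exact code but the shape: publication should fail closed, with the checklist encoded as data the pipeline can refuse on, rather than relying on everyone remembering to read the draft.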

The takeaway

The Cheltenham story is the gentlest possible version of a hard truth: at scale, AI-drafted content will sometimes confidently misstate facts about real people and real businesses, and the legal, reputational and regulatory liability lands squarely on the publisher. The Nook on Five did almost everything right after the fact — and even then, the affected restaurant still described the mistake as "really serious", with the potential to cause "huge amounts of damage".

The cheap lesson is to put the human back in the loop. The expensive one is to wait until your version of this story makes the news.

(Source: BBC News. Photo: Unsplash, licence.)

Tags: #ai-mistakes #media #governance #adoption