An incorrect allergen declaration can cause anaphylaxis. A misidentified ingredient can harm a child, a pregnant woman, or anyone with a severe allergy. We take this responsibility seriously.
The GNV platform is deliberately restrictive in how and where AI is used. We do not use AI for safety-critical food data.
Our Position on AI in Food Operations
AI systems are probabilistic. They hallucinate. Even a 0.01% error rate in allergen classification means real people — children, parents, vulnerable individuals — are put at risk. That is not acceptable.
What AI Does Not Do
For this reason, GNV does not use AI for:
— Allergen detection or classification
— Nutrition calculation or estimation
— Ingredient substitution recommendations
— Food safety decisions
— Compliance determinations
— Any output that could directly affect consumer health
All allergen data, nutrition values, and safety-critical information on the platform come from verified, authoritative sources — supplier specifications, laboratory analysis, and certified databases. Never from AI generation.
Where AI Is Actually Used
AI is used in a small number of strictly bounded, non-safety-critical presentation tasks:
Menu description writing — generating short descriptive text for menu boards (e.g. "Herb-flecked rice with charred peppers"). This text is presentation copy only and contains no allergen, nutrition, or safety information.
Food photography generation — creating representative dish images for digital menu displays where no photograph exists.
Photo enhancement — improving the quality of existing food photographs for menu board presentation.
That is the complete list. AI does not touch allergens, nutrition, ingredients, supplier data, verification, traceability, or any operational data that affects food safety.
Safety Controls on AI Outputs
Even for presentation tasks, we enforce strict controls:
Human approval required — no AI-generated text appears on a live menu without explicit staff approval
Ingredient fidelity checking — AI descriptions are validated against the actual recipe ingredients to prevent hallucinated content
Allergen contradiction detection — AI outputs are automatically checked for language that could contradict declared allergens (e.g. claiming "dairy-free" when milk is declared)
Side-by-side review — staff see original and AI-generated text together before approving
Audit trail — original text is preserved; all AI modifications are logged with who approved them and when
Feature-flagged — AI features can be disabled entirely per tenant
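To make the contradiction check concrete, the rule can be sketched as: scan the AI-written copy for "free-from" claims and flag any that conflict with the dish's declared allergens. The following is a minimal illustration only; the claim table and function name are hypothetical, not the platform's actual implementation.

```python
# Hypothetical sketch of allergen-contradiction detection: flag AI-written
# menu copy whose "free-from" claims conflict with declared allergens.
# The claim table and function name are illustrative, not GNV's real code.

ABSENCE_CLAIMS = {
    "dairy-free": "milk",
    "milk-free": "milk",
    "nut-free": "nuts",
    "gluten-free": "gluten",
    "egg-free": "eggs",
}

def find_contradictions(ai_text: str, declared_allergens: set[str]) -> list[str]:
    """Return absence claims in the text that contradict declared allergens."""
    text = ai_text.lower()
    return [
        claim
        for claim, allergen in ABSENCE_CLAIMS.items()
        if claim in text and allergen in declared_allergens
    ]

# Any flagged claim blocks the text from the approval queue until a human
# resolves it; a clean result still requires explicit staff approval.
issues = find_contradictions(
    "Creamy, dairy-free herb risotto with charred peppers",
    declared_allergens={"milk", "celery"},
)
```

In this example, `issues` would contain `"dairy-free"` because milk is declared, so the copy would be held back rather than published.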
Permanent Boundaries
Regardless of how AI technology evolves, the following boundaries are permanent:
— AI will never autonomously classify or modify allergen declarations
— AI will never generate or alter nutrition data
— AI will never make food safety determinations
— AI will never override verified data from authoritative sources
— AI will never publish content to live menus without human approval
These are not aspirational guidelines. They are architectural constraints enforced at the system level.
Verification Confidence
The platform distinguishes between levels of data confidence:
Verified — confirmed through authoritative source, laboratory analysis, or certified database
Supplier-provided — declared by the supplier with supporting specification
AI-generated (presentation only) — created by AI for display purposes, clearly marked, requires human approval
Safety-critical data (allergens, nutrition) must always be at the Verified or Supplier-provided level.
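The three confidence levels above form an ordering, which suggests modeling them as an ordered enum with a validation rule that rejects safety-critical fields below the Supplier-provided level. A sketch under assumed names, not the platform's actual schema:

```python
# Hypothetical model of the data-confidence levels described above.
# Enum values and the validation helper are illustrative assumptions.

from enum import IntEnum

class Confidence(IntEnum):
    AI_GENERATED = 1       # presentation only, clearly marked, needs approval
    SUPPLIER_PROVIDED = 2  # declared by supplier with supporting specification
    VERIFIED = 3           # authoritative source, lab analysis, certified database

SAFETY_CRITICAL_MINIMUM = Confidence.SUPPLIER_PROVIDED

def acceptable_for_safety_data(level: Confidence) -> bool:
    """Allergen and nutrition fields must meet the safety-critical minimum."""
    return level >= SAFETY_CRITICAL_MINIMUM

acceptable_for_safety_data(Confidence.VERIFIED)      # acceptable
acceptable_for_safety_data(Confidence.AI_GENERATED)  # rejected
```

Encoding the floor as a comparison on an ordered enum means any new, lower confidence tier added later is rejected for safety-critical data by default.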
Why This Matters
We operate in an environment where trust is not abstract — it is physiological. A person with a severe nut allergy trusts that the allergen information on their menu is correct. A parent trusts that their child's school meal is safe.
That trust cannot be delegated to a probabilistic system. It must be earned through verified data, authoritative sources, and human accountability. AI can help present information clearly. It cannot be trusted to determine what is safe to eat.