Market Pulse

Beware of the AI Lipstick: Distinguishing Real Innovation from Cosmetic Hype

12 November 2025 | AIMG
As the enterprise software market accelerates into the generative AI era, the line between true innovation and opportunistic rebranding has become dangerously blurred. “AI lipstick” – the superficial addition of large language model (LLM) features to legacy systems without meaningful architectural change – is now a defining characteristic of this cycle. AIMG’s multi-source analysis, including 37 in-depth interviews with senior executives, data scientists, and technology leaders across sectors, reveals a widening gap between the AI capabilities vendors claim and those they can demonstrably deliver. The findings show that while genuine AI-first builders are embedding autonomy, governance, and measurable performance into their platforms, a majority of “AI-enhanced” products remain thin veneers designed to exploit investor and customer enthusiasm rather than transform enterprise workflows.

1. From Transformation to Ornamentation

In the aftermath of the transformer revolution of 2017, when the paper “Attention Is All You Need” reshaped the trajectory of artificial intelligence, true innovators began re-engineering their products around deep learning architectures and scalable model infrastructure. Those early entrants – who invested before the commercial potential was evident – built durable competitive advantages.

By contrast, post-2022 entrants have increasingly opted for “cosmetic innovation”: embedding simple retrieval-augmented generation (RAG) chat interfaces or pre-trained model integrations into existing systems. As one Chief Data Officer of a $10B manufacturing firm told AIMG, “Most of what we see marketed as AI-first could be rebuilt by a small engineering team in 60 days – it’s an interface, not an innovation.”


2. The Rise of “AI Washing” and Its Enterprise Cost

AIMG defines AI washing as the exaggeration or misrepresentation of AI capabilities to create market differentiation where none exists. The practice spans sectors – from software vendors rebranding automation scripts as “agents” to service providers claiming AI-driven decisioning that in reality depends on manual human processes.

Regulators are already taking notice. The U.S. SEC has pursued enforcement actions against firms for false AI claims, and European authorities are embedding AI governance standards (including ISO/IEC 42001) into compliance regimes. The message is clear: misrepresenting AI is not just bad marketing – it’s a governance failure with legal and financial implications.


3. The “Copilot” Illusion

The most visible expression of AI lipstick is the copilot proliferation – where nearly every vendor now markets an “AI assistant” irrespective of genuine capability.

AIMG’s research shows that fewer than 10% of these copilots exhibit measurable autonomy or integration beyond basic text generation. In most cases, they are natural-language front ends calling off-the-shelf APIs. As one senior product manager at a global SaaS provider admitted in an interview, “We had to say we had an AI Copilot; otherwise, we’d be seen as behind. The feature was built in three months by connecting OpenAI’s API to our documentation search.”

This phenomenon echoes what AIMG calls the “Student-Level Project Problem”: when an AI feature can be replicated by a graduate software team using open APIs, it is unlikely to represent defensible intellectual property.
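To make the “Student-Level Project Problem” concrete, the essential structure of such a copilot can be sketched in a few lines. This is an illustrative assumption, not any vendor’s actual code: the function names (`search_docs`, `build_prompt`) and the corpus are hypothetical, retrieval is naive keyword overlap, and the hosted LLM call is deliberately left out, since the point is how little proprietary engineering surrounds it.

```python
# Hypothetical sketch of a "thin veneer" copilot: keyword retrieval over a
# documentation corpus plus prompt assembly for an off-the-shelf LLM API.

DOCS = [
    "To reset a password, open Settings and choose Security.",
    "Invoices are generated on the first business day of each month.",
    "API keys can be rotated from the developer console.",
]

def search_docs(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context and the user question into one prompt."""
    context = "\n".join(search_docs(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In production, this prompt string would simply be sent to a hosted LLM
# endpoint; everything "proprietary" here fits in roughly twenty lines.
prompt = build_prompt("How do I reset my password?", DOCS)
```

A real product would add authentication, embeddings, and UI polish, but none of that changes the defensibility question the article raises.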


4. Agents vs. Wrappers: Where True Differentiation Lies

In AIMG’s taxonomy, genuine AI systems move from assistive to autonomous – planning, reasoning, and executing tasks with governance controls and measurable outcomes. The distinction is not linguistic but architectural:

  • Lipstick AI adds LLM-based summarization or query features without altering data, workflow, or security architecture.

  • Substantive AI embeds model orchestration, policy enforcement, red-teaming, telemetry, and feedback loops into the product fabric.

A Chief Product Officer at a $5B software company summarized: “If it can’t plan, act, and be audited, it’s not an agent – it’s a chatbot in a tuxedo.”


5. Expert Voices: The Buyer’s Reality

AIMG’s 37 expert interviews reveal consistent patterns across industries:

  • Adoption gaps: Despite widespread deployment announcements, active usage of AI copilots remains below 5% in most enterprise contexts. One former Azure specialist reported “roughly 4% active Copilot usage across departments, largely due to friction in prompting and workflow fit.”

  • Security and compliance barriers: A Chief Information Officer in financial services described the “more painful process” as convincing risk and legal teams to approve production deployment – often requiring private cloud isolation, AES-256 encryption, and $10M liability coverage.

  • Governance as differentiator: Multiple CISOs emphasized that “AI innovation now depends on operational discipline.” Vendors without ISO/IEC 42001 alignment or NIST AI RMF crosswalks are increasingly excluded from procurement shortlists.

  • ROI bottlenecks: Senior engineering leaders at large technology firms confirmed that most AI pilots stall due to lack of measurable ROI. As one put it, “The conversation has shifted from ‘what can LLMs do’ to ‘what do they do in production without breaking compliance or budget.’”

These practitioner insights underscore that genuine value creation now depends less on model sophistication than on integration depth, governance maturity, and cost efficiency.


6. The Anatomy of Genuine AI Capability

AIMG’s due diligence framework identifies six “signals of substance” that distinguish authentic AI-first vendors from opportunistic adopters:

  1. Dedicated AI Research Lab or Centre of Excellence — staffed with identifiable researchers, operating budgets, and peer-reviewed outputs.

  2. Multi-Year Pre-2017 AI Roadmap — evidence of strategic commitment before the hype cycle.

  3. Patent and Publication Record — verified through USPTO/EPO filings or DOI/arXiv citations.

  4. ISO/IEC 42001 or Equivalent Governance Certification — demonstrating adherence to AI risk management standards.

  5. R&D Pipeline Transparency — with model telemetry, lineage tracking, and measurable outcomes.

  6. Production ROI Evidence — telemetry linking model outputs to business KPIs under regulated conditions.
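The six signals above lend themselves to a simple scorecard. The sketch below is a hedged illustration of how a buyer might operationalise them; the signal keys, equal weighting, and 50% pass threshold are assumptions for demonstration, not an AIMG-published rubric.

```python
# Illustrative due-diligence scorecard over AIMG's six "signals of substance".
# Signal names, equal weights, and the threshold are assumed, not prescribed.

SIGNALS = [
    "dedicated_ai_lab",          # identifiable researchers, budget, outputs
    "pre_2017_roadmap",          # strategic commitment before the hype cycle
    "patents_or_publications",   # USPTO/EPO filings or DOI/arXiv citations
    "iso_42001_certification",   # or equivalent governance certification
    "rd_pipeline_transparency",  # telemetry, lineage tracking, outcomes
    "production_roi_evidence",   # outputs linked to business KPIs
]

def substance_score(evidence: dict[str, bool]) -> float:
    """Fraction of the six signals backed by verifiable artefacts."""
    return sum(evidence.get(s, False) for s in SIGNALS) / len(SIGNALS)

def classify(evidence: dict[str, bool], threshold: float = 0.5) -> str:
    """Label a vendor by whether it clears the assumed threshold."""
    return "substantive" if substance_score(evidence) >= threshold else "lipstick"

vendor = {"dedicated_ai_lab": True,
          "iso_42001_certification": True,
          "production_roi_evidence": True}
```

In practice, a buyer would weight signals by sector risk and demand documentary evidence for each `True`, but even this naive version forces the artefacts-versus-demos question.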

In short, genuine innovators can produce artefacts; lipstick vendors can only produce demos.


7. Quantifying the Lipstick: AIMG’s Field Evidence

AIMG’s field assessments across 42 enterprise vendors found:

  • Only 31% could demonstrate red-teaming or adversarial testing beyond internal QA.

  • Just 18% had verifiable AI patents filed before 2020.

  • Fewer than 25% maintained formal AI governance frameworks aligned with ISO/IEC 42001 or NIST standards.

  • Only 11% could trace model outputs to business outcomes with production telemetry.

By contrast, vendors with sustained AI investment (typically those with in-house labs and proprietary data ecosystems) reported productivity uplifts of 30–40% and measurable reductions in processing times across call centres, software engineering, and compliance functions.


8. The New Moats: Where Real Value Forms

AIMG identifies six durable zones of defensible advantage in enterprise AI:

  1. Unique, Permissioned Data: Proprietary datasets and regulated feedback loops (e.g., in healthcare or finance) create switching costs.

  2. Embedded Distribution: Integration into daily workflows (e.g., Microsoft 365, ServiceNow, Snowflake) drives adoption and stickiness.

  3. Proprietary Reasoning IP: Domain-specific logic or editorially enhanced corpora reduce hallucination rates.

  4. Efficient Inference: On-device and rack-scale optimisation lowers total cost of ownership.

  5. Compliance Assets: ISO/IEC 42001 certifications and auditability are becoming de facto procurement requirements.

  6. Ecosystem Control: Standards like the Model Context Protocol (MCP) and native connector libraries generate platform lock-in.

The implication is clear: genuine AI differentiation now resides not in language model access but in governance, data control, and workflow integration.


9. From Hype to Discipline: The Procurement Imperative

Enterprise buyers must adopt the rigour of private-equity AI due diligence – demanding verifiable artefacts, not slideware. AIMG recommends:

  • Governance evidence: ISO/IEC 42001 certificates, AI risk registers, and conformity assessments.

  • Technical audits: red-teaming reports, latency benchmarks, and lineage mapping.

  • Financial validation: cost-per-task metrics and post-deployment ROI.

  • Security confirmation: proof of private tenancy, encryption, and compliance with regulatory frameworks (EU AI Act, DORA).
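The financial-validation check above reduces to simple arithmetic that buyers can demand from any vendor. The figures and function names in this sketch are hypothetical, chosen only to show the shape of a cost-per-task and ROI calculation.

```python
# Hedged sketch of the arithmetic behind "cost-per-task metrics and
# post-deployment ROI". All figures below are invented for illustration.

def cost_per_task(monthly_spend: float, tasks_completed: int) -> float:
    """Fully loaded AI spend divided by tasks the system actually completed."""
    if tasks_completed == 0:
        raise ValueError("no completed tasks: cost per task is undefined")
    return monthly_spend / tasks_completed

def roi(value_per_task: float, monthly_spend: float,
        tasks_completed: int) -> float:
    """Net return per unit of spend over the measurement period."""
    gross_value = value_per_task * tasks_completed
    return (gross_value - monthly_spend) / monthly_spend

# Example: $12,000/month of spend serving 8,000 tasks worth ~$2.50 each.
cpt = cost_per_task(12_000, 8_000)      # $1.50 per task
r = roi(2.50, 12_000, 8_000)            # positive net return
```

The discipline lies not in the formula but in the telemetry: the vendor must prove `tasks_completed` from production logs rather than pilot projections.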

As one Chief Architect at a global bank put it, “AI due diligence has become table stakes – it’s not about believing a demo, it’s about verifying a system.”


10. Conclusion: Innovation or Illusion?

AI is not a marketing function; it is an engineering discipline. The organisations that treat it as such – investing in R&D, governance, and integration – will shape the next decade of enterprise software. Those applying cosmetic enhancements to aging architectures will fade as quickly as the gloss they apply.

The AI revolution remains real, but so is the temptation to fake it. As enterprise buyers, investors, and policymakers confront an industry awash with AI claims, the mantra must be clear:

Don’t buy the lipstick. Buy the capability beneath.

Source: AIMG Research