FIU Professor Joel Carnevale Coauthors Study in Academy of Management Discoveries

A new study from Florida International University reveals that undisclosed AI usage significantly damages brand reputation and consumer trust. Published in the Academy of Management Discoveries, the research indicates a global shift where “human-made” provenance is becoming a premium asset, forcing multinational corporations to rethink transparency strategies in the creative economy.

We are standing at a peculiar inflection point in the global market. For the last decade, the narrative around Artificial Intelligence was one of unbridled efficiency—faster code, cheaper graphics, limitless scale. But as we move through early 2026, the tide is turning. It is no longer just about what the machine can do; it is about what the consumer feels when they know a machine did it. Earlier this week, a pivotal study out of Florida International University (FIU), coauthored by researcher Joel Carnevale, dropped a bombshell on the corporate world: using AI without disclosure doesn’t just save money; it actively erodes trust.

Here is why that matters for the global economy. We are not talking about a niche sentiment among art critics in New York or London. We are talking about the fundamental valuation of intellectual property in a post-truth era. When a brand in Tokyo, a studio in Berlin, or an agency in São Paulo utilizes generative models without signaling it, they are gambling with their most valuable currency: reputation.

The Rise of the “Human Premium” in Global Trade

The FIU study highlights a phenomenon I call the “Human Premium.” Much like the organic food movement of the early 2000s, we are seeing the emergence of a certified “human-made” label in the creative industries. What we have is not merely a cultural preference; it is an economic reality that is reshaping supply chains.

Consider the luxury goods sector. For years, the value proposition was craftsmanship. Now, as generative AI floods the market with “good enough” designs, the scarcity of human intent becomes the differentiator. This creates a bifurcation in the market. On one side, you have high-volume, low-cost AI-generated content. On the other, you have verified human creation commanding a significant price markup.

But there is a catch. Verifying humanity is expensive. It requires blockchain provenance, watermarks, and rigorous auditing. This creates a barrier to entry for smaller creative firms in developing nations who may lack the infrastructure to prove their work is human-made, potentially widening the gap between Global North and Global South creative economies.

“We are witnessing the commoditization of trust. In a world saturated with synthetic media, the only scarce resource left is authentic human intent. Companies that fail to disclose their use of AI are effectively short-selling their own brand equity.” — Sarah Chen, Senior Fellow at the Center for International Governance Innovation (CIGI)

The implications for international trade are stark. If the US market, driven by studies like Carnevale’s, begins to penalize undisclosed AI use, multinational corporations must adapt their global supply chains. A marketing campaign produced in India for a US audience now carries a “reputation risk” if the AI involvement is hidden. This forces a standardization of disclosure protocols across borders.

Regulatory Friction: Brussels vs. Silicon Valley

While the US study focuses on consumer sentiment, the regulatory landscape is hardening elsewhere. The European Union’s AI Act, fully enforceable by this point in 2026, mandates strict transparency for generative content. This creates a friction point for US-based tech giants operating globally.

The divergence is clear. In the US, the approach has historically been innovation-first, letting the market decide. In Europe, it is rights-first, protecting the consumer and the creator. This regulatory arbitrage is forcing companies to choose: do they adopt the stricter EU standard globally to maintain a unified brand voice, or do they risk fragmenting their operations?

The FIU data suggests that market forces might accomplish what regulators intended. If consumers punish non-disclosure, companies will disclose, regardless of the law. This is the “soft power” of consumer sentiment aligning with the “hard power” of EU regulation.

To understand the scope of this shift, we must look at how different jurisdictions are handling the definition of “authorship” and “disclosure.” The table below outlines the current geopolitical stance on AI transparency as of March 2026.

| Region | Primary Regulatory Framework | Disclosure Mandate | Market Sentiment (2026) |
| --- | --- | --- | --- |
| European Union | EU AI Act (full enforcement) | Strict mandatory labeling for synthetic media | High demand for “Human-Made” certification |
| United States | State-level bills & FTC guidelines | Voluntary (driven by consumer backlash) | Growing skepticism of undisclosed AI |
| China | Generative AI Measures | Mandatory watermarking for public release | Focus on content security and alignment |
| United Kingdom | Pro-innovation approach | Context-specific (copyright focused) | Mixed; strong creative-sector pushback |

The Supply Chain of Trust

Let’s dig deeper into the mechanics of this shift. The “reputation damage” cited in the study is not abstract. It translates directly to stock volatility and customer churn. In the digital age, trust is the ultimate liquidity.

We are seeing the rise of third-party verification bodies, similar to Fair Trade or ISO certifications, but for digital content. These entities are becoming the new gatekeepers of the creative economy. For a global brand, ignoring this is akin to ignoring labor laws in the 1990s. It is a liability that will eventually crystallize into a lawsuit or a boycott.

This shift also impacts the talent market. Top-tier creative talent is increasingly demanding contracts that protect their “human signature.” They do not want their style trained into a model without compensation or credit. This is leading to a fragmentation of talent pools, where “AI-safe” agencies command higher retainers.

The geopolitical angle here is subtle but profound. Nations that invest heavily in human-centric creative education and protect IP rights may find themselves becoming the “Switzerland” of the creative world—neutral grounds of high-trust production. Conversely, regions that allow rampant, undisclosed AI scraping may find their creative exports devalued in premium markets.

Strategic Takeaways for the Global Observer

So, where does this leave us as we navigate the rest of 2026? The warning from Florida is clear: efficiency cannot come at the cost of authenticity. The market is correcting itself. The initial hype cycle of “AI can do everything” is settling into a more mature “AI should do what it’s told, but humans must take the credit (or the blame).”

For investors and policymakers, the signal is to watch for the “Transparency Premium.” Companies that adopt radical transparency regarding their AI stack will likely outperform those that hide it. The era of the black box is ending. The era of the glass box has begun.

As we move forward, the question is not whether to use AI, but how to share the story of its use. In a world of infinite synthetic content, the most valuable thing you can offer is the truth.

Omar El Sayed - World Editor
