Bonuses Can Lower Self-Set Goals and Reduce Performance, New Experiment Suggests

In a counterintuitive finding that challenges decades of motivational theory, a controlled experiment published this week suggests that offering monetary bonuses for achieving self-set performance goals can diminish both the ambition of those goals and the subsequent effort to reach them. The result indicates that extrinsic rewards may undermine intrinsic motivation in cognitively demanding tasks, a dynamic with significant implications for the AI-driven performance management systems increasingly deployed across tech enterprises.

The study, conducted by behavioral scientists at Stanford and published in Nature Human Behaviour, tracked 1,200 software engineers across three tech firms over six months, manipulating bonus structures tied to self-established quarterly objectives. Participants in the bonus condition set goals averaging 18% lower than the control group and demonstrated 12% less code output and innovation velocity, even when controlling for baseline skill and task complexity. Crucially, the effect persisted after bonuses were withdrawn, indicating a potential erosion of self-regulatory capacity rather than mere short-term compliance.

This phenomenon aligns with self-determination theory, which posits that external incentives can shift perceived locus of causality from internal to external, reducing feelings of autonomy and competence. In software development—a domain where deep work, exploratory learning, and iterative problem-solving are paramount—such shifts may degrade the very cognitive processes bonuses aim to enhance. As one senior engineer at a FAIR-compliant AI lab noted, “When you tie compensation to story points or sprint velocity, you’re not just measuring output; you’re reshaping how engineers perceive the value of their own curiosity.”

How Bonus Structures Interact with AI-Augmented Workflows

Modern dev environments increasingly layer AI copilots—like GitHub Copilot X or Amazon CodeWhisperer—onto traditional workflows, creating hybrid human-AI performance metrics. When bonuses are tied to metrics like “lines of code generated” or “PRs merged per week,” they risk incentivizing over-reliance on AI suggestions without critical evaluation, potentially increasing technical debt. A 2025 IEEE study found that teams using AI-assisted coding under outcome-based bonuses showed a 22% increase in hard-to-detect logic errors during code review, as engineers prioritized speed over comprehension.

Moreover, the homogenizing effect of standardized bonus metrics may disadvantage neurodivergent engineers or those engaged in exploratory research—such as prompt engineering for LLMs or latency optimization in edge AI—where progress is non-linear and difficult to quantify. As Dr. Elena Ruiz, CTO of an open-source ML infrastructure firm, explained in a recent interview: “We’ve seen top talent disengage when their work on foundational model interpretability doesn’t ‘move the needle’ on bonus-eligible metrics, even when it prevents catastrophic failures downstream.”

“Incentivizing measurable output in AI development is like rewarding a novelist for words per hour—it ignores the architecture of thought beneath the surface.”

Ecosystem Implications: From Internal Platforms to Open Source

These findings pose a strategic dilemma for companies investing heavily in internal developer platforms (IDPs) that gamify productivity through points, leaderboards, and tangible rewards. While such systems can boost adoption of standardized tools—like internal API gateways or service meshes—they may inadvertently suppress innovation in areas not captured by the reward schema. This dynamic risks widening the gap between platform engineers focused on metric optimization and those working on long-term architectural resilience.

In open-source communities, where contribution is traditionally driven by intrinsic motivators like reputation, learning, and altruism, the introduction of corporate-sponsored bounty programs (e.g., via GitHub Sponsors or HackerOne) creates tension. A 2024 study by the Linux Foundation found that projects offering bounties for bug fixes saw a 30% decline in voluntary documentation contributions—a task rarely rewarded but critical for maintainability. As one maintainer of a widely adopted Kubernetes operator put it: “When you start paying for patches but not for clarity, you get a system that works but no one understands.”

The Path Forward: Redesigning Incentives for Cognitive Work

Organizations seeking to harness both AI augmentation and human ingenuity must reconsider how they structure rewards. Alternatives gaining traction include:

  • Time-boxed innovation buffers: Allocating 20% of sprint time to self-directed exploration, akin to Google’s former “20% time,” with outcomes shared in internal tech talks rather than tied to bonuses.
  • Peer-nominated impact awards: Recognizing qualitative contributions—such as mentorship, design clarity, or system resilience—through transparent, community-driven nomination.
  • Dynamic goal-setting frameworks: Using OKRs (Objectives and Key Results) with qualitative key results that value learning, experimentation, and knowledge sharing, not just completion rates.
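The third alternative above can be made concrete as a data structure. The sketch below (all field and class names are hypothetical, invented for illustration) shows how an OKR record might distinguish qualitative key results, graded by narrative evidence such as tech talks or write-ups, from purely numeric ones:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    qualitative: bool = False  # qualitative KRs are graded by narrative evidence, not a number
    evidence: list[str] = field(default_factory=list)  # e.g. tech-talk links, design docs

@dataclass
class Objective:
    title: str
    key_results: list[KeyResult]

# A hypothetical OKR that values learning alongside completion rates.
okr = Objective(
    title="Deepen team understanding of model-serving latency",
    key_results=[
        KeyResult("Reduce p99 latency regression rate"),
        KeyResult(
            "Document three failed experiments and the lessons learned",
            qualitative=True,
            evidence=["internal tech talk", "postmortem write-up"],
        ),
    ],
)
```

The point of the design is that a qualitative key result carries its supporting evidence explicitly, so learning and knowledge sharing can be reviewed rather than reduced to a completion percentage.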

Critically, any metric tied to compensation must undergo regular audits for Goodhart’s Law effects—where a measure ceases to be a good indicator when it becomes a target. As AI systems become more embedded in performance tracking, the risk of optimizing for what is measurable rather than what is meaningful grows. The solution lies not in abandoning incentives, but in designing them to preserve the autonomy, mastery, and purpose that drive sustained excellence in complex cognitive work.
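The Goodhart dynamic such an audit is meant to catch can be illustrated with a toy simulation (all numbers are hypothetical and chosen only to make the divergence visible): when a bonus shifts effort toward a measured proxy, the proxy improves even as the underlying value falls.

```python
# Toy illustration of Goodhart's Law in performance metrics.
# The coefficients are invented for illustration; this is a sketch, not a model of the study.

def quarter_outcomes(weight_on_proxy: float) -> tuple[float, float]:
    """Return (proxy_metric, true_value) when an engineer splits effort
    between maximizing the measured proxy (e.g. PR count) and careful work.

    weight_on_proxy: fraction of effort spent on what the metric rewards.
    """
    careful = 1.0 - weight_on_proxy
    proxy = 10 * weight_on_proxy + 4 * careful      # the proxy rewards volume
    true_value = 2 * weight_on_proxy + 9 * careful  # real value rewards care
    return proxy, true_value

# No bonus pressure: effort allocated by intrinsic judgment.
baseline_proxy, baseline_value = quarter_outcomes(0.3)
# Bonus tied to the proxy: effort shifts toward what is measured.
bonus_proxy, bonus_value = quarter_outcomes(0.8)

assert bonus_proxy > baseline_proxy   # the measured number improves...
assert bonus_value < baseline_value   # ...while the underlying value drops
```

An audit in this spirit would periodically compare the incentivized metric against an independent signal of value (defect rates, peer review quality) and retire the metric once the two diverge.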

This week’s experiment serves as a timely reminder: in the age of AI-augmented labor, the most dangerous bugs may not be in the code—but in the assumptions we make about what motivates the humans writing it.

Sophie Lin - Technology Editor

Sophie is a tech innovator and acclaimed tech writer recognized by the Online News Association. She translates the fast-paced world of technology, AI, and digital trends into compelling stories for readers of all backgrounds.
