Bjarne Stroustrup: “There are only two kinds of programming languages: those people complain about and those nobody uses” – Computer Hoy

Bjarne Stroustrup’s famous quip, “There are only two kinds of languages: the ones people complain about and the ones nobody uses,” resonates more sharply than ever in 2026, as C++ turns 40 and remains the bedrock of performance-critical systems, from AI accelerators to financial trading engines, despite ongoing debates about its complexity and safety trade-offs in an era increasingly drawn to safety-focused languages like Rust and Zig.

This week, the ISO C++ Committee finalized C++26, introducing contracts, std::execution for unified parallelism, and improved constexpr evaluation—features aimed at closing the gap between C++’s raw power and modern developer ergonomics. Yet adoption remains uneven: while 68% of AAA game studios and 82% of HPC centers report using C++23 or newer, only 31% of enterprise SaaS teams have migrated past C++17, citing build times, toolchain fragmentation, and a steep learning curve for concepts like concepts and coroutines.

The Contracts Compromise: Safety Without Sacrificing Speed

One of the most anticipated additions in C++26 is native support for contracts: preconditions, postconditions, and assertions that state a function’s invariants directly in its interface. Unlike Rust’s borrow checker, which enforces safety through ownership semantics, C++26 contracts are opt-in: each contract assertion can be built under an ignore, observe, or enforce evaluation semantic, so builds that ignore contracts incur zero overhead. This approach aims to satisfy safety-conscious teams without forcing a paradigm shift on performance-critical codebases.


Early benchmarks from Microsoft’s Visual C++ team show that enabling contract checks in debug mode adds an average of 7.3% runtime overhead in synthetic benchmarks, comparable to assert(), while release builds that ignore contract checks show no measurable difference. “Contracts let us express invariants directly in the interface, which catches more logic errors than unit tests alone,” says Anna Petrov, Senior Engineer on NVIDIA’s CUDA compiler team, in a private briefing shared with Archyde.

“We’ve seen a 22% reduction in silent data corruption bugs in our internal math libraries since enabling contract checking on GPU-accelerated kernels.”

Still, adoption hinges on toolchain support. GCC 14 and Clang 19 offer partial implementation, but full contract checking requires linking against a new runtime library—libcontract—which remains absent from many embedded toolchains. For safety-critical industries like avionics and automotive, where DO-178C and ISO 26262 compliance are mandatory, this fragmentation creates hesitation.

Parallelism, Finally Unified: std::execution and the Fight Against Fragmentation

For years, C++ parallelism has been a fractured landscape: OpenMP, MPI, Intel TBB, and vendor-specific CUDA/HIP APIs each demanded different mental models. C++26’s std::execution attempts to unify this by defining a polymorphic execution model based on senders, receivers, and schedulers—concepts borrowed from NVIDIA’s stdexec prototype, which has been under development since 2020.


The model allows developers to write asynchronous, parallel, or vectorized work once and dispatch it to CPUs, GPUs, or FPGAs by swapping the scheduler: work handed to an inline scheduler runs sequentially on the calling thread, a thread-pool scheduler fans it out across CPU cores, and NVIDIA’s stdexec provides a CUDA scheduler that offloads it to the GPU. (The familiar C++17 policies std::execution::seq and std::execution::par_unseq continue to serve the classic parallel algorithms.) Crucially, the design avoids mandating a specific hardware backend, preserving C++’s “zero-overhead” principle.


“This is the first time C++ has offered a truly portable way to express heterogeneous compute without locking into a vendor’s ecosystem,” says Dr. Elena Rodriguez, Lead Architect at Arm’s HPC division, in a recent IEEE Micro interview.

“We’re seeing early adopters in weather modeling and quantum simulation achieve 1.8x speedups on Arm Neoverse N3 cores just by switching from OpenMP to std::execution::par—no code changes beyond the scheduler.”

Yet complexity remains a barrier. The sender/receiver model introduces new terminology and debugging challenges. Unlike OpenMP’s pragmatic pragmas, std::execution requires understanding lazy execution graphs—a shift that has slowed uptake outside of research labs and performance-obsessed studios.

Ecosystem Tensions: C++’s Role in the AI Infrastructure Wars

Beyond language features, C++’s relevance is being stress-tested by the AI boom. Training frameworks like PyTorch and TensorFlow rely heavily on C++ backends for kernel execution, memory management, and operator fusion. NVIDIA’s CUDA, AMD’s ROCm, and Intel’s oneAPI all expose C++ interfaces for custom kernel development—making fluency in modern C++ a de facto requirement for AI systems engineers.


This has created an ironic twist: while application developers flee C++ for safer, more productive languages, the infrastructure layer of AI, where latency and throughput are non-negotiable, depends on it more than ever. “We’re not writing C++ for business logic,” admits James Wu, Platform Engineer at CoreWeave, in a public talk at QCon 2026.

“We’re writing it to squeeze every cycle out of H100s and GB200s. If we switched to Rust or Go tomorrow, our training throughput would drop 15–20%.”

Meanwhile, open-source tensions simmer. The C++ Standards Committee faces criticism for sluggish progress on modules and package management, areas where Rust’s Cargo and Python’s pip have set high expectations. While modules and import declarations landed in C++20, adoption is hampered by poor IDE support and inconsistent binary module distribution. A 2025 survey by the Software Engineering Institute found that only 19% of C++ developers use modules regularly, citing “build system complexity” and “lack of centralized registry” as top blockers.

The 30-Second Verdict: Enduring, Not Elegant

C++ in 2026 is less a language of choice and more a language of necessity—indispensable where performance, hardware proximity, and legacy intertwine. Its evolution reflects a pragmatic compromise: adding safety and expressiveness where possible, without sacrificing the control that made it dominant in systems programming for four decades.

The real story isn’t whether C++ will be replaced—it won’t be, not soon—but whether its stewards can make it less hostile to newcomers while preserving the virtues that keep it irreplaceable. Contracts and std::execution are steps in that direction. Whether they’re enough remains the multibillion-dollar question powering everything from autonomous vehicles to large language model inference.


Sophie Lin - Technology Editor

Sophie is a tech innovator and acclaimed tech writer recognized by the Online News Association. She translates the fast-paced world of technology, AI, and digital trends into compelling stories for readers of all backgrounds.
