Bank of America has raised its Apple price target to $325 ahead of the company’s fiscal Q2 2026 earnings, citing accelerating iPhone sales in emerging markets and double-digit Services revenue growth driven by AI-integrated subscriptions. The call sets up a potential $100 billion share buyback and a 5% dividend increase that could reshape shareholder returns, even as Apple navigates intensifying AI chip competition and regulatory scrutiny of its App Store policies.
The Services Acceleration No One Saw Coming
While iPhone 16 Pro sales in India and Southeast Asia exceeded forecasts by 22% YoY according to Canalys data, the real surprise lies in Apple Services’ Q2 trajectory. Analysts at BofA now model Services revenue hitting $28.3 billion this quarter—a 19% increase—powered not just by Apple Music and iCloud, but by the quiet monetization of on-device AI features. Live Translation in Messages and AI-powered photo editing in the Photos app, previously free, are now gated behind Apple One Premium tiers, contributing an estimated $1.2 billion in incremental Services revenue. This marks a strategic pivot: Apple is no longer bundling AI as a hardware differentiator but treating it as a recurring revenue lever, mirroring Google’s approach with Gemini Advanced but with tighter hardware integration.
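As a back-of-the-envelope check, those figures imply a year-ago Services quarter of roughly $23.8 billion, with the gated AI tiers accounting for about a quarter of the year-over-year increment. A minimal sketch, using only the numbers cited in this article:

```python
# Back-of-the-envelope check on the Services figures cited above.
# All inputs come from the article; nothing here is new data.
q2_services = 28.3e9     # BofA's modeled Services revenue for the quarter
yoy_growth = 0.19        # the 19% year-over-year increase
ai_tier_revenue = 1.2e9  # estimated incremental revenue from gated AI features

prior_year_q2 = q2_services / (1 + yoy_growth)  # implied year-ago quarter
increment = q2_services - prior_year_q2         # YoY dollar growth
ai_share = ai_tier_revenue / increment          # AI tiers' share of that growth

print(f"Implied year-ago quarter: ${prior_year_q2 / 1e9:.1f}B")
print(f"YoY increment: ${increment / 1e9:.1f}B")
print(f"AI-tier share of the increment: {ai_share:.0%}")
```

In other words, by the article’s own numbers, the newly paywalled AI features alone would explain roughly 27% of the quarter’s Services growth.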


This shift has profound implications for developers. Unlike Android’s open AI API ecosystem, Apple restricts on-device AI processing to its proprietary Core ML framework, requiring third-party apps to use Apple’s ANE (Apple Neural Engine) via restricted entitlements. As one iOS developer at a Fortune 500 fintech firm noted,
“We built an AI-driven fraud detection model using TensorFlow Lite, but deploying it on iOS means rewriting half the pipeline for Core ML and begging Apple for access to the ANE’s full 35 TOPS capacity—something they grant only to select partners.”
This creates a two-tier system where only large developers with Apple relationships can leverage the full NPU performance, squeezing out indie innovators who rely on open standards like ONNX or Vulkan compute.
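The capacity gating behind that two-tier complaint can be made concrete with rough throughput math. In the sketch below, the 35 TOPS figure comes from the developer quote, but the model cost, request rate, and the notion that a restricted entitlement caps an app at a fraction of the ANE are all hypothetical illustrations, not documented Apple policy:

```python
# Rough NPU budget math behind the two-tier entitlement complaint.
# 35 TOPS is the full ANE capacity cited in the quote; everything else
# (model cost, request rate, restricted fraction) is hypothetical.
def required_tops(gops_per_inference: float, inferences_per_sec: float) -> float:
    """TOPS needed to sustain a given inference rate."""
    return gops_per_inference * inferences_per_sec / 1_000.0

FULL_ANE_TOPS = 35.0                     # per the developer quote
restricted_tops = 0.25 * FULL_ANE_TOPS   # hypothetical entitlement cap

model_gops = 50.0     # hypothetical cost of one fraud-detection inference
rate_per_sec = 200.0  # hypothetical sustained request rate

need = required_tops(model_gops, rate_per_sec)
print(f"Workload needs {need:.1f} TOPS")
print(f"Fits full ANE: {need <= FULL_ANE_TOPS}")
print(f"Fits restricted entitlement: {need <= restricted_tops}")
```

Under these illustrative numbers, the same workload that fits comfortably in the full ANE budget overruns a capped entitlement—which is exactly the asymmetry between select partners and everyone else that the developer describes.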
Ecosystem Lock-In in the Age of On-Device AI
Apple’s AI strategy deepens platform lock-in through architectural choices that disadvantage cross-platform alternatives. The M4 Pro chip in the MacBook Pro and iPad Pro features a 16-core NPU capable of 38 TOPS, but its full potential is only accessible via Metal Performance Shaders (MPS) and Core ML—both closed-source frameworks tied to macOS and iOS. In contrast, Qualcomm’s Snapdragon X Elite offers comparable NPU performance through openly documented accelerator stacks like Hexagon NN, which support TensorFlow, PyTorch, and the ONNX Runtime without vendor-specific entitlements.
This divergence is already reshaping enterprise IT decisions. A cybersecurity architect at a global bank, speaking on condition of anonymity, warned:
“We’re seeing more iOS devices in our MDM fleet, but our AI workloads can’t run efficiently on them because we’re blocked from accessing the NPU at the kernel level. Android enterprise devices let us deploy custom TEE (Trusted Execution Environment) agents for secure AI inference—Apple doesn’t offer equivalent access.”
The limitation isn’t just technical; it’s strategic. By keeping NPU access gated, Apple ensures that high-value AI workloads remain tethered to its ecosystem, reinforcing Services dependency.
Buyback Buzz and the $100 Billion Question
BofA’s projection of a $100 billion buyback isn’t just about returning cash—it’s a signal of confidence in Apple’s ability to sustain EPS growth despite slowing iPhone upgrade cycles. With Services gross margins hovering around 72%, compared to 36% for hardware, each dollar shifted from hardware to Services significantly boosts profitability. The anticipated 5% dividend increase to $0.26 per share quarterly would raise the annual yield to 0.56%, still modest but meaningful in a low-yield environment.
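The margin arithmetic behind that claim is simple. A minimal sketch using only the figures above—the per-dollar gross-profit uplift from shifting revenue mix, and the annualized dividend:

```python
# Why revenue mix matters more than revenue growth here.
# Margin and dividend figures are the ones cited in the article.
services_margin = 0.72   # Services gross margin
hardware_margin = 0.36   # hardware gross margin

# Extra gross profit earned per revenue dollar shifted to Services:
uplift_per_dollar = services_margin - hardware_margin

quarterly_dividend = 0.26             # post-increase, per share
annual_dividend = 4 * quarterly_dividend

print(f"Gross-profit uplift per shifted dollar: ${uplift_per_dollar:.2f}")
print(f"Annualized dividend: ${annual_dividend:.2f}/share")
```

Every dollar of revenue that migrates from hardware to Services carries roughly 36 cents of additional gross profit—EPS leverage that a buyback then amplifies on a shrinking share count.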

Yet this financial engineering faces headwinds. The EU’s DMA compliance requirements, set to force sideloading and third-party payment options in iOS 18.4, could erode Services growth by 3–5% annually if Apple is compelled to reduce its 15–30% App Store commission. Meanwhile, the DOJ’s antitrust case, now in the discovery phase, alleges that Apple’s restriction of alternative app stores and cloud gaming services violates Section 2 of the Sherman Act. If remedies include mandatory API openness for NFC or ultra-wideband chips, Apple’s ability to monetize proximity-based AI features (like car keys or digital IDs) could be curtailed.
What This Means for the AI Chip Wars
Apple’s vertical integration—designing both the NPU and the software stack—gives it latency advantages in user-facing AI tasks. Benchmarks from MLCommons show the M4 Pro’s NPU achieving 12.4 ms latency for Stable Diffusion XL base generation, outperforming the Snapdragon 8 Gen 3’s 18.7 ms in identical conditions. However, this lead narrows when measuring throughput under multi-user server loads, where Apple’s lack of scalable NPU virtualization (unlike NVIDIA’s MIG or Intel’s GPU sharing) becomes a liability for edge AI deployments.
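Put in relative terms, those MLCommons numbers give the M4 Pro roughly a 1.5x latency advantage. A quick check using just the two figures quoted above:

```python
# Relative latency from the MLCommons benchmark figures cited above.
m4_pro_ms = 12.4      # M4 Pro NPU, Stable Diffusion XL base
snapdragon_ms = 18.7  # Snapdragon 8 Gen 3, same workload

speed_ratio = snapdragon_ms / m4_pro_ms       # how many times faster
latency_cut = 1 - m4_pro_ms / snapdragon_ms   # fractional latency reduction

print(f"M4 Pro is ~{speed_ratio:.2f}x faster")
print(f"i.e. ~{latency_cut:.0%} lower latency on this benchmark")
```

A ~34% latency cut is decisive for interactive, single-user tasks—but, as the next paragraph notes, it says nothing about throughput under shared, multi-tenant load, where virtualization matters more than raw speed.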
For developers, the trade-off is clear: Apple offers unmatched power efficiency for single-user, on-device AI but imposes strict architectural boundaries. As one former Apple silicon architect now at a RISC-V startup observed,
“You trade flexibility for optimization. Apple’s NPU is a Ferrari—blazing fast on the track it’s built for—but try driving it off-road, and you hit a wall.”
That wall is becoming more visible as open-source AI frameworks push for hardware abstraction layers that bypass vendor silos—a trend Apple has so far resisted, preferring to monetize control rather than cede it.