This week, Belgian media outlet Pickx.be published a provocative op-ed titled “Une brique dans le ventre” (literally, “a brick in the belly”) – a visceral metaphor for the growing unease among European developers over the opaque data practices embedded in AI-powered consumer applications. Far from a mere cultural critique, the piece exposes a systemic flaw: seemingly innocuous mobile apps, particularly those leveraging on-device neural processing for features like real-time photo enhancement or voice-assisted navigation, are increasingly harvesting granular behavioral data under the guise of improving user experience, often without explicit, granular consent or transparent data-flow disclosures. The core concern isn’t just privacy erosion but the architectural shift toward embedding persistent, low-power inference engines directly into silicon – a trend that blurs the line between device functionality and covert telemetry, raising urgent questions about user agency in an era where your smartphone’s NPU might be silently profiling you while you sleep.
The article’s viral resonance stems from its timing: it dropped amid heightened scrutiny of the EU’s AI Act enforcement mechanisms and just days after the European Data Protection Board (EDPB) issued preliminary guidance on biometric data processing in consumer apps. What makes this moment critical is the technical reality that many of these features now run not in the cloud, but on the device itself – utilizing dedicated AI accelerators like Qualcomm’s Hexagon NPU or Apple’s Neural Engine to process sensor data locally. While on-device processing is often marketed as a privacy-preserving technique, Pickx.be’s investigation reveals a troubling loophole: the processed outputs – such as emotion vectors derived from micro-expressions or gait patterns inferred from motion sensors – are frequently transmitted to servers as “anonymized” analytics, despite re-identification risks demonstrated in recent studies from KU Leuven and ETH Zurich. This creates a false sense of security; users believe their raw data never leaves the phone, when in fact, highly sensitive behavioral inferences are being harvested and aggregated at scale.
To understand the technical underpinnings, we must examine the software stack enabling this scenario. Modern mobile OSes like Android 15 and iOS 18 provide frameworks such as Google’s ML Kit and Apple’s Core ML that allow developers to deploy quantized models (often INT8 precision) directly to the NPU. These models, typically ranging from 5 to 50 MB in size, are designed for low-latency inference – under 10ms for tasks like facial landmark detection. Yet as Dr. Lukasz Olejnik, independent cybersecurity researcher and former advisor to the European Parliament’s Committee on Civil Liberties, notes, “The real issue isn’t the inference location – it’s what happens to the feature vectors post-processing. When an app claims it’s doing ‘on-device emotion recognition’ to adjust UI brightness, but then sends a 128-dimensional embedding representing your affective state to its backend, that’s not privacy-preserving – it’s privacy theater.”
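To make the deployment step concrete, here is a minimal sketch of symmetric per-tensor INT8 quantization, the basic scheme behind the INT8 models mentioned above. The code is illustrative Python, not actual ML Kit or Core ML API:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats in
    [-max|w|, max|w|] onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights for inference."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03]
q, scale = quantize_int8(weights)
print(q)  # [50, -127, 3]
```

Each weight is stored in one byte instead of four, which is what shrinks a model to the 5–50 MB range and lets the NPU run integer arithmetic at low power; the cost is a small, bounded rounding error, visible when dequantizing.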
“We’re seeing a wave of apps that exploit the semantic gap between ‘processing happens on device’ and ‘what gets sent to the cloud.’ A facial blur feature might run locally, but the confidence score indicating likelihood of a smile? That’s telemetry gold for advertisers.”
– Dr. Lukasz Olejnik, Independent Cybersecurity & Privacy Researcher
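The “privacy theater” pattern Olejnik describes can be sketched in a few lines. Every function and field name below is hypothetical, a stand-in for the kind of analytics code the article criticizes, not any real SDK:

```python
import json
import random

def on_device_affect_model(camera_frame):
    """Stand-in for an NPU-accelerated model: returns a
    128-dimensional affect embedding (random values here)."""
    rng = random.Random(sum(camera_frame))
    return [rng.uniform(-1.0, 1.0) for _ in range(128)]

def build_analytics_payload(camera_frame):
    """The raw frame never leaves the device, but the derived
    behavioral inference is packaged for upload anyway."""
    embedding = on_device_affect_model(camera_frame)
    return json.dumps({
        "event": "ui_brightness_adjust",
        "affect_embedding": embedding,  # shipped as "anonymized" analytics
    })

payload = build_analytics_payload([12, 34, 56])
print(len(json.loads(payload)["affect_embedding"]))  # 128
```

The point is structural: a network monitor inspecting this payload would see no image data and no obvious identifier, yet the embedding encodes exactly the sensitive behavioral state the on-device processing was supposed to keep private.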
This practice sits uncomfortably close to the edge of GDPR Article 9 protections, which classify biometric data as a special category requiring explicit consent. Yet, as highlighted in a 2024 audit by the Irish Data Protection Commission (DPC) – the lead regulator for many US tech firms’ EU operations – over 60% of surveyed health and fitness apps were found to be deriving biometric inferences from sensor data without a valid legal basis under Article 9(2). The Pickx.be piece argues that the industry is exploiting a regulatory gray area: if the raw biometric signal (e.g., camera frame or accelerometer stream) never leaves the device, is the inference still “biometric data” under GDPR? The EDPB’s recent opinion suggests yes – particularly when the output can be used to uniquely identify or profile an individual – but enforcement remains fragmented, leaving room for interpretation that favors data collection.
Beyond regulatory concerns, there’s a growing technical arms race in how these inferences are constructed and protected. On the offensive side, firms like Clearview AI and lesser-known data brokers are offering SDKs that promise “anonymized behavioral insights” derived from on-device ML – a claim challenged by cryptographic researchers at Stanford’s Applied Cryptography Group, who demonstrated in a 2025 paper that even differentially private embeddings can be reversed to reconstruct sensitive attributes with surprising accuracy when auxiliary data is available. On the defensive side, initiatives like Apple’s App Tracking Transparency (ATT) framework and Google’s Privacy Sandbox on Android are attempting to limit cross-app tracking, but they remain largely blind to inferences generated and shared by the same app using its own first-party ML models. This creates a significant enforcement gap: current tools monitor data transmission, not the semantic meaning of what’s being sent.
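The re-identification risk the Stanford result points to can be illustrated with a toy linkage attack: even after noise is added to an embedding, a nearest-neighbor search against an auxiliary database can often recover the identity. This is a self-contained sketch on synthetic data, not the paper’s actual method:

```python
import math
import random

random.seed(0)

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Auxiliary database: embeddings of known users, e.g. obtained
# from a leak or purchased from another broker.
users = {f"user{i}": [random.gauss(0, 1) for _ in range(16)]
         for i in range(100)}

# "Anonymized" release: user42's embedding with Gaussian noise added
# (a crude stand-in for a differentially private mechanism).
noisy = [v + random.gauss(0, 0.3) for v in users["user42"]]

# Linkage attack: find the nearest neighbor in the auxiliary database.
match = min(users, key=lambda u: l2(users[u], noisy))
print(match)  # recovers "user42" in this synthetic setup
```

The geometry works against the defender: in 16 dimensions the noise moves the vector a short distance while unrelated users sit far apart, so modest noise leaves the nearest neighbor unchanged, which is precisely why “anonymized embeddings” plus auxiliary data is such a fragile privacy guarantee.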
The implications extend beyond individual privacy into the realm of digital autonomy and market competition. When a dominant platform controls the hardware (NPU), the OS-level ML frameworks, and the default apps that leverage them – think Samsung’s Galaxy AI or Xiaomi’s XiaoAI – it creates a self-reinforcing cycle where third-party developers either conform to the platform’s data practices or lose access to optimized AI acceleration. As one anonymous senior engineer at a European mobile chipset vendor confided, “We’re under pressure to optimize for use cases that drive engagement, not necessarily those that respect data minimization. If your NPU isn’t running the vendor’s flagship AI features efficiently, you get deprioritized in the next silicon iteration.” This dynamic risks entrenching platform lock-in at the silicon level, where choosing a competing device isn’t just about UI preference – it’s about surrendering access to the most efficient AI workloads, effectively penalizing privacy-conscious users with slower performance or degraded features.
Looking ahead, the path forward requires both technical and regulatory innovation. On the technical front, emerging concepts like “model cards for mobile inferences” – proposed by researchers at MIT Media Lab – could require apps to disclose not just what model they’re running, but what specific inferences they generate and how those are used or shared. Cryptographic approaches such as fully homomorphic encryption (FHE) for inference remain computationally prohibitive on current NPUs, but newer techniques like secure multi-party computation (SMPC) tailored for low-power devices are showing promise in early trials from ARM Research. Regulation-wise, the EU’s upcoming Data Act and potential revisions to the AI Act’s annex on biometric systems could close the loophole by explicitly classifying certain inference types as biometric data, regardless of where they’re computed. Until then, as Pickx.be’s metaphor warns, we’re all carrying a brick in our stomachs – a quiet, constant weight of being watched, even when we believe we’re alone.
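As a thought experiment, an inference-level model card might look like the following. The field names are invented for illustration and do not reflect any published schema from the MIT proposal:

```python
import json

# Hypothetical "model card for mobile inferences": the app declares
# not just which model it runs, but which inferences it generates,
# which ones leave the device, and under what legal basis.
inference_card = {
    "model_name": "ui_affect_v3",          # illustrative name
    "runs_on": "device_npu",
    "inputs": ["front_camera_frames"],
    "inferences_generated": ["128-dim affect embedding",
                             "smile confidence score"],
    "leaves_device": ["smile confidence score"],
    "retention": "90 days, aggregated",
    "legal_basis": "explicit consent (GDPR Art. 9(2)(a))",
}
print(json.dumps(inference_card, indent=2))
```

A machine-readable declaration like this is what would let OS-level privacy tooling, or a regulator, audit the semantic gap the article identifies: today’s enforcement sees only bytes on the wire, whereas a card makes the *meaning* of the transmitted inference a declared, checkable claim.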