As AI adoption outpaces public trust, Ahmad Shadid of ORGN makes the case that confidential computing and verifiable execution offer the cryptographic proof AI needs to earn back trust.

Confidential Computing Is How AI Earns Back The Trust It Has Already Lost — And Why It Needs To Become The New Standard

2026/03/20 18:50
Why AI's Trust Problem Will Not Be Solved By Better Privacy Policies — And What Cryptographic Proof Can Do Instead

AI systems are moving fast into sensitive workflows — writing code, handling customer data, and supporting decisions in regulated sectors such as finance and healthcare. The speed of that integration has created a structural problem that the industry has yet to adequately address.

The challenge is trust. A study conducted by the University of Melbourne in collaboration with KPMG, surveying more than 48,000 people across 47 countries, found that while 66% of respondents use AI regularly, fewer than half — just 46% — say they are willing to trust AI systems. Usage and confidence are moving in opposite directions, and the gap between them is widening.

The data privacy dimension of this trust deficit is particularly acute. According to Stanford’s 2025 AI Index, global confidence that AI companies protect personal data fell from 50% in 2023 to 47% in 2024, while fewer people now believe that AI systems are unbiased and free from discrimination compared to the previous year. That decline is taking place precisely as AI becomes more deeply embedded in daily life and professional environments, making the stakes of misplaced trust considerably higher.

Ahmad Shadid, CEO of ORGN, the world’s first confidential development environment, argues that the next phase of AI will not be built on trust — it will be built on proof. Confidential computing and verifiable execution are making it possible to demonstrate exactly how data is processed, rather than simply promise that it is safe. 

In a conversation with MPost, he explained how these technologies address the privacy and trust gaps that conventional security measures leave open in AI workflows, and what it would take for them to become mainstream.

How AI Companies Typically Protect Data Today — And Why It Is Not Enough

Most AI companies currently rely on a combination of encryption, access controls, and governance policies to protect sensitive data. Encryption is applied to data at rest and in transit using established algorithms, while role-based access controls, logging, and anomaly detection govern who can interact with systems and under what conditions. These measures represent the industry baseline, and for many use cases, they are sufficient.

The problem arises at a specific and largely overlooked moment: when data is decrypted inside memory for model training or inference. At that point, a window of exposure opens. Confidential computing addresses this directly by encrypting data while it is actively being processed, within the hardware itself, so that even the infrastructure operator cannot see what is happening inside the machine.

Shadid identifies a structural vulnerability that standard security approaches do not fully close. When data is decrypted on a server that a customer does not directly control — a public cloud environment or a third-party AI platform, for instance — the customer has no technical means of verifying what actually happens to it. They are, in practice, relying on the vendor’s word.

This concern is not limited to end users. In regulated environments, CISOs, compliance auditors, and regulators face the same problem. They typically rely on ISO 27001 certificates, SOC 2 reports, and policy documents — instruments that, as Shadid puts it, prove intent more than they prove what actually happens to data in use. Confidential computing with attestation changes that equation by providing tamper-resistant cryptographic evidence that a specific model version ran inside an approved trusted execution environment with an approved software stack. The assurance shifts from documented intention to verifiable technical fact.

The regulatory momentum behind this shift is already visible. According to IDC’s July 2025 Confidential Computing Study, 77% of organisations said the EU’s Digital Operational Resilience Act (DORA) made them more likely to consider confidential computing, and 75% have already adopted it in some form. The primary benefits reported were improved data integrity, proven confidentiality assurances, and stronger regulatory compliance.

What Verifiable Execution Means In Practice

For a non-technical audience, Shadid describes verifiable execution as receiving a cryptographic receipt after an AI system processes data. That receipt demonstrates, in a mathematically verifiable way, that the AI ran on genuine certified hardware, that it executed the expected version of the software and nothing else alongside it, and that the environment was appropriately secured before any sensitive data was unlocked. The integrity of the process no longer rests on trusting the provider’s assurances — it rests on verifying the evidence.

At a technical level, this is achieved through three interconnected mechanisms. Trusted execution environments, or TEEs, allow the processor to carve out a sealed enclave — memory and execution isolated at the silicon level — so that neither the operating system, the hypervisor, nor the cloud operator can read what is happening inside. Remote attestation then allows an external party to verify that a genuine TEE is running an approved software stack before any decryption keys or sensitive inputs are released. Finally, verifiable outputs allow some systems to sign their results with an attestation-linked certificate, so that anyone receiving the output can confirm it came from the expected application inside a protected environment and was not altered in transit.
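The interplay of these three mechanisms can be sketched in a few lines of Python. This is an illustrative toy, not a real TEE API: the measurement values, the key-release helper, and the use of an HMAC in place of an attestation-linked certificate are all assumptions made for clarity.

```python
import hashlib
import hmac

# Verifier policy: software-stack measurements the relying party accepts.
# (Hypothetical value; real measurements come from a vendor-signed quote.)
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"approved-model-v1.2").hexdigest(),
}

def attestation_ok(report: dict) -> bool:
    """Remote attestation reduced to its core question: is the enclave's
    reported software measurement one the verifier's policy approves?"""
    return report.get("measurement") in TRUSTED_MEASUREMENTS

def release_key(report: dict, key: bytes) -> bytes:
    """Decryption keys are released only after attestation succeeds."""
    if not attestation_ok(report):
        raise PermissionError("enclave failed attestation; key withheld")
    return key

def sign_output(output: bytes, enclave_key: bytes) -> str:
    """Inside the enclave: bind the result to the attested environment.
    An HMAC stands in here for an attestation-linked signing certificate."""
    return hmac.new(enclave_key, output, hashlib.sha256).hexdigest()

def verify_output(output: bytes, tag: str, enclave_key: bytes) -> bool:
    """Outside the enclave: confirm the output came from the attested
    application and was not altered in transit."""
    expected = hmac.new(enclave_key, output, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

In a real deployment, the "receipt" Shadid describes would be the signed attestation quote plus the output signature, verifiable by anyone holding the hardware vendor's root certificates.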

Shadid argues that the advantages of confidential computing extend across the entire AI value chain. AI developers gain the ability to train and run models on sensitive or regulated datasets in shared cloud environments without exposing raw data to the platform operator. For enterprises, the technology reduces legal and reputational exposure by providing demonstrable proof that personal data remains protected during AI processing — supporting GDPR-class privacy requirements and sector-specific regulations. It also opens the door to cross-organisational data collaboration, because each party can verify that its data is only processed inside attested, policy-compliant environments, removing one of the principal barriers to joint AI projects.

For end users, the benefit is stronger and more tangible assurance that their personal data cannot be accessed by operators, insiders, or other cloud tenants while AI systems are running. It also makes higher-value services viable — personalised healthcare guidance or detailed financial advice, for instance — that were previously considered too sensitive to deliver via cloud infrastructure.

Shadid draws on his own experience as a software engineer to illustrate one of the less-discussed risks. Developers routinely paste proprietary code, configuration files, API keys, and tokens into AI coding tools, often with limited visibility into how that data is stored or used. The pace of the industry makes these tools difficult to avoid. It was precisely this tension — needing to move quickly while being acutely aware of the IP exposure — that led him to build ORGN, a confidential development environment constructed on confidential computing principles.

Why Mainstream Adoption Has Not Yet Arrived

Despite 75% enterprise adoption in some form, the IDC study found that only 18% of organisations have incorporated confidential computing into production environments. Shadid identifies three principal barriers: the complexity of attestation validation, a persistent perception of the technology as niche, and a shortage of engineers with the relevant skills.

Attestation validation, he explains, is considerably more involved in practice than it appears on paper. Attestation evidence arrives as binary structures or JSON objects containing measurements, certificates, and collateral that must be parsed, checked against vendor roots, and validated for freshness and revocation. Developers must then determine what counts as trusted — which firmware versions, image hashes, and application measurements are acceptable — and wire that logic into their own control plane or key management system. Major cloud providers including AWS, Azure, and Oracle already offer confidential compute at costs broadly comparable to standard infrastructure, so the barrier is not access or price. It is the engineering depth required to operationalise attestation correctly.
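The validation steps Shadid lists — parsing evidence, checking it against an allowlist, and testing freshness and revocation — can be sketched as a single policy check. This is a deliberately simplified sketch: real attestation evidence (e.g. SGX or SEV-SNP quotes) is binary and must be verified against vendor certificate chains, and every field name and policy value below is an assumption made for illustration.

```python
import json

# Hypothetical trust policy; in practice these values come from vendor
# roots of trust and an organisation's own allowlists.
ALLOWED_FIRMWARE = {"fw-7.21"}
ALLOWED_IMAGE_HASHES = {"sha256:1a2b"}
REVOKED_CERT_SERIALS = {"0xdeadbeef"}
MAX_AGE_SECONDS = 300  # freshness window for attestation evidence

def validate_evidence(evidence_json: str, now: float) -> list:
    """Parse attestation evidence (simplified here to JSON) and return
    a list of policy violations; an empty list means the enclave is
    accepted and keys may be released to it."""
    ev = json.loads(evidence_json)
    problems = []
    if ev["firmware"] not in ALLOWED_FIRMWARE:
        problems.append("untrusted firmware version")
    if ev["image_hash"] not in ALLOWED_IMAGE_HASHES:
        problems.append("unknown image hash")
    if ev["cert_serial"] in REVOKED_CERT_SERIALS:
        problems.append("signing certificate revoked")
    if now - ev["issued_at"] > MAX_AGE_SECONDS:
        problems.append("stale evidence (freshness check failed)")
    return problems
```

Even in this reduced form, the sketch shows why the work is non-trivial: deciding which firmware versions and image hashes belong in the allowlists, and keeping revocation data current, is the policy engineering that teams must wire into their own key management systems.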

Shadid’s view is that broader adoption will depend on three converging forces. First, attestation validation needs to become significantly more accessible, either through standardisation or through open-source tooling that abstracts the complexity away from individual development teams. Second, regulatory pressure will continue to drive adoption in the way that DORA already has — if frameworks in other sectors follow a similar trajectory, the business case for confidential computing will become increasingly difficult to set aside. Third, and perhaps most fundamentally, public awareness of what happens to data inside AI systems needs to grow. Most people, Shadid contends, have no clear picture of what occurs when they submit a prompt to a consumer AI tool. Greater awareness of that exposure — among developers and general users alike — would generate the kind of social pressure that accelerates adoption far more effectively than technical arguments alone.

Looking further ahead, he suggests that if confidential computing and verifiable execution become default infrastructure, the way AI services are designed, sold, and governed will change materially. Customers would receive cryptographic evidence of how their data was handled rather than policy assurances, enabling enterprises to demonstrate compliance to regulators and boards in concrete rather than documentary terms. The analogy Shadid draws is to storage and network encryption, which moved from optional security measure to universal baseline over a relatively short period. The direction for confidential execution, he argues, is the same — and once it arrives, every inference, every fine-tuning job, and every data handoff will carry a cryptographic attestation, making the integrity of the pipeline a matter of verifiable fact rather than institutional trust.

The post Confidential Computing Is How AI Earns Back The Trust It Has Already Lost — And Why It Needs To Become The New Standard appeared first on Metaverse Post.

