Data security firm Fortanix Inc. announced a new joint solution with NVIDIA: a turnkey platform that enables organizations to deploy agentic AI inside their own data centers or sovereign environments, backed by NVIDIA's confidential-computing GPUs.
“Our goal is to make AI trustworthy by securing every layer, from the chip to the model to the data," said Fortanix CEO and co-founder Anand Kashyap in a recent video call interview with VentureBeat. "Confidential computing gives you that end-to-end trust so you can confidently use AI with sensitive or regulated information.”
The solution arrives at a pivotal moment for industries such as healthcare, finance, and government: sectors eager to embrace AI but constrained by strict privacy and regulatory requirements.
Fortanix's new platform, powered by NVIDIA Confidential Computing, allows enterprises to build and run AI systems on sensitive data without sacrificing security or control.
“Enterprises in finance, healthcare and government want to harness the power of AI, but compromising on trust, compliance, or control creates insurmountable risk,” said Anuj Jaiswal, chief product officer at Fortanix, in a press release. “We're giving enterprises a sovereign, on-prem platform for AI agents: one that proves what's running, protects what matters, and gets them to production faster.”
Secure AI, Verified from Chip to Model
At the heart of the Fortanix–NVIDIA collaboration is a confidential AI pipeline that ensures data, models, and workflows remain protected throughout their lifecycle.
The system uses a combination of Fortanix Data Security Manager (DSM) and Fortanix Confidential Computing Manager (CCM), integrated directly into NVIDIA's GPU architecture.
“You can think of DSM as the vault that holds your keys, and CCM as the gatekeeper that verifies who's allowed to use them," Kashyap said. "DSM enforces policy, CCM enforces trust.”
DSM serves as a FIPS 140-2 Level 3 hardware security module that manages encryption keys and enforces strict access controls.
CCM, launched alongside this announcement, verifies the trustworthiness of AI workloads and infrastructure using composite attestation, a process that validates both CPUs and GPUs before allowing access to sensitive data.
Only when a workload is verified by CCM does DSM release the cryptographic keys needed to decrypt and process data.
“The Confidential Computing Manager checks that the workload, the CPU, and the GPU are running in a trusted state," Kashyap explained. "It issues a certificate that DSM validates before releasing the key. That ensures the right workload is running on the right hardware before any sensitive data is decrypted.”
This “attestation-gated” model creates what Fortanix describes as a provable chain of trust extending from the hardware chip to the application layer.
It's an approach aimed squarely at industries where confidentiality and compliance are non-negotiable.
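The attestation-gated flow described above can be illustrated in miniature. The sketch below is a simplified model under stated assumptions: the class names, the boolean attestation reports, and the HMAC-based "certificate" are all hypothetical stand-ins, not Fortanix's actual APIs, which rely on hardware attestation reports and proper signed credentials.

```python
# Minimal sketch of attestation-gated key release. All names and the
# HMAC-based "certificate" are illustrative, not the real Fortanix API.
import hashlib
import hmac
import os

CCM_SECRET = os.urandom(32)  # stand-in for the CCM's signing identity


class ConfidentialComputingManager:
    """Performs composite attestation (workload + CPU + GPU) and issues a certificate."""

    def __init__(self, trusted_workloads):
        self.trusted_workloads = trusted_workloads  # approved workload measurements

    def attest(self, workload_hash, cpu_report_ok, gpu_report_ok):
        # Composite attestation: every component must be in a trusted state.
        if cpu_report_ok and gpu_report_ok and workload_hash in self.trusted_workloads:
            return hmac.new(CCM_SECRET, workload_hash, hashlib.sha256).hexdigest()
        return None  # no certificate means DSM will never release a key


class DataSecurityManager:
    """Holds encryption keys; releases one only against a valid CCM certificate."""

    def __init__(self, keys):
        self.keys = keys

    def release_key(self, key_id, workload_hash, certificate):
        expected = hmac.new(CCM_SECRET, workload_hash, hashlib.sha256).hexdigest()
        if certificate is not None and hmac.compare_digest(certificate, expected):
            return self.keys[key_id]
        raise PermissionError("attestation failed: key not released")


# Usage: only an attested workload on trusted hardware obtains the key.
workload = hashlib.sha256(b"approved-model-v1").digest()
ccm = ConfidentialComputingManager(trusted_workloads={workload})
dsm = DataSecurityManager(keys={"patient-data": os.urandom(32)})

cert = ccm.attest(workload, cpu_report_ok=True, gpu_report_ok=True)
key = dsm.release_key("patient-data", workload, cert)  # succeeds only after attestation
```

The point of the pattern is that the key custodian (DSM) never trusts the workload directly; it trusts only the attestation verdict, so an unattested or tampered workload simply never receives decryption material.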
From Pilot to Production Without the Security Trade-Off
According to Kashyap, the partnership marks a step forward from traditional data encryption and key management toward securing entire AI workloads.
Kashyap explained that enterprises can deploy the Fortanix–NVIDIA solution incrementally, using a lift-and-shift model to migrate existing AI workloads into a confidential environment.
“We offer two form factors: SaaS with zero footprint, and self-managed. Self-managed can be a virtual appliance or a 1U physical FIPS 140-2 Level 3 appliance," he noted. "The smallest deployment is a three-node cluster, with larger clusters of 20–30 nodes or more.”
Customers already running AI models, whether open-source or proprietary, can move them onto NVIDIA's Hopper or Blackwell GPU architectures with minimal reconfiguration.
For organizations building out new AI infrastructure, Fortanix's Armet AI platform provides orchestration, observability, and built-in guardrails to speed up time to production.
“The result is that enterprises can move from pilot projects to trusted, production-ready AI in days rather than months,” Jaiswal said.
Compliance by Design
Compliance remains a key driver behind the new platform's design. Fortanix's DSM enforces role-based access control, detailed audit logging, and secure key custody, elements that help enterprises demonstrate compliance with stringent data protection regulations.
These controls are essential for regulated industries such as banking, healthcare, and government contracting.
The company emphasizes that the solution is built for both confidentiality and sovereignty.
For governments and enterprises that must retain local control over their AI environments, the system supports fully on-premises or air-gapped deployment options.
Fortanix and NVIDIA have jointly integrated these technologies into the NVIDIA AI Factory Reference Design for Government, a blueprint for building secure national or enterprise-level AI systems.
Future-Proofed for a Post-Quantum Era
In addition to current encryption standards such as AES, Fortanix supports post-quantum cryptography (PQC) within its DSM product.
As global research in quantum computing accelerates, PQC algorithms are expected to become an essential component of secure computing frameworks.
“We don't invent cryptography; we implement what's proven,” Kashyap said. “But we also make sure that our customers are ready for the post-quantum era when it arrives.”
Real-World Flexibility
While the platform is designed for on-premises and sovereign use cases, Kashyap emphasized that it can also run in major cloud environments that already support confidential computing.
Enterprises operating across multiple regions can maintain consistent key management and encryption controls, either through centralized key hosting or replicated key clusters.
This flexibility allows organizations to shift AI workloads between data centers or cloud regions, whether for performance optimization, redundancy, or regulatory reasons, without losing control over their sensitive information.
Fortanix converts usage into “credits,” which correspond to the number of AI instances running within a factory environment. The structure allows enterprises to scale incrementally as their AI projects grow.
Fortanix will showcase the joint platform at NVIDIA GTC, held October 27–29, 2025, at the Walter E. Washington Convention Center in Washington, D.C. Visitors can find Fortanix at booth I-7 for live demonstrations and discussions on securing AI workloads in highly regulated environments.
About Fortanix
Fortanix Inc. was founded in 2016 in Mountain View, California, by Anand Kashyap and Ambuj Kumar, both former Intel engineers who worked on trusted execution and encryption technologies. The company was created to commercialize confidential computing, then an emerging concept, by extending the protection of encrypted data beyond storage and transmission to data in active use, according to TechCrunch and the company's own About page.
Kashyap, who previously served as a senior security architect at Intel and VMware, and Kumar, a former engineering lead at Intel, drew on years of work in trusted hardware and virtualization systems. Their shared insight into the gap between research-grade cryptography and enterprise adoption drove them to found Fortanix, according to Forbes and Crunchbase.
Today, Fortanix is recognized as a global leader in confidential computing and data security, offering solutions that protect data across its lifecycle: at rest, in transit, and in use.
Fortanix serves enterprises and governments worldwide, with deployments ranging from cloud-native services to high-security, air-gapped systems.
"Historically we offered encryption and key-management capabilities," Kashyap said. "Now we're going further to secure the workload itself, specifically AI, so an entire AI pipeline can run protected with confidential computing. That applies whether the AI runs in the cloud or in a sovereign environment handling sensitive or regulated data."