As AI transforms business operations across industries, critical challenges continue to surface around data storage: no matter how advanced the model, its performance hinges on the ability to access vast amounts of data quickly, securely, and reliably. Without the right data storage infrastructure, even the most powerful AI systems can be brought to a crawl by slow, fragmented, or inefficient data pipelines.
This topic took center stage on day one of VB Transform, in a session focused on medical imaging AI innovations spearheaded by PEAK:AIO and Solidigm. Together, alongside the Medical Open Network for AI (MONAI) project, an open-source framework for developing and deploying medical imaging AI, they are redefining how data infrastructure supports real-time inference and training in hospitals, from enhancing diagnostics to powering advanced research and operational use cases.
Innovating storage at the edge of medical AI
Moderated by Michael Stewart, managing partner at M12 (Microsoft's venture fund), the session featured insights from Roger Cummings, CEO of PEAK:AIO, and Greg Matson, head of products and marketing at Solidigm. The conversation explored how next-generation, high-capacity storage architectures are opening new doors for medical AI by delivering the speed, security, and scalability needed to handle massive datasets in clinical environments.
Crucially, both companies have been deeply involved with MONAI since its early days. Developed in collaboration with King's College London and others, MONAI is purpose-built for developing and deploying AI models in medical imaging. The open-source framework's toolset, tailored to the unique demands of healthcare, includes libraries and tools for DICOM support, 3D image processing, and model pre-training, enabling researchers and clinicians to build high-performance models for tasks like tumor segmentation and organ classification.
A key design goal of MONAI was to support on-premises deployment, allowing hospitals to maintain full control over sensitive patient data while leveraging standard GPU servers for training and inference. This ties the framework's performance closely to the data infrastructure beneath it, requiring fast, scalable storage systems to fully support the demands of real-time medical AI. That is where Solidigm and PEAK:AIO come into play: Solidigm brings high-density flash storage to the table, while PEAK:AIO focuses on storage systems purpose-built for AI workloads.
“We were very fortunate to be working early on with King's College London and Professor Sebastien Ourselin to develop MONAI,” Cummings explained. “Working with Ourselin, we developed the underlying infrastructure that allows researchers, doctors, and biologists in the life sciences to build on top of this framework very quickly.”
Meeting dual storage demands in healthcare AI
Matson pointed out that he is seeing a clear bifurcation in storage hardware, with different solutions optimized for specific stages of the AI data pipeline. For use cases like MONAI and comparable edge AI deployments, as well as scenarios involving feeding training clusters, ultra-high-capacity solid-state storage plays a critical role, as these environments are often space- and power-constrained, yet require local access to massive datasets.
For instance, MONAI was able to store more than two million full-body CT scans on a single node within a hospital's existing IT infrastructure. “Very space-constrained, power-constrained, and very high-capacity storage enabled some fairly remarkable results,” Matson said. This kind of efficiency is a game-changer for edge AI in healthcare, allowing institutions to run advanced AI models on premises without compromising performance, scalability, or data security.
In contrast, workloads involving real-time inference and active model training place very different demands on the system. These tasks require storage solutions that can deliver exceptionally high input/output operations per second (IOPS) to keep up with the data throughput needed by high-bandwidth memory (HBM) and to ensure GPUs remain fully utilized. PEAK:AIO's software-defined storage layer, combined with Solidigm's high-performance solid-state drives (SSDs), addresses both ends of this spectrum, delivering the capacity, efficiency, and speed required across the entire AI pipeline.
A software-defined layer for medical AI workloads at the edge
Cummings explained that PEAK:AIO's software-defined AI storage technology, when paired with Solidigm's high-performance SSDs, allows MONAI to read, write, and archive massive datasets at the speed medical AI demands. This combination accelerates model training and improves accuracy in medical imaging, all while operating within an open-source framework tailored to healthcare environments.
“We provide a software-defined layer that can be deployed on any commodity server, transforming it into a high-performance system for AI or HPC workloads,” Cummings said. “In edge environments, we take that same capability and scale it down to a single node, bringing inference closer to where the data lives.”
A key capability is how PEAK:AIO helps eliminate traditional memory bottlenecks by integrating memory more directly into the AI infrastructure. “We treat memory as part of the infrastructure itself, something that is often overlooked. Our solution scales not just storage, but also the memory workspace and the metadata associated with it,” Cummings said. This makes a significant difference for customers who cannot afford, in terms of either space or cost, to re-run large models repeatedly. By keeping memory-resident tokens alive and accessible, PEAK:AIO enables efficient, localized inference without constant recomputation.
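The principle Cummings describes, keeping already-computed state resident in memory so inference never pays for the same work twice, can be illustrated with a toy Python sketch. This is not PEAK:AIO's software; the names and the stand-in computation are purely illustrative, under the assumption that recomputing per-token state is the expensive step being avoided.

```python
# Toy sketch: a memory-resident cache avoids recomputing per-token state.
# Illustrative only -- compute_state() stands in for an expensive model step.

calls = 0  # counts how many times the expensive computation actually runs

def compute_state(token: str) -> int:
    """Stand-in for an expensive per-token computation."""
    global calls
    calls += 1
    return sum(ord(c) for c in token)

cache: dict[str, int] = {}  # the "memory-resident workspace"

def infer(tokens: list[str]) -> list[int]:
    """Serve cached states where possible; compute only unseen tokens."""
    results = []
    for tok in tokens:
        if tok not in cache:
            cache[tok] = compute_state(tok)
        results.append(cache[tok])
    return results

# First request computes every token; the repeat request is served
# entirely from the cache, so the call counter does not move.
infer(["ct", "scan", "slice"])
first_pass_calls = calls
infer(["ct", "scan", "slice"])
second_pass_calls = calls
```

After the first request `first_pass_calls` is 3; after the identical second request `second_pass_calls` is still 3, because every token was served from the memory-resident cache rather than recomputed.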
Bringing intelligence closer to the data
Cummings emphasized that enterprises will need to take a more strategic approach to managing AI workloads. “You can't be just a destination. You have to understand the workloads. We do some incredible technology with Solidigm and their infrastructure to be smarter about how that data is processed, starting with how to get performance out of a single node,” Cummings explained. “So with inference being such a large push, we're seeing generalists becoming more specialized. And we're now taking work that we've done on a single node and pushing it closer to the data to be more efficient. We want more intelligent data, right? The only way to do that is to get closer to that data.”
Some clear trends are emerging from large-scale AI deployments, particularly in newly built greenfield data centers. These facilities are designed with highly specialized hardware architectures that bring data as close as possible to the GPUs. To achieve this, they rely heavily on all solid-state storage, especially ultra-high-capacity SSDs, designed to deliver petabyte-scale capacity with the speed and accessibility needed to keep GPUs continuously fed with data at high throughput.
“Now that same technology is basically happening at a microcosm, at the edge, in the enterprise,” Cummings explained. “So it's becoming critical for buyers of AI systems to learn how to select their hardware and system vendor, even to make sure that if you want to get the most performance out of your system, you're running on all solid-state. This allows you to bring huge amounts of data, like the MONAI example, with its 15 million-plus images, into a single system. That enables incredible processing power, right there in a small system at the edge.”