Predictive Memory—Explained
Server Memory Is Expensive
Modern workloads such as AI and large-scale databases are putting tremendous pressure on compute resources and driving up infrastructure costs. Server memory (DRAM) is one of the largest of those cost components.
Memory Is Often Poorly Utilized
Studies from major cloud providers have shown that, for many workloads, memory utilization can fall to 50% or lower. Half or more of the memory isn't being actively used and can be considered cold. This means that organizations are often deploying a much larger DRAM footprint than they truly need, resulting in potentially millions of dollars in wasted spend.
Enter MEXT
MEXT is tackling the memory utilization problem with an unprecedented approach: transparent, AI-powered memory tiering. MEXT begins by offloading cold memory pages from DRAM into Flash, a tier that costs roughly 20x less.
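MEXT's tiering is transparent to the application and its internals aren't public, so the following is only a minimal Python sketch of the cold-page demotion idea. The names (TieredStore, COLD_AFTER_SECONDS) and the simple last-access-time rule are illustrative assumptions, not MEXT's actual policy.

```python
import time

# Illustrative threshold only: pages untouched this long are treated as cold.
COLD_AFTER_SECONDS = 300

class TieredStore:
    """Toy model of two memory tiers: a fast DRAM dict and a cheaper Flash dict."""

    def __init__(self):
        self.dram = {}         # page_id -> data (fast, expensive tier)
        self.flash = {}        # page_id -> data (slower, ~20x lower-cost tier)
        self.last_access = {}  # page_id -> timestamp of last read or write

    def write(self, page_id, data):
        self.dram[page_id] = data
        self.last_access[page_id] = time.time()

    def read(self, page_id):
        if page_id not in self.dram:
            # Page was demoted earlier; pull it back from Flash on demand.
            self.dram[page_id] = self.flash.pop(page_id)
        self.last_access[page_id] = time.time()
        return self.dram[page_id]

    def demote_cold_pages(self, now=None):
        """Move pages that haven't been touched recently from DRAM to Flash."""
        now = now or time.time()
        for page_id in list(self.dram):
            if now - self.last_access[page_id] > COLD_AFTER_SECONDS:
                self.flash[page_id] = self.dram.pop(page_id)
```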
Continuous AI-Powered Prediction
The MEXT AI engine continually predicts which memory pages in Flash the application will need next and transparently moves them back to DRAM before they are requested. This keeps application performance intact.
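MEXT's prediction engine is proprietary, so as a stand-in the sketch below extends the TieredStore above with a toy first-order "next page" predictor, just to make the promote-before-access flow visible. The predictor and the access helper are hypothetical, not MEXT's algorithm.

```python
from collections import defaultdict, Counter

class NextPagePredictor:
    """Toy predictor: learns which page tends to follow which, then suggests
    pages to promote from Flash before the application asks for them."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # page -> counts of next pages
        self.prev_page = None

    def observe(self, page_id):
        if self.prev_page is not None:
            self.transitions[self.prev_page][page_id] += 1
        self.prev_page = page_id

    def predict_next(self, page_id):
        follows = self.transitions.get(page_id)
        if not follows:
            return None
        return follows.most_common(1)[0][0]

def access(store, predictor, page_id):
    """Read a page, then proactively promote the page predicted to come next."""
    data = store.read(page_id)
    predictor.observe(page_id)
    likely_next = predictor.predict_next(page_id)
    if likely_next in store.flash:
        # Promote before it's requested, so the next read hits DRAM.
        store.dram[likely_next] = store.flash.pop(likely_next)
    return data
```

In this toy version, each call to access() both serves the current read and pre-stages the most likely next page in DRAM, mimicking (in greatly simplified form) how predictive promotion hides Flash latency from the application.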
Achieve 40% Lower Costs
As a result, customers can run applications within a much smaller DRAM footprint, yielding up to 40% lower memory costs.
Expand Memory Capacity
For customers at the other end of the spectrum, whose applications are running out of memory, MEXT can cost-effectively create very large-memory systems.
Get the Latest
Sign up to receive the latest news about MEXT.