GSI Technology is positioning its associative processing unit, or APU, as a potential alternative to traditional GPUs for artificial intelligence processing. The approach moves computation directly into memory, a shift that could improve speed and efficiency across AI workloads. The concept was examined in a new Cornell University study, published through ACM and presented at the MICRO ’25 conference, which analyzed how GSI’s Gemini-I APU performed against conventional CPUs and GPUs, including Nvidia’s A6000, on retrieval-augmented generation (RAG) workloads.
The Cornell team tested datasets ranging from 10 GB to 200 GB to simulate realistic AI inference scenarios. The results indicated that by embedding computation within static RAM (SRAM), the APU can sharply reduce the back-and-forth data transfer between processor and memory, one of the biggest contributors to power consumption and latency in GPU-based architectures. This architectural difference allowed the APU to deliver throughput comparable to high-end GPUs while consuming dramatically less energy. According to GSI, the APU used up to 98 percent less energy than a standard GPU and completed retrieval operations up to 80 percent faster than high-end CPUs. These results highlight its potential for edge applications such as drones, robotics, IoT systems, and defense environments, where energy efficiency and thermal constraints are critical.
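To make the data-movement argument concrete, the sketch below shows a plain NumPy version of the retrieval step at the heart of a RAG pipeline. It is only an illustration of the general workload under assumed corpus sizes, not GSI's software stack or the benchmark code used in the Cornell study; the function name and dimensions are hypothetical.

```python
# Illustrative sketch only: a conventional, processor-side version of RAG retrieval.
# Every query forces the full embedding matrix to stream out of main memory,
# which is exactly the data movement that compute-in-SRAM designs aim to avoid.
import numpy as np

def retrieve_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query.

    corpus has shape (num_docs, dim); query has shape (dim,). The dot product
    touches every byte of the corpus, so on a CPU or GPU throughput is bounded
    by memory bandwidth rather than arithmetic.
    """
    scores = corpus @ query                  # one full pass over all embeddings
    return np.argpartition(scores, -k)[-k:]  # top-k without a full sort

# Hypothetical sizes chosen purely for illustration.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((100_000, 768), dtype=np.float32)
query = rng.standard_normal(768, dtype=np.float32)
print(retrieve_top_k(query, corpus))
```

In an in-memory architecture, the comparison against the corpus is performed inside the memory array itself, so only the small result set (the top-k indices and scores) has to move, which is where the reported energy and latency savings come from.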
GSI’s compute-in-memory technology has been under development for several years, but this independent academic validation provides new data points for the broader AI hardware community. While the technology promises major efficiency gains, experts note that it faces challenges in scaling to compete with the well-established GPU ecosystem. GPUs from vendors like Nvidia benefit from mature software frameworks, developer tools, and deep integration with AI platforms such as TensorFlow and PyTorch. In contrast, compute-in-memory devices still require extensive optimization work, and programming environments are not yet standardized, which could delay adoption in large-scale data centers and enterprise settings.
GSI Technology, however, remains confident about the scalability and future of its architecture. The company has already introduced a next-generation model, Gemini-II, which it claims delivers ten times higher throughput and lower latency compared to the first generation. In parallel, GSI is developing another design, known as Plato, aimed at embedded and edge systems requiring even faster compute performance under strict power budgets. Lee-Lean Shu, Chairman and Chief Executive Officer of GSI Technology, said that Cornell’s findings validate the company’s long-standing vision for compute-in-memory. He emphasized that the APU delivers GPU-class performance at a fraction of the power cost, making it an attractive choice for memory-intensive AI inference workloads. Shu added that Gemini-II’s silicon demonstrates roughly ten times faster throughput and reduced latency, positioning the technology for a growing share of the global AI inference market, estimated at over $100 billion.
With further refinement and ecosystem development, compute-in-memory devices like the APU could play a meaningful role in reshaping how AI workloads are processed, balancing high performance with efficiency across emerging applications in both edge and enterprise computing.