Israeli AI startup NeuReality closes $20M in funding to democratize AI access amidst GPU scarcity

The race to dominate the AI space heats up as NeuReality, an Israeli startup, secures $20 million in funding from global backers. Investors are flocking to promising generative AI ventures, eager to challenge established players in the field, and NeuReality now stands at the forefront of this funding wave, poised to reshape AI inference and data center infrastructure.

Based in San Jose, CA, and Tel Aviv, Israel, NeuReality announces its latest funding round, led by the European Innovation Council (EIC) Fund alongside Varana Capital, Cleveland Avenue, XT Hi-Tech, and OurCrowd. Notable participation comes from Cardumen Capital, Glory Ventures, and Alumni Venture Group, underscoring investor confidence in the startup’s vision.

The fresh capital will accelerate deployment of NeuReality's NR1™ AI Inference Solution, carrying the company beyond early adopters into broader markets and regions. With this funding, NeuReality aims to speed up its growth trajectory and meet the burgeoning demand for generative AI applications.

This round brings NeuReality's total funding to $70 million, following the successful development of its 7nm AI inference server-on-a-chip, the NR1 NAPU™, in collaboration with TSMC. This technology forms the cornerstone of NeuReality's AI Inference Solution, which the company says offers markedly better efficiency than traditional GPU-centric architectures.

Founded in 2019 by a seasoned team of system engineers—Moshe Tanach (CEO), Yossi Kasus, and Tzvika Shmueli—NeuReality specializes in purpose-built AI inference system architecture, hardware, and software. The company’s innovative approach addresses the scalability challenges of current and future AI applications.

According to Naveen Rao, VP of Generative AI at Databricks and a NeuReality board member, NeuReality’s system-level optimization represents a crucial milestone in democratizing access to compute for generative AI. Rao emphasizes the urgency of removing market barriers, praising NeuReality’s innovative architecture as a game-changer in the industry.

Enterprises grappling with AI inference complexity and scalability issues are the target market for NeuReality's technology. Traditional CPU- and GPU-based systems struggle to keep up with the demands of live AI data processing, resulting in inefficiencies and bottlenecks that NeuReality's architecture is designed to eliminate.

Pointing to the paltry 30% to 40% utilization rate of today's AI accelerators, NeuReality CEO Moshe Tanach said:

“Our disruptive AI Inference technology is unbound by conventional CPUs, GPUs and NICs. We didn’t try to just improve an already flawed system. Instead, we unpacked and redefined the ideal AI Inference system from top to bottom and end to end, to deliver breakthrough performance, cost savings and energy efficiency.”

Tanach dismisses the notion of simply pouring more resources into existing architectures, likening it to installing a faster engine in a congested vehicle. Instead, NeuReality provides a streamlined pathway for AI pipelines, efficiently routing tasks to purpose-built AI devices while conserving resources.

NeuReality's NR1-M™ and NR1-S™ systems integrate directly into server racks and, according to the company, achieve 100% AI accelerator utilization. By eliminating the host CPU requirement and connecting directly to Ethernet, these systems efficiently manage AI queries from vast data pipelines. With support from industry players such as AMD, IBM, and Lenovo, NeuReality's products are positioned to reshape AI inference infrastructure.

Since its Series A funding in 2022, NeuReality has made significant strides in AI deployment, collaborating with cloud service providers and enterprise customers across various sectors. The latest funding round positions NeuReality for broader deployment of AI inference solutions, catering to the escalating demand for both conventional and generative AI applications.