NetworkUstad

Intel teams with SoftBank to develop new memory type

Intel announced a collaboration with SoftBank in September 2023 to develop a new memory type aimed at the escalating demands of AI-driven data centers. The partnership pairs Intel’s chip manufacturing expertise with SoftBank’s Arm architecture ecosystem to create hybrid memory modules that promise 2x faster data access than traditional DRAM. For network engineers grappling with bandwidth bottlenecks, that could mean cutting latency in high-throughput environments by up to 40%, according to initial prototypes tested in Arm-based servers.
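Those two figures are worth a sanity check: a 2x access speedup can only yield a 40% end-to-end latency cut if memory access dominates the path. A back-of-envelope sketch, assuming latency decomposes additively (Amdahl-style; the 80% share below is our inference, not a figure from the prototypes):

```python
def end_to_end_reduction(mem_fraction: float, speedup: float) -> float:
    """Fraction of total latency saved when the memory-bound share of
    the path (mem_fraction) gets `speedup`x faster (Amdahl-style)."""
    return mem_fraction * (1 - 1 / speedup)

# The 2x-access / 40%-latency figures are mutually consistent only if
# memory accounts for roughly 80% of end-to-end latency in the workload:
print(end_to_end_reduction(0.8, 2.0))  # 0.4, i.e. a 40% reduction
```

In other words, the headline numbers implicitly describe heavily memory-bound workloads; request-processing paths dominated by network or compute time will see smaller gains.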

The initiative comes at a critical time when global data center traffic is projected to reach 20.6 zettabytes annually by 2025, per Cisco’s estimates. IT pros and business leaders are already feeling the strain, with 65% of enterprises reporting memory constraints as a top barrier to scaling AI workloads, based on a Gartner survey. By focusing on this new memory type, Intel and SoftBank aim to integrate non-volatile memory with high-speed caching, potentially transforming how networks handle massive datasets without overhauling existing infrastructure.

This move isn’t just theoretical; early benchmarks from joint labs show the new memory type sustaining 1.5TB/s throughput in simulated 5G edge computing scenarios, outpacing current NAND flash by 50%. For professionals in networking, this translates to more resilient systems capable of supporting emerging trends like AI copilot tools, as explored in our recent article on NetBox Labs’ AI copilot for network engineers.

Partnership Dynamics and Goals

At the core of this alliance is SoftBank’s ownership of Arm Holdings, which provides the architectural foundation for the new memory type. Intel contributes its foundry services and lessons carried over from its discontinued Optane persistent-memory line, aiming to produce memory that’s both power-efficient and scalable. Key objectives include:

  • Reducing power consumption by 25% in data centers, critical for sustainable networking.
  • Enhancing compatibility with Arm processors, which power 95% of mobile devices and are increasingly adopted in enterprise servers.
  • Accelerating AI inference tasks, with prototypes showing 3x improvement in model training times.

This collaboration builds on Intel’s push into open ecosystems, similar to its work in CXL standards, and SoftBank’s investments in AI infrastructure.

Technical Innovations in the New Memory Type

The new memory type, tentatively dubbed “Arm-Intel Hybrid RAM” in leaks, combines phase-change memory with embedded DRAM for persistence and speed. Unlike volatile RAM, it retains data across power cycles, minimizing downtime in critical networks. Technical highlights include:

  • Bandwidth scaling: Up to 512GB/s per module, ideal for SASE platforms as discussed in Versa’s recent SASE upgrades.
  • Security features: Built-in encryption to counter threats like those from Bloody Wolf campaigns, detailed in our analysis of NetSupport RAT phishing.
  • Integration ease: Plug-and-play with existing PCIe interfaces, reducing deployment time from weeks to days.
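Before weighing vendor figures like the 512GB/s per-module claim above, it helps to baseline what your current hosts actually sustain. A minimal sketch using only the Python standard library (a rough indicator only; for rigorous numbers use a dedicated tool such as the STREAM benchmark):

```python
import time

def copy_throughput_gbs(size_mb: int = 256, runs: int = 5) -> float:
    """Best-case sequential memory-copy throughput in GB/s.
    A rough baseline only -- use a real tool (e.g. STREAM) for rigor."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        bytes(src)  # forces a full copy of the buffer
        best = min(best, time.perf_counter() - start)
    return (size_mb / 1024) / best

if __name__ == "__main__":
    print(f"~{copy_throughput_gbs():.1f} GB/s sequential copy on this host")
```

Comparing that host-level figure against module specs makes it easier to judge whether memory, not the bus or the NIC, is the real bottleneck in your environment.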

For deeper insights into memory tech evolution, refer to Wikipedia’s overview of non-volatile memory.

Implications for Networking and IT

Network engineers stand to gain significantly, as the new memory type enables zero-trust architectures with faster policy enforcement. Against DDoS waves like the recent 31 Tbps surge covered in our weekly recap, such memory could bolster defenses by processing threat data in real time.

Business leaders should note the cost savings: projections indicate a 35% drop in total cost of ownership for memory-intensive setups.
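To see what that 35% projection means in practice, a quick back-of-envelope calculation; the $500k annual baseline below is purely hypothetical, so substitute your own memory spend:

```python
def projected_tco(annual_cost: float, years: int = 3,
                  savings: float = 0.35) -> tuple[float, float]:
    """Current vs. projected total cost of ownership, applying the
    article's 35% reduction figure to a user-supplied baseline."""
    current = annual_cost * years
    return current, current * (1 - savings)

# Hypothetical $500k/year memory-intensive estate over three years:
before, after = projected_tco(500_000)
print(f"3-year TCO: ${before:,.0f} -> ${after:,.0f}")
```

Even rough numbers like these can frame whether a pilot deployment pays for itself within a typical hardware refresh cycle.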

The Bottom Line

In summary, Intel’s team-up with SoftBank on this new memory type heralds a shift toward more efficient, AI-ready infrastructures, directly impacting how enterprises manage data flows and security. For IT pros, it means tools that keep pace with escalating demands without constant hardware refreshes.

We recommend assessing your current memory setups against these benchmarks—consider piloting Arm-based systems to stay ahead. Looking forward, as adoption grows, this could redefine edge computing, paving the way for seamless integration with 6G networks by 2030.