Optical Circuit Switches Show Promise for More Energy-Efficient AI Data Centers

In today’s rapidly evolving tech landscape, the massive power and bandwidth demands of moving data between GPUs to train ever-larger large language models present a significant challenge for cloud computing service providers. While electrical packet switches (EPS) are approaching their technical limits, optical circuit switches (OCS) offer a promising alternative.

Unlike traditional EPS – or even EPS with co-packaged optics – OCS uses all-optical connections to link GPUs via switched ports and optical transceivers. This architecture enables significantly higher bandwidth over kilometer distances while consuming much less power. In AI clusters, OCS can form an all-optical network, improving efficiency and scalability.

Applied Ventures co-led the Series A funding for startup Salience Labs because of its innovative OCS based on semiconductor optical amplifier (SOA) technology, which delivers exceptionally low latency and insertion loss. Salience Labs offers its SOA-based OCS photonic integrated circuit in two configurations: a higher-radix switch for high-performance computing and a lower-radix version that minimizes the impact radius of a failure in AI data centers. This flexibility allows Salience Labs to tailor its solutions to different performance and cost requirements for a variety of customers, from hyperscalers and GPU vendors to high-speed traders at financial institutions.

Beyond addressing the technical challenges of building high-speed, long-reach networks, the semiconductor industry must also focus on reducing the carbon footprint and tackling the unsustainable power consumption of AI data centers. According to the Energy Information Administration, U.S. data centers will consume 6.6 percent of the country’s total electricity by 2028, more than double the 3 percent recorded in 2024[1]. Total power consumption in AI data centers includes electricity used by computing elements such as GPUs and by networking gear such as switches and transceivers.

To slow the rapid growth of energy consumption, both established companies and startups are innovating in chip design and new system architecture. For example:

  • Google’s TPU design goal is to deliver 10 times the total cost of ownership efficiency compared to GPUs by using customized accelerators for specific AI workloads[2].
  • Lumentum estimates the power used in networking data between GPUs and memories per training run will increase from 21.5MW for GPT-4 to 122MW for GPT-5[3]. Using energy-efficient optical (EEO) interfaces and OCS can reduce total networking power during GPT-5 training by 79 percent, bringing it down to a level similar to GPT-4’s.
  • Arista Networks estimates that EEO interfaces, which include co-packaged optics and linear-drive pluggable optics, can save up to 20W per module at 1,600Gbps[4].
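The Lumentum estimate above can be sanity-checked with simple arithmetic. This is a back-of-the-envelope sketch (not Lumentum’s methodology), using only the figures cited in the list:

```python
# Back-of-the-envelope check of the networking power figures cited above.
gpt4_networking_mw = 21.5   # cited networking power per GPT-4 training run
gpt5_networking_mw = 122.0  # cited networking power per GPT-5 training run
savings_fraction = 0.79     # cited reduction from EEO interfaces + OCS

# GPT-5 networking power after the claimed 79 percent reduction
gpt5_with_ocs_mw = gpt5_networking_mw * (1 - savings_fraction)

print(f"GPT-5 networking power with EEO + OCS: {gpt5_with_ocs_mw:.2f} MW")
# → 25.62 MW, close to the 21.5 MW cited for a GPT-4 training run
```

The result (about 25.6MW) is indeed in the same range as GPT-4’s 21.5MW, which is what the “similar to GPT-4” claim refers to.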

OCS has the potential to replace EPS in meeting the high-bandwidth, long-reach requirements of scaling up and scaling out tens to hundreds of GPUs, while allowing these computing elements to act as one supercomputer.

Applied Ventures is actively investing in photonics startups to foster a vibrant ecosystem to build high-speed, energy-efficient networks for next-generation data centers. Combining these investments with internal R&D, Applied Materials is enabling scalable AI computing and supporting a sustainable future with technology innovation.
