Avicena is revolutionizing chip-to-chip interconnect technology with its LightBundle multi-Tbps solution, which utilizes microLEDs to eliminate bandwidth and proximity limitations while offering impressive energy efficiency.
- LightBundle architecture utilizes microLEDs to eliminate bandwidth and proximity limitations
- LightBundle interconnects offer lower power and latency, higher bandwidth density, and cost-effectiveness
- LightBundle technology is compatible with chiplet interfaces like UCIe, OpenHBI, and BoW
Avicena, a tech company based in Sunnyvale, CA, is shaking up chip-to-chip interconnect technology with its LightBundle multi-Tbps solution, which it is showcasing at the European Conference on Optical Communication (ECOC) 2023 in Glasgow, Scotland. The LightBundle architecture uses arrays of microLEDs to remove the bandwidth and proximity limitations that constrain processors, memory, and sensors, while also offering impressive energy efficiency.
According to Chris Pfistner, VP of Sales & Marketing at Avicena, high-bandwidth-density, low-power, low-latency interconnects between xPUs and HBM modules are becoming ever more important as generative AI continues to evolve. Avicena’s LightBundle interconnects have the potential to fundamentally change how processors connect to each other and to memory. Their inherent parallelism aligns well with the internal architecture of ICs, and with multi-terabit-per-second capacity and sub-pJ/bit efficiency they are positioned to enable the next era of AI innovation, paving the way for more capable models and a wide range of AI applications that will shape the future.
The surge in demand for compute and memory performance driven by artificial intelligence (AI) applications like ChatGPT has created an urgent need for higher-density, lower-power interconnects between GPUs and HBM modules. Today, GPUs and HBM modules must be co-packaged because of the limited reach of the electrical GPU-memory interconnect. Conventional optical interconnects based on VCSELs or silicon photonics extend that reach, but they face challenges in power consumption, bandwidth density, latency, and cost. Avicena’s microLED-based LightBundle interconnects, by contrast, promise lower power, lower latency, higher bandwidth density, and lower cost.
Avicena’s LightBundle technology is based on arrays of innovative GaN microLEDs that can be integrated directly onto high-performance CMOS ICs. Each microLED array is connected to a matching array of CMOS-compatible photodetectors (PDs) via a multi-core fiber cable. The company has already demonstrated microLEDs transmitting at over 10 Gbps per lane and a test ASIC running 32 lanes at less than 1 pJ/bit. It is now developing its first ASIC in a 16 nm finFET process with over 300 lanes and an aggregate bandwidth of over 1 Tbps bidirectional at 4 Gbps per lane. The ASIC, measuring less than 12 mm², contains the circuitry needed for optical transmission and reception, a high-speed parallel electrical interface, and various DFT/DFM functions.
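As a quick back-of-the-envelope check (not an official Avicena figure), the short Python sketch below combines the numbers quoted above — roughly 300 lanes, 4 Gbps per lane, and under 1 pJ/bit — to show the implied aggregate bandwidth and link power; the exact lane count and energy per bit used here are illustrative assumptions.

```python
# Back-of-the-envelope check using figures quoted in the article; the exact
# lane count and energy per bit are illustrative assumptions, not guarantees.

lanes = 300              # "over 300 lanes" on the first 16 nm finFET ASIC
gbps_per_lane = 4        # 4 Gbps per lane
energy_pj_per_bit = 1.0  # "less than 1 pJ/bit" shown on the 32-lane test ASIC

aggregate_gbps = lanes * gbps_per_lane        # 1200 Gbps, i.e. about 1.2 Tbps
aggregate_tbps = aggregate_gbps / 1000

# Power for one direction of the link at that energy per bit:
# 1 pJ/bit x 1.2e12 bit/s = 1.2 W
power_watts = energy_pj_per_bit * 1e-12 * aggregate_gbps * 1e9

print(f"Aggregate bandwidth: {aggregate_tbps:.1f} Tbps per direction")
print(f"Link power at {energy_pj_per_bit:.0f} pJ/bit: {power_watts:.1f} W")
```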
Looking ahead, Avicena plans to further enhance the LightBundle platform, targeting interconnects with bandwidth densities of multiple Tbps per mm² in advanced CMOS process nodes. The low power, high density, and low latency of LightBundle make it compatible with chiplet interfaces such as UCIe, OpenHBI, and BoW, and it can also relieve system architectures limited by existing compute interconnects such as PCIe/CXL and HBM/DDR/GDDR memory links. With Avicena’s LightBundle technology, the possibilities for AI innovation and improved system performance are wide open.
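For scale, a similarly rough sketch below compares the bandwidth density implied by the first ASIC (about 1.2 Tbps in under 12 mm²) with the multi-Tbps-per-mm² goal mentioned above; the specific 2 Tbps/mm² target used here is an illustrative assumption, not a figure from Avicena.

```python
# Illustrative bandwidth-density comparison based on the figures above; the
# 2 Tbps/mm^2 target is an assumption standing in for "multi-Tbps per mm^2".

first_asic_tbps = 1.2      # ~300 lanes x 4 Gbps on the first ASIC
first_asic_area_mm2 = 12   # "less than 12 mm^2"

current_density = first_asic_tbps / first_asic_area_mm2   # ~0.1 Tbps/mm^2
target_density = 2.0                                       # assumed target

print(f"First ASIC:  ~{current_density:.2f} Tbps/mm^2")
print(f"Target:       {target_density:.1f} Tbps/mm^2 "
      f"(~{target_density / current_density:.0f}x denser)")
```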
Technology Explained
chiplet: Chiplets are small, modular silicon dies that are combined with other components such as processors, memory, and storage to build a complete computing system. Splitting a design into chiplets makes production more efficient and cost-effective and enables more powerful, versatile systems. Chiplet-based designs are used in gaming PCs, high-end workstations, supercomputers, and hardware for artificial intelligence and machine learning.
GPU: GPU stands for Graphics Processing Unit, a specialized processor designed for graphics-intensive and highly parallel workloads. GPUs render images, video, and 3D graphics in gaming consoles, PCs, and mobile devices, and they are also used in medicine to build 3D models of organs and tissues, in the automotive industry for virtual prototyping, and in artificial intelligence to process large amounts of data and train complex models. Their ability to process large data sets quickly and efficiently makes them increasingly important across the computer industry.
Latency: Latency is the time it takes a computer system to respond to a request, and it directly affects the speed and efficiency of data processing in networks, storage systems, and other computer systems. Low latency is essential for applications that require fast response times, such as online gaming, streaming media, and real-time data processing, while high latency causes delays and poor performance. Systems reduce latency with techniques such as caching, load balancing, and parallel processing.
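As a minimal illustration of the caching technique mentioned above (unrelated to any specific product in this article), the sketch below uses Python's functools.lru_cache so that repeated requests skip a slow lookup and return almost immediately.

```python
# Minimal illustration of caching to reduce latency: repeated requests are
# served from an in-memory cache instead of a slow backing store.
import time
from functools import lru_cache

def slow_lookup(key: str) -> str:
    time.sleep(0.05)            # stand-in for a slow disk or network fetch
    return key.upper()

@lru_cache(maxsize=1024)
def cached_lookup(key: str) -> str:
    return slow_lookup(key)

start = time.perf_counter()
cached_lookup("avicena")        # first call pays the full lookup latency
first_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
cached_lookup("avicena")        # repeat call is answered from the cache
second_ms = (time.perf_counter() - start) * 1000

print(f"first call: {first_ms:.1f} ms, cached call: {second_ms:.3f} ms")
```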
PCIe: PCIe (Peripheral Component Interconnect Express) is a high-speed serial expansion bus standard for connecting components such as graphics cards, sound cards, network cards, and storage devices to a motherboard, and it is the most widely used expansion interface in desktop and laptop computers today. A PCIe link is built from one or more serial lanes (x1, x4, x8, x16), and each generation roughly doubles the per-lane data rate, giving far more bandwidth than the older parallel PCI standard. PCIe also underpins storage, networking, and communications applications, and its role in modern computing is only expected to grow.
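To make that bandwidth scaling concrete, the sketch below computes the approximate per-direction throughput of a PCIe x16 link for several generations from the standard per-lane transfer rates and encoding overheads; it is an illustrative calculation, not tied to any product discussed above.

```python
# Approximate per-direction bandwidth of a PCIe x16 link by generation,
# using the standard per-lane transfer rates and encoding overheads.

generations = {
    # generation: (GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

lanes = 16
for gen, (rate_gt_s, efficiency) in generations.items():
    gbytes_per_s = rate_gt_s * efficiency * lanes / 8   # GB/s per direction
    print(f"PCIe Gen{gen} x{lanes}: ~{gbytes_per_s:.1f} GB/s")
```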