Monday, September 25, 2023

Lightmatter’s photonic AI hardware is set to shine with $154 million in new funding


Booting up photonic computing

Lightmatter is making a big splash in the burgeoning AI computation market with a hardware-software combination that it believes will take the industry to the next level — and save a lot of electricity, too.

Lightmatter’s chips use light itself to carry out computations such as matrix-vector products. This math is at the heart of much AI work and is currently performed by GPUs and TPUs that specialize in it but rely on traditional silicon gates and transistors.
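To make the point concrete, here is what that core operation looks like in ordinary code. This is a minimal illustration of a matrix-vector product standing in for one neural-network layer; the matrix, vector and sizes are made up for the example and have nothing to do with Lightmatter's hardware.

```python
import numpy as np

# A single neural-network layer is, at its core, a matrix-vector product:
# a weight matrix W maps an input activation vector x to an output vector.
# Shapes and values here are purely illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # 4 outputs, 3 inputs
x = rng.standard_normal(3)        # input activations

# This single line is the operation that GPUs, TPUs -- and photonic
# accelerators -- are built to perform billions of times per second.
y = W @ x
assert y.shape == (4,)
```

Stacking many such products (plus simple nonlinearities between them) is essentially all a deep network does at inference time, which is why accelerating this one operation matters so much.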

The problem is that we are approaching the limits of transistor density, and thus of speed for a given wattage or chip size. Progress is still being made, but at great cost and at the edges of classical physics. The supercomputers that power the training of models like GPT-4 are huge, consume enormous amounts of power and produce a lot of waste heat.

“The largest companies in the world are running into a wall of energy power and experiencing huge challenges with the scalability of AI. Traditional chips are pushing the limits of what’s possible for cooling, and data centers are producing an ever-increasing energy footprint. Advances in AI will slow significantly unless we deploy a new solution in data centers,” said Nick Harris, CEO and founder of Lightmatter.

Some have predicted that training a single large language model can use more energy than 100 US households consume in a year. There are also estimates that 10% to 20% of the world’s total energy will go to AI inference by the end of the decade unless new computational paradigms emerge.

Lightmatter, of course, intends to be one of those new paradigms. The approach is, at least potentially, faster and more efficient, using arrays of microscopic optical waveguides to essentially make light perform logic operations just by passing through them: a sort of analog-digital hybrid. Because the waveguides are passive, the main power consumption is creating the light itself, then reading and processing the output.

A very interesting aspect of this form of optical computing is that you can increase the power of the chip simply by using more than one color at a time. Blue does one operation while red does the other – although in practice it’s more like a wavelength of 800 nanometers for one, 820 for the other. Of course, it’s not trivial to do this, but these “virtual chips” can greatly increase the amount of computation performed on the array. Twice the colors, twice the power.
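The multiplication-by-color idea can be sketched in a few lines. In this toy model the passive waveguide array is represented by a single matrix, and each wavelength carries its own independent input through that same hardware at the same time; the wavelengths and sizes are illustrative, not Lightmatter specifications.

```python
import numpy as np

# Sketch of wavelength-division multiplexing on a photonic array: the
# same passive waveguide mesh (modeled here as the matrix W) acts on
# every wavelength independently, so N colors yield N matrix-vector
# products in a single pass through one physical device.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))        # one physical waveguide array

inputs = {
    800: rng.standard_normal(4),       # signal encoded at ~800 nm
    820: rng.standard_normal(4),       # second signal at ~820 nm
}

# Each color traverses the same hardware simultaneously; mathematically
# that is just one independent product per wavelength.
outputs = {nm: W @ x for nm, x in inputs.items()}

# Twice the colors, twice the work from the same chip.
assert len(outputs) == len(inputs)
```

The "virtual chips" in the article correspond to the keys of that dictionary: adding a wavelength adds a parallel computation without adding any silicon (or, here, any glass).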

Harris founded the company on optical computing work he and his team did at MIT (which licenses the relevant patents to them) and raised $11 million in 2018. One investor said at the time that “this is not a science project,” but Harris admitted in 2021 that while they knew the technology should work “in principle,” an awful lot remained to be done to make it operational. Fortunately, he told me that in the context of investors putting another $80 million into the company.

Now, Lightmatter has raised a $154 million Series C round and is preparing for its actual debut. It has several pilots running its full stack: Envise (computing hardware), Passage (interconnect, critical for large computing operations) and Idiom, a software platform that Harris says should let machine learning developers adapt quickly.

A captive Lightmatter Envise unit. Image Credits: Lightmatter

“We built a software stack that integrates seamlessly with PyTorch and TensorFlow. The workflow for machine learning developers is the same from there: we take the neural networks built into these industry standard applications and import our libraries so that all code runs on Envise,” he explains.

The company declined to make any specific claims about speedups or efficiency, and because its architecture and computing method are so different, it is hard to compare apples to apples. But we are definitely talking about an order of magnitude, not a measly 10% or 15%. The interconnect is similarly upgraded, since it is useless to have that level of processing isolated on one board.

Of course, this isn’t the kind of general-purpose chip you might use in your laptop; it is very specific to this task. But it is the lack of task-specific hardware at this scale that seems to be holding back AI development – although “holding back” is the wrong term, since the field moves at great speed. That speed just comes at enormous expense and unwieldiness.

The pilots are in beta and mass production is planned for 2024, after which they should presumably have enough feedback and maturity to deploy in data centers.

Funding for this round came from SIP Global, Fidelity Management & Research Company, Viking Global Investors, GV, HPE Pathfinder and existing investors.

Jacky (https://whatsnew2day.com/)