Excerpt from Forbes.com. Original article can be seen here: https://www.forbes.com/sites/tomcoughlin/2019/01/18/ai-and-storage-at-the-2019-ces/#73b3648a21de

Jan 18, 2019 | By Tom Coughlin

Gyrfalcon is a startup company that recently introduced an AI chip platform designed for edge AI as well as cloud applications; it is very power efficient while delivering high performance. The neural network accelerator uses a 2D Matrix Processing Engine (MPE) with AI Processing in Memory (APiM). Performing matrix operations in parallel reduces power consumption and increases performance.

The company’s Lightspeeur 2802M, fabricated with 22 nm lithography, uses 40 MB of non-volatile memory (MRAM), rather than volatile and real-estate-hungry SRAM, as memory to support AI processing. This is the first MRAM product from the foundry company TSMC. Adding the MRAM memory is said to provide unparalleled speed, accuracy, and performance to accelerate AI inferencing. Various models can be used for object detection, voice and facial recognition, and voice commands.

The image below shows 16 Gyrfalcon 2801 chips, made with 28 nm lithography, on a GAINBOARD 2803 PCIe board, which shows very favorable performance versus NVIDIA and Graphcore AI processors. An interesting early product appears to be PV-powered connected streetlights for Japan that should be in use by the 2020 Olympics.

Gyrfalcon MRAM-enabled AI chip versus NVIDIA and Graphcore. PHOTO BY TOM COUGHLIN