AI Accelerator IP

AI Processing in Memory (APiM) Architecture
Matrix Processing Engine (MPE)
Supporting Models & Tools


Lower implementation risks with market-ready chips


In-house know-how to bring your vision to reality


Suite of models and framework tools that accelerate time-to-market

GTI Licensing Advantages

Flexible Network Options

• ResNet
• MobileNet
• Inception


• Process portable
• Easy SoC integration
• Support different market segments
• Differentiate & Innovate
• Scale

Framework Support

• TensorFlow
• Caffe

Target Industries

• Surveillance Cameras
• Smart Phones

AI Solution Advantages

GTI’s architecture features ultra-small, low-power AI cores arranged in a proprietary Matrix Processing Engine (MPE™) architecture to enable AI Processing in Memory (APiM). The AI cores accelerate convolutional neural network (CNN) inference for frameworks such as Caffe and TensorFlow. GTI’s accelerator chips combine over 28,000 cores, with the Lightspeeur® 2803 delivering 16.8 TOPS in under a watt and the Lightspeeur® 2801 providing 2.8 TOPS at only 300 mW. By offering the MPE architecture along with development and software-optimization tools and technical support, GTI enables SoC designers to integrate AI inference acceleration at minimal cost in die area and just milliwatts of power consumption.
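The throughput and power figures quoted above can be turned into a rough efficiency comparison. The sketch below is a back-of-envelope calculation only, using the stated numbers and treating "under a watt" conservatively as 1 W for the 2803:

```python
# Back-of-envelope TOPS/W comparison from the figures quoted above.
# Assumption: "under a watt" for the 2803 is taken as a 1 W upper bound,
# so its efficiency figure is a conservative lower bound.
chips = {
    "Lightspeeur 2801": {"tops": 2.8, "watts": 0.3},   # 2.8 TOPS at 300 mW
    "Lightspeeur 2803": {"tops": 16.8, "watts": 1.0},  # 16.8 TOPS in under 1 W
}

for name, spec in chips.items():
    efficiency = spec["tops"] / spec["watts"]  # TOPS per watt
    print(f"{name}: {efficiency:.1f} TOPS/W")
```

On these assumptions the 2801 works out to roughly 9.3 TOPS/W and the 2803 to at least 16.8 TOPS/W.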