AI Accelerator IP

AI Processing in Memory (APiM) Architecture
Matrix Processing Engine (MPE)
Supporting Models & Tools

SILICON PROVEN

Lower implementation risks with market-ready chips

DEEP EXPERTISE

In-house know-how to bring your vision to reality

DEVELOPMENT RESOURCES

Suite of models and framework tools that accelerate time-to-market

GTI Licensing Advantages

Flexible Network Options

• VGG
• ResNet
• MobileNet
• Inception

Benefits

• Process portable
• Easy SoC integration
• Support different market segments
• Differentiate & Innovate
• Scale

Framework Support

• TensorFlow
• Caffe

Target Industries

• Automotive
• Surveillance Cameras
• Smart Phones

AI Solution Advantages

GTI’s architecture features ultra-small, low-power AI cores that enable AI Processing in Memory (APiM), with the cores configured in a proprietary Matrix Processing Engine (MPE™) architecture. The AI cores accelerate convolutional neural networks (CNNs) built on AI frameworks such as Caffe and TensorFlow. GTI’s accelerator chips combine over 28,000 cores, with the Lightspeeur® 2803 delivering 16.8 TOPS in under a watt and the Lightspeeur® 2801 providing 2.8 TOPS while using only 300 mW. With the MPE architecture, development and software-optimization tools, and technical support, SoC designers can integrate AI inference acceleration at a minimal cost in die area and with just milliwatts of power consumption.
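The efficiency claim behind these figures can be checked with a quick back-of-envelope calculation. The sketch below is illustrative only, using the throughput and power numbers quoted above; it assumes 1 W as the upper bound implied by "under a watt" for the 2803, and the chip names and dictionary layout are just for this example.

```python
# Power-efficiency comparison using the figures quoted in the text.
# Assumption: the Lightspeeur 2803's power is taken as 1 W ("under a watt"),
# so its TOPS/W figure is a conservative lower bound.

chips = {
    "Lightspeeur 2801": {"tops": 2.8,  "watts": 0.3},  # 2.8 TOPS at 300 mW
    "Lightspeeur 2803": {"tops": 16.8, "watts": 1.0},  # 16.8 TOPS in under 1 W
}

def tops_per_watt(tops: float, watts: float) -> float:
    """Energy efficiency in TOPS per watt."""
    return tops / watts

for name, spec in chips.items():
    print(f"{name}: {tops_per_watt(spec['tops'], spec['watts']):.1f} TOPS/W")
```

On these numbers the 2801 works out to roughly 9.3 TOPS/W and the 2803 to at least 16.8 TOPS/W, which is what supports the "AI inference in milliwatts" positioning.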

Interested in learning more about our IP? Contact us.
