GTI’s architecture features ultra-small, low-power AI cores that enable AI Processing in Memory (APiM), with the cores configured in a proprietary Matrix Processing Engine (MPE™) architecture. The AI cores accelerate convolutional neural networks (CNNs) built on AI frameworks such as Caffe and TensorFlow. GTI’s accelerator chips combine over 28,000 cores: the Lightspeeur® 2803 delivers 16.8 TOPS in under a watt, while the Lightspeeur® 2801 provides 2.8 TOPS at only 300 mW. Because GTI supplies the MPE architecture together with development tools, software optimization tools, and technical support, SoC designers can integrate AI inference acceleration at minimal cost in die area and with power consumption of just milliwatts.
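As a rough illustration of what those figures imply for efficiency, the sketch below computes TOPS per watt from the numbers quoted above. The 2803’s power is stated only as “under a watt,” so 1.0 W is assumed here as an upper bound; the 2801 figure uses the stated 300 mW.

```python
# Illustrative efficiency arithmetic based on the figures quoted above.
# Assumption: 1.0 W is used as an upper bound for the 2803, since its
# power draw is stated only as "under a watt".

def tops_per_watt(tops: float, watts: float) -> float:
    """Return compute efficiency in TOPS per watt."""
    return tops / watts

# Lightspeeur 2803: 16.8 TOPS in under 1 W -> at least ~16.8 TOPS/W
print(f"2803: > {tops_per_watt(16.8, 1.0):.1f} TOPS/W")

# Lightspeeur 2801: 2.8 TOPS at 300 mW -> ~9.3 TOPS/W
print(f"2801: ~ {tops_per_watt(2.8, 0.3):.1f} TOPS/W")
```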