Janux GS31 AI Server
Cost-Performance Optimized Edge AI Inference Server
The Janux GS31 AI Inference Server was tailor-made to meet the future challenges of mass deployment of machine learning applications at the edge, including energy consumption, cost-effectiveness, and server real estate. This powerful server foundation allows for accelerated and cost-effective scaling of AI inference.
Supporting ultra-low-latency transcoding and video analytics of up to 128 channels of 1080p/60Hz video, Janux GS31 is well suited for monitoring smart cities and infrastructure, intelligent enterprise/industrial video surveillance, object detection, recognition & classification, deep learning inference, smart visual analysis, and much more.
Janux GS31 packs the ultimate punch with the CEx7 LX2160A COM Express type 7 module at its center, harnessing NXP’s Layerscape LX2160A 16-core Arm Cortex-A72 processor. The server includes up to 4 x Snowball modules, each featuring 8 x NXP i.MX8M processors that decode video for 32 x Gyrfalcon Lightspeeur® SPR2803 AI accelerators – bringing the total AI inference power to 128 acceleration chips.
Gyrfalcon’s Lightspeeur 2803 uses 100% proprietary and patented technologies to accelerate CNN processing at extremely high speeds, while consuming very little power.
GTI’s Matrix Processing Engine (MPE™) architecture is a multi-dimensional processing array of physical matrices of digital multiply-accumulate (MAC) units that computes the series of matrix operations of a convolutional neural network. The scalable matrix design allows each engine to communicate directly with adjacent engines, optimizing and accelerating data flow.
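As a rough illustration of the workload such a MAC array executes (this is a generic software sketch, not GTI's hardware design), a convolution layer can be lowered to one large matrix multiply, where every output element is a chain of multiply-accumulate operations:

```python
# Illustrative only: a 2-D convolution expressed as matrix
# multiply-accumulate (MAC) operations via the standard im2col trick.
import numpy as np

def conv2d_as_matmul(image, kernels):
    """Valid 2-D convolution computed as a single matrix multiply."""
    kh, kw = kernels.shape[1], kernels.shape[2]
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # im2col: unfold every receptive field into one row of a matrix
    cols = np.stack([
        image[r:r + kh, c:c + kw].ravel()
        for r in range(oh) for c in range(ow)
    ])                                                  # (oh*ow, kh*kw)
    weights = kernels.reshape(kernels.shape[0], -1).T   # (kh*kw, n_kernels)
    # One matmul = a grid of MACs, the operation an MPE array parallelizes
    return (cols @ weights).T.reshape(-1, oh, ow)

image = np.arange(16.0).reshape(4, 4)
kernels = np.ones((2, 3, 3))        # two 3x3 summing kernels
out = conv2d_as_matmul(image, kernels)
print(out.shape)                    # (2, 2, 2)
```

In hardware, each physical matrix of MAC units evaluates a tile of this multiply in parallel, which is why adjacent-engine communication matters for data flow.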
SCALE LARGE MODELS
Flexible on-chip model processing capabilities allow the Lightspeeur 2803 to be used in different configurations. Batch-process images and video in parallel with a single model-to-chip configuration, or run larger and more complex models such as ResNet-101 or ResNet-152 cascaded across multiple chips.
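The two configurations above can be sketched in plain Python (all names here are hypothetical placeholders, not GTI's SDK): replicating one small model across chips for batch parallelism, versus splitting a large model into stages that flow chip-to-chip:

```python
# Hypothetical sketch of the two chip configurations described above.

# 1) Single model-to-chip: the same model loaded on every chip, so a
#    batch of frames is processed in parallel, one frame per chip.
small_model = lambda frame: frame * 2          # placeholder "model"
chips = [small_model] * 4                      # 4 chips, same model
frames = [1, 2, 3, 4]
batch_out = [chip(f) for chip, f in zip(chips, frames)]

# 2) Cascade: a large model (e.g. ResNet-101) partitioned into stages,
#    each stage pinned to its own chip; each chip's output feeds the next.
stages = [lambda x: x + 1, lambda x: x * 3, lambda x: x - 2]

def cascade(frame):
    for stage in stages:
        frame = stage(frame)                   # chip-to-chip handoff
    return frame

print(batch_out)       # [2, 4, 6, 8]
print(cascade(5))      # ((5 + 1) * 3) - 2 = 16
```

The trade-off is throughput versus model size: replication maximizes frames per second for models that fit on one chip, while cascading trades per-frame latency for the capacity to host deeper networks.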
Suite of Development and Training Tools
GTI offers the tools to build and deploy Artificial Intelligence solutions for edge and cloud deployments.
Train your models and develop powerful AI applications:
► Software Development Kit
► Model Development Kits
► Hardware Integration Kit
► Application Suite