Technical Papers

Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device

Accepted by AAAI 2020 Affective Content Analysis Workshop

In recent years, NLP research has witnessed record-breaking accuracy improvements from DNN models. However, power consumption is a practical concern for deploying NLP systems. Most current state-of-the-art algorithms run on GPUs, which are not power-efficient, and the deployment cost is also very high. On the other hand, CNN Domain-Specific Accelerators (CNN-DSAs) are in mass production, providing low-power and low-cost computation. In this paper, we implement the Super Characters method on the CNN-DSA. In addition, we modify the Super Characters method to utilize multi-modal data, i.e., text plus tabular data, in the CL-Aff shared task.
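
For illustration, here is a minimal sketch of the two-dimensional layout idea behind the multi-modal variant; the canvas size, grid, font, and the example attributes are assumptions made for the sketch, not the paper's exact configuration. The text is drawn word by word on the upper half of a square canvas and the tabular attributes are printed on the lower half, so a single fine-tuned 2D CNN can classify the combined image.

```python
from PIL import Image, ImageDraw, ImageFont

def render_multimodal(text, attributes, size=224, cell=28):
    """Draw the text (upper half) and tabular attributes (lower half) on one canvas."""
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    cols = size // cell

    # Upper half: one word per grid cell.
    max_words = cols * (size // (2 * cell))
    for i, word in enumerate(text.split()[:max_words]):
        draw.text(((i % cols) * cell, (i // cols) * cell), word, fill="white", font=font)

    # Lower half: one tabular attribute per line, printed as "name:value".
    for j, (name, value) in enumerate(attributes.items()):
        draw.text((2, size // 2 + j * cell), f"{name}:{value}", fill="white", font=font)
    return img

# The composed image would then be classified by a fine-tuned 2D CNN,
# as in the Super Characters entries further down this page.
img = render_multimodal("I feel great today", {"age": 34, "country": "US"})
```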

On-device Chatbot System using SuperChat Method on Raspberry Pi and CNN Domain Specific Accelerator

Accepted by KDD2019 workshop BigMine2019

A chatbot is a popular form of interactive entertainment that requires semantic understanding and natural language processing of input inquiries, as well as appropriate individualized responses. Currently, most chatbot services are provided through a cloud connection due to the limited computation power of edge devices, which raises privacy and latency concerns. However, recent research on the SuperChat method shows that chit-chat tasks can be solved using two-dimensional CNN models. In addition, low-power CNN Domain-Specific Accelerators have become widely available over the past two or three years. In this paper, we implement the SuperChat method on a Raspberry Pi 3 connected through USB to a low-power CNN accelerator chip, which is loaded with a quantized-weight two-dimensional CNN model. The resulting system reaches convincing accuracy with high memory efficiency and very low power consumption.
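
As a rough sketch of how such an on-device system keeps everything local, the loop below reads a user utterance, generates a reply, and prints it; the reply generator, including the dispatch of the CNN forward pass to the USB accelerator, is assumed to be provided elsewhere (see the SuperChat entry below for the generation step).

```python
def chat_loop(generate_reply, max_history=5):
    """Minimal on-device chat loop: every inference stays on the Pi and the USB chip."""
    history = []
    while True:
        user = input("you> ").strip()
        if not user:
            break
        history.append(user)
        reply = generate_reply(history[-max_history:])  # SuperChat-style 2D-CNN inference
        history.append(reply)
        print("bot>", reply)
```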

Demonstration of Applications in Computer Vision and NLP on Ultra Power-Efficient CNN Domain-Specific Accelerator with 9.3 TOPS/Watt

Accepted by ICME 2019

Computer Vision and Natural Language Processing (NLP) applications are becoming available on edge devices and mobile platforms with the mass production of low-power, high-performance AI chips. The SPR2801s is a CNN Domain-Specific Accelerator (CNN-DSA) with an inference speed of more than 140fps for an input image size of 224x224x3 and only 300mW power consumption. All convolution computations are completed on the SPR2801s chip, which works as a co-processor. In this demo, we show computer vision and NLP applications running on the SPR2801s, including image classification, text classification, sentiment analysis, and Compact Descriptors for Video Analysis (CDVA). The applications are demonstrated both on a single chip and on a multi-chip board. The single chip is packaged as a dongle connected to a host processor over USB. The eight-chip board shows the power of parallel processing over a PCIe or M.2 interface.

System Demo for Transfer Learning across Vision and Text using Domain Specific CNN Accelerator for On-Device NLP Applications

Accepted by IJCAI 2019 Tusion Workshop

Power-efficient CNN Domain-Specific Accelerator (CNN-DSA) chips are currently available for wide use in mobile devices, mainly for computer vision applications. However, recent work on the Super Characters method for text classification and sentiment analysis using two-dimensional CNN models has also achieved state-of-the-art results through transfer learning from vision to text. In this paper, we implement text classification and sentiment analysis applications on mobile devices using CNN-DSA chips. Compact network representations using one-bit and three-bit precision for coefficients and five-bit precision for activations are used in the CNN-DSA chip, with power consumption of less than 300mW. For edge devices under memory and compute constraints, the network is further compressed by approximating the external Fully Connected (FC) layers within the CNN-DSA chip. At the workshop, we present two system demonstrations for NLP tasks. The first demo classifies an input English Wikipedia sentence into one of 14 ontology classes. The second demo classifies a Chinese online-shopping review as positive or negative.
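
As a rough NumPy illustration of the kind of low-bit representation the abstract refers to (the quantizer below is a generic symmetric uniform scheme chosen for the sketch; the chip's actual quantization pipeline is not described here):

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Uniform symmetric quantization to the given bit width (illustrative sketch)."""
    amax = float(np.max(np.abs(x))) or 1.0
    if bits == 1:                       # 1-bit: keep only the sign with one shared magnitude
        return np.sign(x) * amax
    levels = 2 ** (bits - 1) - 1        # e.g. 3-bit -> 3 magnitude levels per sign
    scale = amax / levels
    return np.clip(np.round(x / scale), -levels, levels) * scale

weights = np.random.randn(64, 64, 3, 3).astype(np.float32)
w1 = quantize_symmetric(weights, bits=1)   # 1-bit coefficients
w3 = quantize_symmetric(weights, bits=3)   # 3-bit coefficients
acts = np.maximum(np.random.randn(1, 64, 56, 56), 0).astype(np.float32)
a5 = quantize_symmetric(acts, bits=5)      # 5-bit activations (post-ReLU)
```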

SuperCaptioning: Image Captioning Using Two-dimensional Word Embedding

Accepted by IJCAI 2019 Tusion Workshop

Language and vision are processed by two different models in current work on image captioning. However, recent work on the Super Characters method shows the effectiveness of two-dimensional word embedding, which converts text classification problems into image classification problems. In this paper, we propose the SuperCaptioning method, which borrows the idea of two-dimensional word embedding from the Super Characters method and processes language and vision information together in one single CNN model. Experimental results on the Flickr30k data show that the proposed method gives high-quality image captions. An interactive demo is ready to show at the workshop.
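
A hedged sketch of the single-model formulation (the canvas layout and sizes below are assumptions made for the sketch): the photo occupies one region of the canvas and the partial caption is drawn as text on the remaining region; classifying this canvas over the vocabulary yields the next caption word, and the process repeats until an end token is produced, as in the SuperChat entry below.

```python
from PIL import Image, ImageDraw, ImageFont

def compose_canvas(photo, partial_caption, size=224):
    """Paste the photo on the upper region and draw the partial caption below (sketch)."""
    canvas = Image.new("RGB", (size, size), "black")
    canvas.paste(photo.resize((size, size * 2 // 3)), (0, 0))
    draw = ImageDraw.Draw(canvas)
    draw.text((2, size * 2 // 3 + 2), " ".join(partial_caption),
              fill="white", font=ImageFont.load_default())
    return canvas
```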

SuperChat: Dialogue Generation by Transfer Learning from Vision to Language using Two-dimensional Word Embedding and Pretrained ImageNet CNN Models

Accepted by CVPR2019 Language and Vision Workshop

Recent work on the Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. This paper borrows the idea of the Super Characters method and two-dimensional embedding, and proposes a method for generating conversational responses for open-domain dialogues. Experimental results on a public dataset show that the proposed SuperChat method generates high-quality responses. An interactive demo is ready to show at the workshop.
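
For illustration, a minimal greedy decoding loop in the spirit of this formulation; the rendering and classification callables, the vocabulary, and the `<eos>` token are placeholders assumed for the sketch.

```python
def generate_response(render_fn, classify_fn, vocab, input_sentence, max_len=30):
    """Greedy next-word decoding: render the input sentence plus the partial response
    as one image, classify it over the vocabulary, and append the predicted word."""
    response = []
    for _ in range(max_len):
        scores = classify_fn(render_fn(input_sentence, response))
        word = vocab[max(range(len(scores)), key=scores.__getitem__)]
        if word == "<eos>":       # assumed end-of-sentence token
            break
        response.append(word)
    return " ".join(response)
```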

SuperTML: Two-Dimensional Word Embedding for the Precognition on Structured Tabular Data

Accepted by CVPR2019 Precognition Workshop

Tabular data is the most commonly used form of data in industry. Gradient Boosting Trees, Support Vector Machines, Random Forests, and Logistic Regression are typically used for classification tasks on tabular data. DNN models using categorical embeddings are also applied to this task, but all attempts thus far have used one-dimensional embeddings. Recent work on the Super Characters method using two-dimensional word embeddings achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embeddings to address the problem of classification on tabular data. For each input of tabular data, the features are first projected into two-dimensional embeddings, like an image, and then this image is fed into fine-tuned two-dimensional CNN models for classification. Experimental results show that the proposed SuperTML method achieves state-of-the-art results on both large and small datasets.
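
A minimal sketch of the projection step (the canvas size, font, and layout are assumptions made for the sketch): each feature value of a row is printed at a fixed position on a blank canvas, and the resulting image is what the fine-tuned 2D CNN classifies.

```python
from PIL import Image, ImageDraw, ImageFont

def supertml_image(row, size=224):
    """Render one tabular row as an image, one feature value per horizontal band (sketch)."""
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    band = size // max(len(row), 1)
    for i, value in enumerate(row.values()):
        # In a refined layout, more important features could get larger fonts or regions.
        draw.text((2, i * band + 2), str(value), fill="white", font=font)
    return img

# Example row (Iris-style features, made up for the sketch); the image is then fed
# to a fine-tuned two-dimensional CNN classifier.
img = supertml_image({"sepal_len": 5.1, "sepal_wid": 3.5, "petal_len": 1.4, "petal_wid": 0.2})
```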

Squared English Word: A Method of Generating Glyph to Use Super Characters for Sentiment Analysis

Submitted to AAAI2019

 


MRAM Co-designed Processing-in-Memory CNN Accelerator for Mobile and IoT Applications

Accepted by NIPS 2018 MLPCD workshop

We designed a device for Convolutional Neural Network applications with non-volatile MRAM memory and a computing-in-memory co-designed architecture. It has been successfully fabricated using a 22nm technology node CMOS process. It provides more than 40MB of MRAM at 9.9TOPS/W, enabling multiple models within one single chip for mobile and IoT device applications.

Super Characters: A Conversion from Sentiment Classification to Image Classification

Accepted by EMNLP2018 workshop

We propose a method named Super Characters for sentiment classification. This method converts the sentiment classification problem into an image classification problem by projecting texts into images and then applying CNN models for classification. Text features are extracted automatically from the generated Super Characters images, so there is no need for any explicit step of embedding the words or characters into numerical vector representations. Experimental results on large social media corpora show that the Super Characters method consistently outperforms other methods for sentiment classification and topic classification tasks on ten large social media datasets containing millions of entries in four languages: Chinese, Japanese, Korean, and English.
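
For illustration, a minimal end-to-end sketch of the conversion (the grid size, font, and choice of backbone are assumptions; a CJK-capable TrueType font would be needed for Chinese, Japanese, or Korean text): characters are drawn one per grid cell, and the resulting image is classified by a standard ImageNet-pretrained CNN whose head is replaced with the sentiment labels.

```python
from PIL import Image, ImageDraw, ImageFont
import torch
from torchvision import models, transforms

def super_characters_image(text, size=224, grid=8):
    """Draw one character per cell of a grid x grid layout (sketch)."""
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()   # swap in a CJK TrueType font for Asian languages
    cell = size // grid
    for i, ch in enumerate(text[: grid * grid]):
        draw.text(((i % grid) * cell, (i // grid) * cell), ch, fill="white", font=font)
    return img

# Replace the ImageNet head with two sentiment classes; fine-tuning is omitted here.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()
x = transforms.ToTensor()(super_characters_image("the movie was surprisingly good"))
with torch.no_grad():
    logits = model(x.unsqueeze(0))    # shape (1, 2): negative / positive scores
```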

Ultra Power Efficient CNN Domain Specific Architecture with 9.3 TOPS/Watt for Mobile and Embedded Applications

Accepted by CVPR 2018 Efficient Deep Learning for Computer Vision workshop

Computer vision performance has been significantly improved in recent years by Convolutional Neural Networks (CNNs). Currently, applications using CNN algorithms are deployed mainly on general-purpose hardware, such as CPUs, GPUs, or FPGAs. However, power consumption, speed, accuracy, memory footprint, and die size should all be taken into consideration for mobile and embedded applications. A Domain-Specific Architecture (DSA) for CNNs is an efficient and practical solution for CNN deployment and implementation. We designed and produced a 28nm two-dimensional CNN-DSA accelerator with an ultra power-efficient performance of 9.3TOPS/Watt and with all processing done in internal memory instead of external DRAM. It classifies 224×224 RGB image inputs at more than 140fps with peak power consumption of less than 300mW and accuracy comparable to the VGG benchmark. The CNN-DSA accelerator is reconfigurable to support CNN model coefficients of various layer sizes and layer types, including convolution, depth-wise convolution, short-cut connections, max pooling, and ReLU. Furthermore, in order to better support real-world deployment for various application scenarios, especially on low-end mobile and embedded platforms and MCUs (Microcontroller Units), we also designed algorithms to utilize the CNN-DSA accelerator efficiently by reducing the dependency on external computation resources, including implementation of Fully-Connected (FC) layers within the accelerator and compression of the features extracted from the CNN-DSA accelerator. Live demos with our CNN-DSA accelerator on mobile and embedded systems show its capability to be widely and practically applied in the real world.
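
One way to see how FC layers can be folded into an accelerator that natively executes convolutions is the standard FC-to-1x1-convolution equivalence; the PyTorch snippet below is a generic illustration of that equivalence, not the chip's actual toolchain.

```python
import torch
import torch.nn as nn

# A fully-connected layer over C-dimensional features is equivalent to a 1x1
# convolution applied to a 1x1 feature map with C channels.
fc = nn.Linear(512, 14)                    # e.g. 512-d CNN features -> 14 classes
conv = nn.Conv2d(512, 14, kernel_size=1)   # the same computation, expressed as a convolution
conv.weight.data = fc.weight.data.view(14, 512, 1, 1).clone()
conv.bias.data = fc.bias.data.clone()

features = torch.randn(1, 512)
out_fc = fc(features)
out_conv = conv(features.view(1, 512, 1, 1)).view(1, 14)
assert torch.allclose(out_fc, out_conv, atol=1e-6)
```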