Applications

Generative AI


Our team at SqueezeBits, in collaboration with the SNU-VLSI lab, has successfully deployed Stable Diffusion v2.1 on an Apple iPhone! You can generate a high-quality 512x512 image in less than a second on an iPhone 14 Pro without connecting to a server.

Less-than-a-second Mobile Stable Diffusion - iPhone

Our team at SqueezeBits, in collaboration with the SNU-VLSI lab, has successfully developed Mobile Stable Diffusion by compressing the Stable Diffusion v2.1 model to run on a Galaxy S23 device in less than 7 seconds.

Mobile Stable Diffusion - Android

NLP & Speech


SqueezeBits compressed a Vicuna-13B LLM plus STT and TTS models to fit on an NVIDIA Jetson Orin, so the whole pipeline operates solely on the device without an internet connection (a rough sketch of such a pipeline is shown below).

LLM Compression
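
The on-device setup can be pictured as an STT → LLM → TTS loop running entirely locally. The sketch below is illustrative only and does not reflect SqueezeBits' actual deployment stack or compressed models; it assumes openai-whisper for STT, llama-cpp-python with a quantized Vicuna-13B checkpoint at a hypothetical local path, and pyttsx3 for offline TTS.

```python
# Illustrative on-device STT -> LLM -> TTS loop (not SqueezeBits' actual stack).
# Assumes openai-whisper, llama-cpp-python, and pyttsx3 are installed locally,
# and a quantized Vicuna-13B GGUF file exists at the hypothetical path below.
import whisper
from llama_cpp import Llama
import pyttsx3

stt = whisper.load_model("base")                     # local speech-to-text
llm = Llama(model_path="vicuna-13b.Q4_K_M.gguf",     # hypothetical local checkpoint
            n_ctx=2048)
tts = pyttsx3.init()                                 # offline text-to-speech

def answer(audio_path: str) -> str:
    text = stt.transcribe(audio_path)["text"]        # 1) transcribe the user's speech
    prompt = f"USER: {text}\nASSISTANT:"             # 2) Vicuna-style prompt
    reply = llm(prompt, max_tokens=256,
                stop=["USER:"])["choices"][0]["text"]
    tts.say(reply)                                   # 3) speak the reply locally
    tts.runAndWait()
    return reply

if __name__ == "__main__":
    print(answer("question.wav"))
```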

In a Korean speech recognition task, the squeezed model achieved roughly 2.2x lower latency than the baseline model on edge devices while maintaining equivalent recognition performance.

Speech Recognition

Vision


For face recognition on video input, the baseline model ran at 12 FPS, while the model squeezed by SqueezeBits reached 25 FPS, roughly a two-fold speedup with equivalent recognition performance.

Face Recognition

For a real-time object detection task, SqueezeBits compressed the YOLOv5 model to run at about 30 FPS on mobile devices using only a single CPU thread (a minimal sketch of a single-thread FPS measurement is shown below).

Object Detection
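
The single-CPU-thread figure can be understood as restricting the runtime to one thread and timing repeated forward passes. The sketch below only illustrates that measurement setup on a desktop CPU with the public ultralytics/yolov5 hub model; it is not SqueezeBits' compressed model or mobile runtime.

```python
# Illustrative single-thread CPU FPS measurement (public YOLOv5s,
# not SqueezeBits' squeezed model or its mobile deployment).
import time
import torch

torch.set_num_threads(1)                             # restrict inference to one CPU thread
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model.eval()

img = torch.rand(1, 3, 640, 640)                     # dummy 640x640 input batch
with torch.no_grad():
    for _ in range(3):                               # warm-up runs
        model(img)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(img)
    elapsed = time.perf_counter() - start

print(f"{runs / elapsed:.1f} FPS on a single CPU thread")
```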

The model compressed by SqueezeBits can upscale a low-resolution image to a high-resolution one more than 2x faster while maintaining equivalent image quality. ESRGAN-series models are supported.

Super Resolution