When we think of AI, machine learning, deep learning, and the like, we often ask ourselves: how do these systems process all those algorithms so quickly?
With the advances in CPU technology, there has been some debate about the value of building with dedicated AI chips at the edge and in IoT. Algorithms can be optimized and speed increased on CPUs and GPUs, but we have recently seen a trend of keeping system and basic application processing on the CPU while assigning AI processing to a Tensor Processing Unit (TPU) or other dedicated chip.
The TPU Is About Processing Math
You’re probably familiar with some kinds of processors and have come in contact with them in your various projects or daily work. The CPU and GPU are the ones we’re generally most familiar with. You may have heard that growing interest and development in computer vision have led companies like Intel and IBM to create Vision Processing Units (VPUs). Google, however, created the TPU.
So what is a TPU? It is a processor too, but less specialized than a VPU: it is geared toward running machine learning and neural network operations in general, not only computer vision tasks. The TPU was first announced in May 2016 at Google I/O and was designed specifically for Google’s TensorFlow framework (which is open source). Compared to a GPU, it is built for a high volume of low-precision computation (e.g., as little as 8-bit precision) with more input/output operations per joule.
The Google TPU is Proprietary
Google has used TPUs for Google Street View text processing and was able to find all the text in the Street View database in less than five days. In Google Photos, an individual TPU can process over 100 million photos a day. TPUs are also used in RankBrain, which Google uses to provide search results.
Different tasks require different hardware; sometimes that is a licensing issue, and sometimes it is a matter of using the right tool for the job. We won’t get into that facet of the AI chip market in this article, but it should be kept in mind.
Since 2016, Google’s TPU has progressed from gen 1 to gen 2, gen 3, and on to the most recent release: the Edge TPU.
A Focus on Edge TPU
The Edge TPU is physically much smaller than the gen 3 TPU and consumes far less power than the TPUs hosted in Google’s data centers. In January 2019, Google made the Edge TPU available to developers through a line of products under the Coral brand. The Edge TPU is capable of 4 trillion operations per second (4 TOPS) while using 2 W, i.e., 2 TOPS per watt.
The product offerings include a single-board computer (SBC), a system-on-module (SoM), a USB accessory, a mini PCIe card, and an M.2 card. The Coral Dev Board (the SBC) and the Coral SoM both run Mendel Linux.
The machine learning runtime used to execute models on the Edge TPU is based on TensorFlow Lite (also open source). The Edge TPU can only accelerate forward-pass operations, which means it is primarily useful for running inference. It also supports only 8-bit math, so for a network to be compatible with the Edge TPU, it needs to be trained using TensorFlow’s quantization-aware training techniques.
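The 8-bit constraint means every tensor value is stored as an integer together with a scale and a zero point. Below is a minimal sketch of the affine quantization scheme that TensorFlow Lite’s 8-bit models are based on; the helper names are illustrative, not part of any TensorFlow API:

```python
def quantize_params(min_val, max_val, qmin=-128, qmax=127):
    """Compute a scale and zero point mapping [min_val, max_val] onto int8."""
    # The representable range must include zero so that e.g. zero-padding
    # can be expressed exactly.
    min_val = min(min_val, 0.0)
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = int(round(qmin - min_val / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to its nearest int8 representation, clamped to range."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from its int8 code."""
    return scale * (q - zero_point)

scale, zp = quantize_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)  # within one quantization step of 0.5
```

Quantization-aware training simulates exactly this rounding during the forward pass, so the network learns weights that remain accurate after the precision loss.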
Community and platform support are important considerations when evaluating AI chips, and Google knows this very well. Since its launch, the TensorFlow framework has built a large following and base of developers, open knowledge forums, and training resources, most of which can be accessed for free.
The Edge AI Chip Trend
How this fits into the trend of AI chips in edge computing really depends on what you are working on. There are pros and cons to using chips from Apple, Google, Intel, Nvidia, and others. That is not to say one is more friendly to start-ups than another; many factors come into play when deciding which chip to use, such as end users, software, hardware, cost, and more.
For example, the Google Edge TPU and TensorFlow may be a good choice if open source is what you’re looking for, while the Intel Movidius and OpenVINO may be a better fit if you want to focus on computer vision.
One thing is for sure: competition among AI chips is fierce, which makes it quite easy for start-ups, developers, and makers to test several of them at reasonable cost and see what works for them.
Edge TPU Machines to Work With
If you are looking to try the Edge TPU and TensorFlow, the Asus Tinker Edge T and Tinker Edge R boards were designed for IoT and edge AI projects. These SBCs support the Android and Debian operating systems. Asus has also demoed a mini PC, the PN60T, which should be available soon.
To stick with Google hardware throughout, there are the Google Coral Accelerator Module and the Coral Dev Board Mini.
Other AI accelerator designs aimed at the embedded and robotics markets are also appearing from other vendors.
It’s interesting to see how processing is moving closer and closer to the edge and into IoT devices. No matter the complexity of your project, talking to people who know how to get it where it should be is key. TechDesign has been working with makers and AI start-ups in many different fields, from concept to creation to launch, and is ready to help you succeed.