Building Your First ConvNet
This post will help you get up to speed on training ConvNet models in the cloud without the hassle of setting up a VM, an AWS instance, or anything of that sort. You'll be able to design your own classification task with lots of images and train your own ConvNet models.
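Before training in the cloud, it helps to see what a ConvNet layer actually computes. The sketch below is a minimal NumPy illustration (not the post's actual training code) of the three building blocks a ConvNet classifier stacks: convolution, ReLU, and max pooling. The image, kernel, and helper names are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D "convolution" (really cross-correlation,
    # as implemented in most deep learning frameworks).
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    # Element-wise rectifier: keep positives, zero out negatives.
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # Non-overlapping max pooling; trims edges that don't divide evenly.
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 4x4 "image" and a hand-written vertical-edge kernel (assumed values).
image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])

features = max_pool(relu(conv2d(image, edge_kernel)))  # shape (1, 1)
```

A real ConvNet learns the kernel values from labeled images instead of hand-writing them, and stacks many such layers before a final classifier.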
Jrobot Self Drive Powered by TensorFlow Lite
Jrobot Self Drive is another self-driving experiment based on machine learning. It is not a simulator and not a road vehicle; it is a footpath traveler. We built the Nvidia CNN self-driving model using Keras, collected training data, trained the model, and converted the trained model to TensorFlow Lite. TensorFlow Lite lets us run inference on-board a mobile device and is the key part of this project. We added TensorFlow Lite to the Jrobot Android app. At runtime, TensorFlow Lite loads the trained model, takes a camera image as input, and gives a steering angle as output. The Jrobot app runs on an Android phone (Xiaomi Mi5) sitting in the phone box on the Jrobot car, and controls the car's movement over a Bluetooth connection to the Arduino on board.
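The Keras-to-TensorFlow-Lite pipeline described above can be sketched as follows. This is a hedged reconstruction, not the project's actual code: the layer sizes follow the Nvidia PilotNet paper and the Jrobot repo may differ, and the random frame stands in for a real camera image.

```python
import numpy as np
import tensorflow as tf

# PilotNet-style regression model: camera frame in, steering angle out.
# (Assumption: 66x200x3 input and these layer sizes, per the Nvidia paper.)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(66, 200, 3)),
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),  # steering angle
])
model.compile(optimizer="adam", loss="mse")
# (Training on collected frames/angles would happen here via model.fit.)

# Convert the Keras model to a TensorFlow Lite flatbuffer for on-device use.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the TFLite interpreter, as the Android app does
# (on Android this step uses the Java/Kotlin Interpreter API instead).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
frame = np.random.rand(1, 66, 200, 3).astype(np.float32)  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
steering = interpreter.get_tensor(out["index"])  # shape (1, 1)
```

On the phone, the resulting `.tflite` file is bundled with the app and fed camera frames in a loop; the predicted angle is then sent to the Arduino over Bluetooth.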
Mobile Real-time Video Segmentation
Video segmentation is a widely used technique that enables movie directors and video content creators to separate the foreground of a scene from the background and treat them as two different visual layers. By modifying or replacing the background, creators can convey a particular mood, transport themselves to a fun location, or enhance the impact of the message. However, this operation has traditionally been performed as a time-consuming manual process (e.g. an artist rotoscoping every frame) or has required a studio environment with a green screen for real-time background removal (a technique referred to as chroma keying). In order to enable users to create this effect live in the viewfinder, we designed a new technique that is suitable for mobile phones.