Arduino TinyML: Gesture recognition with TensorFlow Lite Micro using the MPU6050

This Arduino tutorial covers how to use Arduino TinyML to recognize gestures with TensorFlow Lite Micro. We will use an external MPU6050 sensor, a six-axis IMU (gyroscope and accelerometer). This project is interesting because we will develop the TensorFlow Lite Micro model from scratch. The idea of using Arduino TinyML to recognize gestures comes from the official Arduino TensorFlow example (you can find more info and the original code here). The goal of this post is to show how we can use sensors other than the ones built into the Arduino Nano 33 BLE.

Steps to building an Arduino TinyML application

These are the steps:

  1. Create an Arduino application to build the dataset using the MPU6050 accelerometer
  2. Create a TensorFlow Lite Micro machine learning model by modifying the example in the TensorFlow repository
  3. Train the machine learning model, quantize it, and export the final model
  4. Develop an Arduino TinyML application that uses the model to recognize gestures

Step 1: Create an Arduino Application to build the dataset using an external accelerometer

As with other machine learning projects, the first step is creating the dataset on which we will train the TensorFlow Lite Micro model. To do it, we will use an Arduino Nano 33 BLE with an MPU6050. Before starting, it is necessary to install the Adafruit MPU6050 library. This is the code:

#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

#define NUM_SAMPLES 119
#define THRESHOLD 3

Adafruit_MPU6050 mpu;

int samples = 0;
float ax, ay, az;
float baseAx, baseAy, baseAz;

void setup(void) {
  Serial.begin(9600);
  while (!Serial)
    delay(10);

  Serial.println("Adafruit MPU6050 test!");

  // Try to initialize!
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  Serial.println("MPU6050 Found!");

  // Set the accelerometer range
  mpu.setAccelerometerRange(MPU6050_RANGE_4_G);
  calibrate_sensor();
  Serial.println("");
  Serial.println("aX,aY,aZ");
  delay(100);
}

boolean detectMovement() {
  float totAcc = fabs(ax) + fabs(ay) + fabs(az);
  return totAcc > THRESHOLD;
}

void calibrate_sensor() {
  // Average 10 readings taken at rest to estimate the baseline
  float totX = 0, totY = 0, totZ = 0;
  sensors_event_t a, g, temp;
  for (int i = 0; i < 10; i++) {
    mpu.getEvent(&a, &g, &temp);
    totX += a.acceleration.x;
    totY += a.acceleration.y;
    totZ += a.acceleration.z;
  }
  baseAx = totX / 10;
  baseAy = totY / 10;
  baseAz = totZ / 10;
}

void loop() {
  // Get a new sensor event with the readings
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);

  samples = 0;
  ax = a.acceleration.x - baseAx;
  ay = a.acceleration.y - baseAy;
  az = a.acceleration.z - baseAz;

  if (!detectMovement()) {
    delay(10);
    return;
  }

  while (samples < NUM_SAMPLES) {
    // Re-read the sensor for every sample of the gesture window
    mpu.getEvent(&a, &g, &temp);
    ax = a.acceleration.x - baseAx;
    ay = a.acceleration.y - baseAy;
    az = a.acceleration.z - baseAz;

    Serial.print(ax, 3);
    Serial.print(",");
    Serial.print(ay, 3);
    Serial.print(",");
    Serial.println(az, 3);
    samples++;
    if (samples == NUM_SAMPLES)
      Serial.println();
  }
}

The code is straightforward: it acquires data from the accelerometer and prints the values on the serial port. Before acquiring values, there is a short calibration phase: the sketch averages a few readings taken while the sensor is at rest and subtracts this baseline from every subsequent reading, which also helps normalize the values.
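The same calibration logic can be sketched in Python (a minimal illustration, not part of the project; `calibrate` and `remove_baseline` are hypothetical helpers, and the canned readings stand in for the live sensor):

```python
CALIBRATION_READS = 10

def calibrate(readings):
    """Average the first readings (sensor at rest) into a baseline."""
    xs, ys, zs = zip(*readings[:CALIBRATION_READS])
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

def remove_baseline(sample, baseline):
    """Subtract the resting baseline from a raw (x, y, z) reading."""
    return tuple(v - b for v, b in zip(sample, baseline))

# Canned readings: at rest the sensor reports roughly (0, 0, 9.8),
# because gravity shows up on the Z axis in m/s^2.
rest = [(0.1, -0.1, 9.8)] * 10
baseline = calibrate(rest)
moving = remove_baseline((2.1, -0.1, 9.8), baseline)
```

After baseline removal, a stationary sensor reads close to (0, 0, 0), which is what makes the simple `THRESHOLD` movement check in the sketch work.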


Capturing gesture data

In this simple tutorial, we want to recognize only two kinds of gestures:

  • punch
  • up-down

Therefore, you need to acquire data while repeating the same movement several times, using a separate acquisition session for each gesture. Follow these steps:

  • Run the Arduino application on your Arduino Nano 33 BLE
  • Start capturing the serial port data. If you are using macOS or Linux, you can capture the serial port data with this command:
cat /dev/cu.usbmodemxxx > up.csv
  • Repeat the movement several times. The more data you acquire, the more accurate the model will be. At the end, you should have a CSV file.

Now you should have two files named punch.csv and up.csv:

0.823,-0.893,-2.081
0.823,-0.893,-2.081
0.823,-0.893,-2.081
-2.620,-0.369,-0.810
-2.620,-0.369,-0.810
-2.620,-0.369,-0.810
-2.620,-0.369,-0.810
-2.620,-0.369,-0.810
-2.620,-0.369,-0.810
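Before training, each capture file has to be split into fixed-length gesture windows. A minimal parsing sketch (an assumption about the preprocessing, not code from the Colab): the window length matches NUM_SAMPLES in the Arduino sketch, and the blank line the sketch prints after each gesture acts as a separator.

```python
NUM_SAMPLES = 119  # must match the Arduino sketch

def parse_capture(text):
    """Split a captured serial dump into gesture windows.

    Each window is a list of (ax, ay, az) floats; windows are separated
    by the blank line printed after every NUM_SAMPLES rows. Incomplete
    windows (e.g. a capture cut off mid-gesture) are discarded.
    """
    windows, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                windows.append(current)
                current = []
            continue
        x, y, z = (float(v) for v in line.split(","))
        current.append((x, y, z))
    if current:
        windows.append(current)
    return [w for w in windows if len(w) == NUM_SAMPLES]

# Tiny fake capture: two complete windows of 119 identical rows
row = "0.823,-0.893,-2.081"
fake = "\n".join([row] * 119) + "\n\n" + "\n".join([row] * 119) + "\n"
windows = parse_capture(fake)
```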

Building and training the TensorFlow Lite Micro model

Now, open this Colab notebook in your browser and let's modify it according to our needs:

  1. Go to Graph Data folder and modify the code as shown below:
[Screenshot: Graph Data section modified for the punch.csv and up.csv files]

  2. Go to the Parse and prepare the data section and change the gesture labels to punch and up. Next, modify the code as shown below:

[Screenshot: data-parsing code modified to read only the accelerometer columns]

We don’t use the gyroscope data.
  3. Train your model and export it.
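During the export step, the TensorFlow Lite converter quantizes the trained model. What quantization does under the hood can be illustrated with the scale/zero-point arithmetic it is based on (a simplified sketch of the concept, not the converter's actual code):

```python
# Simplified illustration of post-training quantization: map float values
# in [min_val, max_val] onto int8 via a scale and a zero point.

def quantize_params(min_val, max_val, qmin=-128, qmax=127):
    """Derive the scale and zero point for a given float range."""
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Convert a float to its clamped int8 representation."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from the int8 value."""
    return (q - zero_point) * scale

scale, zp = quantize_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
restored = dequantize(q, scale, zp)  # close to 0.5, within one scale step
```

The point of this trade-off is that each weight shrinks from 4 bytes to 1, at the cost of a small, bounded rounding error, which is what makes the model fit in the microcontroller's memory.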

Recognizing gestures using Arduino and TensorFlow Lite

Finally, the last step: we have to build the code that runs TensorFlow Lite Micro on the Arduino Nano 33 BLE. The code, shown below, derives from this example:

#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "model.h"

#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

#define NUM_SAMPLES 119
#define THRESHOLD 3
#define NUM_GESTURES 2

Adafruit_MPU6050 mpu;

int samples = 0;
float ax, ay, az;
float baseAx, baseAy, baseAz;

tflite::ErrorReporter* tflErrorReporter;
tflite::AllOpsResolver tflOpsResolver;
const tflite::Model* tflModel = nullptr;
tflite::MicroInterpreter* tflInterpreter = nullptr;
TfLiteTensor* tflInputTensor = nullptr;
TfLiteTensor* tflOutputTensor = nullptr;

constexpr int tensorArenaSize = 100 * 1024;
byte tensorArena[tensorArenaSize];

const char* GESTURES[] = {
  "punch",
  "up"
};

void init_tensorflow() {
  Serial.println("Initializing TensorFlow Lite...");

  // Get the TFL representation of the model byte array
  tflModel = tflite::GetModel(model);
  if (tflModel->version() != TFLITE_SCHEMA_VERSION) {
    Serial.println("Model schema mismatch!");
    while (1);
  }

  static tflite::MicroErrorReporter micro_error_reporter;
  tflErrorReporter = &micro_error_reporter;

  static tflite::MicroInterpreter static_interpreter(
      tflModel, tflOpsResolver, tensorArena, tensorArenaSize, tflErrorReporter);
  tflInterpreter = &static_interpreter;

  // Allocate memory for the model's input and output tensors
  TfLiteStatus allocate_status = tflInterpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(tflErrorReporter, "AllocateTensors() failed");
    return;
  }

  // Get pointers to the model's input and output tensors
  tflInputTensor = tflInterpreter->input(0);
  tflOutputTensor = tflInterpreter->output(0);
  Serial.println("TensorFlow initialized");
}

void setup(void) {
  Serial.begin(9600);
  while (!Serial)
    delay(10); // will pause Zero, Leonardo, etc. until the serial console opens

  Serial.println("Adafruit MPU6050 test!");

  // Try to initialize!
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  Serial.println("MPU6050 Found!");

  // Set the accelerometer range
  mpu.setAccelerometerRange(MPU6050_RANGE_4_G);
  calibrate_sensor();
  Serial.println("");
  init_tensorflow();
  delay(100);
}

boolean detectMovement() {
  float totAcc = fabs(ax) + fabs(ay) + fabs(az);
  return totAcc > THRESHOLD;
}

void calibrate_sensor() {
  // Average 10 readings taken at rest to estimate the baseline
  float totX = 0, totY = 0, totZ = 0;
  sensors_event_t a, g, temp;
  for (int i = 0; i < 10; i++) {
    mpu.getEvent(&a, &g, &temp);
    totX += a.acceleration.x;
    totY += a.acceleration.y;
    totZ += a.acceleration.z;
  }
  baseAx = totX / 10;
  baseAy = totY / 10;
  baseAz = totZ / 10;
}

void loop() {
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);

  samples = 0;
  ax = a.acceleration.x - baseAx;
  ay = a.acceleration.y - baseAy;
  az = a.acceleration.z - baseAz;

  if (!detectMovement()) {
    delay(10);
    return;
  }

  while (samples < NUM_SAMPLES) {
    // Read samples and fill the input tensor, scaling each value to [0, 1]
    mpu.getEvent(&a, &g, &temp);
    ax = a.acceleration.x - baseAx;
    ay = a.acceleration.y - baseAy;
    az = a.acceleration.z - baseAz;
    tflInputTensor->data.f[samples * 3 + 0] = (ax + 4.0) / 8.0;
    tflInputTensor->data.f[samples * 3 + 1] = (ay + 4.0) / 8.0;
    tflInputTensor->data.f[samples * 3 + 2] = (az + 4.0) / 8.0;
    samples++;
  }

  if (samples == NUM_SAMPLES) {
    Serial.println("Invoking inference...");
    TfLiteStatus invokeStatus = tflInterpreter->Invoke();
    if (invokeStatus != kTfLiteOk) {
      Serial.println("Invoke failed!");
      return;
    }
    // Print the score for each gesture
    for (int i = 0; i < NUM_GESTURES; i++) {
      Serial.print(GESTURES[i]);
      Serial.print(": ");
      Serial.println(tflOutputTensor->data.f[i], 6);
    }
    Serial.println("-----------------------");
    return;
  }
}
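The (v + 4.0) / 8.0 scaling in the inference loop maps each baseline-corrected reading, assumed to lie in [-4, 4], into [0, 1] before it is written to the input tensor. The same feature scaling can be sketched in Python (clamping is an extra safety step added here, not present in the sketch above):

```python
def normalize(value, lo=-4.0, hi=4.0):
    """Map a baseline-corrected reading from [lo, hi] into [0, 1].

    Mirrors the (v + 4.0) / 8.0 scaling used to fill the input tensor;
    out-of-range values are clamped so features always stay in [0, 1].
    """
    scaled = (value - lo) / (hi - lo)
    return min(1.0, max(0.0, scaled))

# A gesture window becomes a flat vector of NUM_SAMPLES * 3 features
window = [(2.0, -1.0, 0.5)]
features = [normalize(v) for sample in window for v in sample]
```

Whatever scaling is used here must match the scaling applied to the training data in the Colab, otherwise the model will receive inputs from a different distribution than it was trained on.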

Run the code and start recognizing gestures with your Arduino.

Wrapping up

We have seen how to build a TensorFlow Lite Micro model for Arduino and how to use machine learning to recognize gestures. Using the code above, you can recognize other gestures by building your own dataset.