This article describes how to build a face feature detection app using the Firebase Face detection API (Firebase ML Kit) and Android / Android Things. The idea for this article comes from the Google project called “Android Things expression flower”. The project detects face characteristics (face classification) using machine vision based on Firebase ML Kit and displays them on an LCD display using emoticons.
To build this project you will need:
- Raspberry Pi
- Raspberry Camera
- LCD Display (SSD1306) – optional
You can apply the same steps to a smartphone using Android.
The final result is shown here:


Using Machine Learning it is possible to detect not only faces but also objects, through image classification.
A brief introduction to Firebase face detection (Firebase ML Kit)
Firebase ML Kit is a mobile SDK that helps us experiment with machine learning technologies. TensorFlow and Cloud Vision make it easier to develop mobile apps that use machine learning; nevertheless, the machine learning models behind them require time and effort to build. Firebase ML Kit is Google's effort to make machine learning easier to use and more accessible to people who do not know much about machine learning technologies, providing pre-trained models that can be used when developing Android and Android Things apps.
This article will show how easy it is to add machine learning capabilities to an Android / Android Things app without knowing much about Machine Learning and without building and optimizing a Machine Learning model.
What is Face detection API in Firebase ML Kit
Using the Firebase ML Kit Face detection API it is possible to detect faces in a picture or through a camera. In this Android Things project, we will use a camera connected to the Raspberry Pi. Once a face is detected, we can extract face features such as rotation, size and so on. Moreover, using the Face Detection API we can go deeper in this face analysis, retrieving:
- Landmarks: the points of interest of the face, such as the left eye, right eye, nose base and so on.
- Contours: points that follow the face shape.
- Classification: the capability to detect specific face characteristics. For example, it is possible to detect whether an eye is closed or open, or whether the face is smiling.
Moreover, using the Face detection API it is possible to track faces in video sequences. As you can see, these are very interesting features that open new scenarios when developing apps; the short sketch below gives an idea of what they look like in code.
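To make these features more concrete, here is a small sketch that is not used by this project and assumes a detector configured with landmark and contour detection enabled. It reads one landmark and the face contour from a detected FirebaseVisionFace:

// Sketch only: requires landmark and contour detection to be enabled
// in FirebaseVisionFaceDetectorOptions (this project uses classification only)
FirebaseVisionFaceLandmark leftEye = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE);
if (leftEye != null) {
    FirebaseVisionPoint position = leftEye.getPosition();
    Log.d(TAG, "Left eye position [" + position.getX() + "," + position.getY() + "]");
}
List<FirebaseVisionPoint> facePoints = face.getContour(FirebaseVisionFaceContour.FACE).getPoints();
Log.d(TAG, "Face contour points [" + facePoints.size() + "]");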
In this project, as stated before, we will use face classification to represent the face characteristics on an LCD display. To do it, the app will use these images to represent the face characteristics:




How to use Firebase Face detection API
Now that we know what the Firebase Face detection API is, it is time to start using it to build the Android Things app.
Before implementing our app, it is necessary to configure a new project using the Firebase Console. This is a very simple step. At the end, you will download a JSON configuration file that must be added to your Android Things project.
Setup Firebase ML Kit
Once the project is configured, it is necessary to enable the Face detection API and add the right dependencies to our project:
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:18.0.2'
    implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
}
Next, let us add these lines to our AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="face" />
How to use face classification to detect face characteristics
It is time to start using the Firebase ML Kit, and in more detail the Face Detection API, in this Android Things app. There are two steps to follow in order to detect face characteristics such as a smile or a closed left or right eye. These steps are shown below:
- Use the camera to capture the picture
- Pass the captured image to Firebase ML Kit to detect the face
For now, let us suppose that the image has already been captured somehow and focus our attention on how to use Firebase ML Kit (the Face Detection API) to detect face characteristics.
Configuring Firebase Face Detection API
Before applying the face detection process to an image, it is necessary to initialize the Firebase ML Kit and configure the Face Detection API. In the MainActivity, and more precisely in the onCreate method, add this line:
FirebaseApp.initializeApp(this);
To configure the face detector it is necessary to use FirebaseVisionFaceDetectorOptions (more info here), starting from its builder:
FirebaseVisionFaceDetectorOptions.Builder builder = new FirebaseVisionFaceDetectorOptions.Builder();
Next, it is necessary to add the configuration options:
FirebaseVisionFaceDetectorOptions options = new FirebaseVisionFaceDetectorOptions.Builder()
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .enableTracking()
        .build();
As stated before, the Android Things app is interested in face classification, so we enable this configuration. Moreover, face detection will use FAST_MODE, which is enabled by default.
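As a side note, the same Builder can also enable the landmark and contour detection described earlier. This project does not use them, but a possible configuration (a sketch only; these modes increase the processing time) would look like this:

FirebaseVisionFaceDetectorOptions fullOptions = new FirebaseVisionFaceDetectorOptions.Builder()
        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build();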
Finally:
FirebaseVisionFaceDetector detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
Once the detector is ready and correctly configured, we can start detecting face characteristics (or face classification) using a captured image:
firebaseImage = FirebaseVisionImage.fromBitmap(displayBitmap);
result = detector
        .detectInImage(firebaseImage)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
            @Override
            public void onSuccess(List<FirebaseVisionFace> faces) {
                for (FirebaseVisionFace face : faces) {
                    Log.d(TAG, "****************************");
                    Log.d(TAG, "face [" + face + "]");
                    Log.d(TAG, "Smiling Prob [" + face.getSmilingProbability() + "]");
                    Log.d(TAG, "Left eye open [" + face.getLeftEyeOpenProbability() + "]");
                    Log.d(TAG, "Right eye open [" + face.getRightEyeOpenProbability() + "]");
                    checkFaceExpression(face);
                }
            }
        });
There are some aspects to notice:
- Using the displayBitmap, the app creates the firebaseImage, that is, the image where we want to detect face characteristics.
- The app invokes the detectInImage method to start detecting the face (the app uses the face classification).
- The app adds a listener to get notified when the face characteristics are available.
- For each detected face, the app gets the classification probabilities (smiling, left eye open, right eye open).
- Finally, using the probabilities retrieved before, the Android Things app controls the LCD display showing the emoticon.
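The snippet above only handles the success case. As an additional sketch (not part of the original code), a failure listener can be attached to the same task so that the app is notified when the detection fails:

result.addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        // Detection failed (for example, the face model is not available yet)
        Log.e(TAG, "Face detection failed", e);
        listener.onError();
    }
});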
The checkFaceExpression method classifies the face, determining the face characteristics. In the end, it notifies the result to the caller (as we will see later):
private void checkFaceExpression(FirebaseVisionFace face) {
    if (face.getSmilingProbability() > 0.5) {
        Log.d(TAG, "**** Smiling ***");
        listener.onSuccess(FACE_STATUS.SMILING);
    } else if (face.getLeftEyeOpenProbability() < 0.2 && face.getLeftEyeOpenProbability() != -1
            && face.getRightEyeOpenProbability() > 0.5) {
        Log.d(TAG, "Right Open..");
        listener.onSuccess(FACE_STATUS.RIGHT_EYE_OPEN_LEFT_CLOSE);
    } else if (face.getRightEyeOpenProbability() < 0.2 && face.getRightEyeOpenProbability() != -1
            && face.getLeftEyeOpenProbability() > 0.5) {
        Log.d(TAG, "Left Open..");
        listener.onSuccess(FACE_STATUS.LEFT_EYE_OPEN_RIGHT_CLOSE);
    } else {
        // Neutral face: both eyes open, not smiling
        listener.onSuccess(FACE_STATUS.LEFT_OPEN_RIGHT_OPEN);
    }
}
How to capture the image using Camera in Android Things
Until now, we have assumed that the image was already captured. This paragraph shows how to capture it using a camera connected to the Raspberry Pi. The process is quite simple and it is the same one we use when implementing an Android app. It is possible to break it into these steps:
- Open the camera
- Create a capture session
- Handle the image
Open the camera
In this step, the Android Things app initializes the camera. Before using the camera, it is necessary to add the right permission to the AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA"/>
Moreover, let us create a new class, called FaceDetector.java, that will handle all the details related to the face detection. Its constructor is:
public FaceDetector(Context ctx, ImageView img, Looper looper) {
    this.ctx = ctx;
    this.img = img;
    this.looper = looper;
}
We will see the role of the ImageView later. Next, check that the camera is present and open it:
private void openCamera(CameraManager camManager) {
    try {
        String[] camIds = camManager.getCameraIdList();
        if (camIds.length < 1) {
            Log.e(TAG, "Camera not available");
            listener.onError();
            return;
        }
        camManager.openCamera(camIds[0], new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice camera) {
                Log.i(TAG, "Camera opened");
                startCamera(camera);
            }

            @Override
            public void onDisconnected(@NonNull CameraDevice camera) {}

            @Override
            public void onError(@NonNull CameraDevice camera, int error) {
                Log.e(TAG, "Error [" + error + "]");
                listener.onError();
            }
        }, backgroundHandler);
    } catch (CameraAccessException cae) {
        cae.printStackTrace();
        listener.onError();
    }
}
where
CameraManager cameraManager = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);
The code is quite simple: it is necessary to implement a listener to get notified when the camera is opened or when an error occurs. That’s all.
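Notice that the code above uses a backgroundHandler that is not shown in the snippets. A typical way to create it (a sketch, with assumed names) is a dedicated HandlerThread inside FaceDetector:

// Background thread and handler used for the camera callbacks
// (assumed setup, not shown in the original snippets)
private HandlerThread backgroundThread;
private Handler backgroundHandler;

private void startBackgroundThread() {
    backgroundThread = new HandlerThread("CameraBackground");
    backgroundThread.start();
    backgroundHandler = new Handler(backgroundThread.getLooper());
}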
Create a capture session
The next step is creating a capture session so that the Android Things app can capture the image. Let us add a new method:
private void startCamera(CameraDevice cameraDevice) {
    try {
        final CaptureRequest.Builder requestBuilder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        requestBuilder.addTarget(mImageReader.getSurface());
        cameraDevice.createCaptureSession(Collections.singletonList(mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        Log.i(TAG, "Camera configured..");
                        CaptureRequest request = requestBuilder.build();
                        try {
                            session.setRepeatingRequest(request, null, backgroundHandler);
                        } catch (CameraAccessException cae) {
                            Log.e(TAG, "Camera session error");
                            cae.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) { }
                }, backgroundHandler);
    } catch (CameraAccessException cae) {
        Log.e(TAG, "Camera Access Error");
        cae.printStackTrace();
        listener.onError();
    }
}
In this method, the Android Things app starts a capture session and gets notified when the image is captured.
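The session targets an mImageReader that is not shown in the snippets either. A minimal sketch of how it could be created (the resolution and the JPEG format are assumptions of this sketch) is:

// Assumed ImageReader setup: a small JPEG frame keeps the Firebase processing
// fast on the Raspberry Pi; FaceDetector implements ImageReader.OnImageAvailableListener
mImageReader = ImageReader.newInstance(640, 480, ImageFormat.JPEG, 2);
mImageReader.setOnImageAvailableListener(this, backgroundHandler);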
Handle the image
The last step is handling the captured image. This image will be sent to Firebase ML Kit to extract the facial characteristics. For this purpose it is necessary to implement a callback method:
@Override
public void onImageAvailable(ImageReader reader) {
    //Log.i(TAG, "Image Ready..");
    Image image = reader.acquireLatestImage();
    // We have to convert the image before
    // using it in Firebase ML Kit
    ...
}
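The conversion itself is not shown in the original code. Assuming the ImageReader was configured with the JPEG format as in the sketch above, one possible conversion (a sketch, not the project's actual code) is:

// Sketch only: decode the JPEG bytes of the single plane into a Bitmap
// and wrap it in a FirebaseVisionImage
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
image.close();

Bitmap displayBitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromBitmap(displayBitmap);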
That’s all: the camera has captured the image, and we can now start detecting face characteristics.
More useful resources
Master IoT Tensorflow: How to make a smart Android Things project using TensorFlow Machine Learning
4 external displays to use with Android Things: TM1637, Max7219, SSD1306, LCD 1602, LCD 2004
How to Deploy OpenCV on Raspberry Pi enabling machine vision
Displaying face detected characteristics using Android Things and LCD
In this step, we will show how to display the face characteristics retrieved by Firebase ML Kit. In this project, the Raspberry Pi is connected to an LCD display (SSD1306) that shows the face characteristics. In this way, the Android Things app can control a device using the detected face.
Before starting, it is useful to show how to connect the Raspberry Pi to the SSD1306:

As you can notice, the connection is very simple. To handle the LCD display it is necessary to add the right driver to our Android Things project. In the build.gradle add this line:
implementation 'com.google.android.things.contrib:driver-ssd1306:1.1'
To handle all the details related to the LCD, let us create a new class called DisplayManager. The purpose of this class is to show the right image according to the face characteristics detected. We can represent the different characteristics using four different images, as described previously. These images must be placed in the drawable (nodpi) folder.
In order to show the right image according to the face characteristics detected, we add this method to the class:
public void setImage(Resources res, int resId) {
    display.clearPixels();
    Bitmap bmp = BitmapFactory.decodeResource(res, resId);
    BitmapHelper.setBmpData(display, 0, 0, bmp, true);
    try {
        display.show();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
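For completeness, the display field used above has to be opened somewhere, typically in the DisplayManager constructor. A possible sketch (the I2C bus name "I2C1" is the Raspberry Pi 3 default and is an assumption here) is:

// Assumed constructor: opens the SSD1306 display over I2C
public DisplayManager() {
    try {
        display = new Ssd1306("I2C1");
    } catch (IOException e) {
        e.printStackTrace();
    }
}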
Final step
In this last step we will glue everything together so that the app works correctly. To do it, it is necessary to add a listener so that the MainActivity is notified when the face characteristics are detected. Let us define the listener in the FaceDetector:
public interface CameraListener {
    void onError();
    void onSuccess(FACE_STATUS status);
}
where
// Face status
enum FACE_STATUS {
    SMILING,
    LEFT_EYE_OPEN_RIGHT_CLOSE,
    RIGHT_EYE_OPEN_LEFT_CLOSE,
    LEFT_OPEN_RIGHT_OPEN
}
Now, in the MainActivity, we will implement the listener:
FaceDetector fc = new FaceDetector(this, img, getMainLooper());
fc.setListener(new FaceDetector.CameraListener() {
    @Override
    public void onError() {
        // Handle error
    }

    @Override
    public void onSuccess(FaceDetector.FACE_STATUS status) {
        Log.d(TAG, "Face [" + status + "]");
        switch (status) {
            case SMILING:
                display.setImage(getResources(), R.drawable.smiling_face);
                break;
            case LEFT_EYE_OPEN_RIGHT_CLOSE:
                display.setImage(getResources(), R.drawable.right_eyes_closed);
                break;
            case RIGHT_EYE_OPEN_LEFT_CLOSE:
                display.setImage(getResources(), R.drawable.left_eyes_closed);
                break;
            default:
                display.setImage(getResources(), R.drawable.neutral_face);
        }
    }
});
Creating the app UI
If you want to create the UI of the Android Things app, you have to add the layout:
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/img"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</android.support.constraint.ConstraintLayout>
Final consideration
At the end of this article, you have hopefully gained knowledge about how to use Firebase ML Kit with Android Things. We have explored how to detect face characteristics using Machine Learning. Firebase ML Kit offers the possibility to test and use Machine Learning without knowing much about it and without spending time and effort building ML models. Using the Face Detection API you can easily build an Android Things app that detects face characteristics.