Firebase Face Detection: How to use Firebase ML kit Face Detection

This article describes how to build a face-feature detecting app using the Firebase Face Detection API (Firebase ML Kit) on Android / Android Things. The idea comes from the Google project called “Android Things expression flower”, which detects face characteristics (face classification) using machine vision based on Firebase ML Kit and displays them on an LCD display using emoticons.

To build this project you will need:

  • Raspberry Pi
  • Raspberry Camera
  • LCD Display (SSD1306) – optional

You can apply the same steps to a smartphone using Android.


The final result is shown here:

Firebase ML Kit face detection with Android Things

Using machine learning it is possible to detect not only faces but also objects, through machine learning image classification.

A brief introduction to Firebase face detection (Firebase ML Kit)

Firebase ML Kit is a mobile SDK that helps us experiment with machine learning technologies. TensorFlow and Cloud Vision make it easier to develop mobile apps that use machine learning; however, the machine learning models behind them require time and effort. Firebase ML Kit is Google's effort to make machine learning easier to use and more accessible to people who do not know much about machine learning technologies, providing pre-trained models that can be used when developing Android and Android Things apps.

This project will show how easy it is to add machine learning capabilities to an Android / Android Things app without knowing much about machine learning and without building and optimizing a machine learning model.

What is the Face Detection API in Firebase ML Kit

Using the Firebase ML Kit Face Detection API, it is possible to detect faces in a picture or from a camera. In this Android Things project, we will use a camera connected to the Raspberry Pi. Once a face is detected, we can extract face features such as rotation, size and so on. Moreover, using the Face Detection API we can go deeper in this face analysis, retrieving:

  • Landmarks: points of interest of the face, such as the left eye, right eye, nose base and so on.
  • Contours: points that follow the face shape.
  • Classification: the capability to detect specific face characteristics. For example, it is possible to detect whether an eye is closed or open, or whether the face is smiling.

Moreover, using the Face Detection API it is possible to track faces in video sequences. As you can see, these are very interesting features that open new scenarios in app development.

In this project, as stated before, we will use face classification to represent the face characteristics on an LCD display. To do it, the app will use these images:

  • Neutral face
  • Right eye closed
  • Smiling face
  • Left eye closed

How to use Firebase Face detection API

Now that we know what the Firebase Face Detection API is, it is time to start using it by building the Android Things app.

Before implementing our app, it is necessary to configure a new project using the Firebase Console. This is a very simple step. In the end, you will download a google-services.json file that must be added to your Android Things project.

Setup Firebase ML Kit

Once the project is configured, it is necessary to enable the Face Detection API and add the right dependencies to our project:

dependencies {
    // versions current when this article was written; use the latest available
    implementation 'com.google.firebase:firebase-core:16.0.9'
    implementation 'com.google.firebase:firebase-ml-vision:20.0.0'
}

Next, let us add these lines to our Manifest.xml, so that the app can write captured images and Google Play Services downloads the face model at install time:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="face" />

How to use face classification to detect face characteristics

It is time to start using Firebase ML Kit, and in more detail the Face Detection API, in this Android Things app. There are two steps to follow in order to detect face characteristics such as a smile or a closed left or right eye. These steps are shown below:

  • Use Camera to capture the picture
  • Pass the captured image to Firebase ML Kit to detect the face

By now, we can suppose that the image is captured somehow and focus our attention on how to use the Firebase ML Kit Face Detection API to detect face characteristics.

Configuring Firebase Face Detection API

Before applying face detection to an image, it is necessary to initialize Firebase ML Kit and configure the Face Detection API. In the MainActivity, and in more detail in the onCreate method, add this line:

FirebaseApp.initializeApp(this);
To configure the face detector it is necessary to use FirebaseVisionFaceDetectorOptions (more info here) in this way:

 FirebaseVisionFaceDetectorOptions.Builder builder =
                new FirebaseVisionFaceDetectorOptions.Builder();

Next, it is necessary to add the configuration options:

FirebaseVisionFaceDetectorOptions options =
         new FirebaseVisionFaceDetectorOptions.Builder()
             .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
             .build();

The Android Things app is interested in face classification, as stated before, so we enable this option. Moreover, by default, face detection runs in fast mode (enabled by default).


Finally, we can build the detector using the options:

FirebaseVisionFaceDetector detector = FirebaseVision.getInstance().getVisionFaceDetector(options);

Once the detector is ready and correctly configured, we can start detecting face characteristics (or face classification) using a captured image:

 FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromBitmap(displayBitmap);
 Task<List<FirebaseVisionFace>> result = detector
     .detectInImage(firebaseImage)
     .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
       @Override
       public void onSuccess(List<FirebaseVisionFace> faces) {
         for (FirebaseVisionFace face : faces) {
           Log.d(TAG, "****************************");
           Log.d(TAG, "face ["+face+"]");
           Log.d(TAG, "Smiling Prob ["+face.getSmilingProbability()+"]");
           Log.d(TAG, "Left eye open ["+face.getLeftEyeOpenProbability()+"]");
           Log.d(TAG, "Right eye open ["+face.getRightEyeOpenProbability()+"]");
           checkFaceExpression(face);
         }
       }
     });

There are some aspects to notice:

  1. Using the displayBitmap, we build the firebaseImage, the image where we want to detect face characteristics.
  2. The app invokes the detectInImage method to start detecting the face (the app uses face classification).
  3. The app adds a listener to get notified when the face characteristics are available.
  4. For each detected face, the app reads the classification probabilities.
  5. Finally, using the probabilities retrieved before, the Android Things app controls the LCD display, showing the matching emoticon.

The checkFaceExpression method classifies the face, determining the face characteristics. In the end, it notifies the result to the caller (as we will see later):

private void checkFaceExpression(FirebaseVisionFace face) {
  // -1 means the probability could not be computed
  if (face.getSmilingProbability() > 0.5) {
    Log.d(TAG, "**** Smiling ***");
  } else if (face.getLeftEyeOpenProbability() < 0.2 &&
      face.getLeftEyeOpenProbability() != -1 &&
      face.getRightEyeOpenProbability() > 0.5) {
    // left eye closed, right eye open
    Log.d(TAG, "Right Open..");
  } else if (face.getRightEyeOpenProbability() < 0.2 &&
      face.getRightEyeOpenProbability() != -1 &&
      face.getLeftEyeOpenProbability() > 0.5) {
    // right eye closed, left eye open
    Log.d(TAG, "Left Open..");
  }
}

How to capture the image using Camera in Android Things

By now, we have supposed that the image was already captured. This paragraph shows how to capture it using a camera connected to the Raspberry Pi. This process is quite simple and it is the same we use when implementing an Android app. It can be broken into these steps:

  • Open the camera
  • Create a capture session
  • Handle the image

Open the camera

In this step, the Android Things app initializes the camera. Before using the camera it is necessary to add the right permission to the Manifest.xml :

<uses-permission android:name="android.permission.CAMERA"/>

Moreover, let us create a new class that will handle all the details related to face detection. We call it FaceDetector, and its constructor is:

 public FaceDetector(Context ctx, ImageView img, Looper looper) {
   this.ctx = ctx;
   this.img = img;
   this.looper = looper;
 }

We will see the role of the ImageView later. Next, check if the camera is present and open it:

private void openCamera(CameraManager camManager) {
  try {
    String[] camIds = camManager.getCameraIdList();
    if (camIds.length < 1) {
      Log.e(TAG, "Camera not available");
      return;
    }
    camManager.openCamera(camIds[0], new CameraDevice.StateCallback() {
      public void onOpened(@NonNull CameraDevice camera) {
        Log.i(TAG, "Camera opened");
      }
      public void onDisconnected(@NonNull CameraDevice camera) { }
      public void onError(@NonNull CameraDevice camera, int error) {
        Log.e(TAG, "Error ["+error+"]");
      }
    }, backgroundHandler);
  } catch (CameraAccessException cae) {
    Log.e(TAG, "Camera access error", cae);
  }
}

where the CameraManager is retrieved from the Context:

CameraManager cameraManager = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);

The code is quite simple: it is necessary to implement a listener to get notified when the camera is opened or an error occurs. That's all.

Create a capture session

The next step is creating a capture session so that the Android Things app can capture the image. Let us add a new method:

private void startCamera(CameraDevice cameraDevice) {
    try {
        final CaptureRequest.Builder requestBuilder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        requestBuilder.addTarget(imageReader.getSurface());
        cameraDevice.createCaptureSession(
                Collections.singletonList(imageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        Log.i(TAG, "Camera configured..");
                        CaptureRequest request = requestBuilder.build();
                        try {
                            session.setRepeatingRequest(request, null, backgroundHandler);
                        } catch (CameraAccessException cae) {
                            Log.e(TAG, "Camera session error");
                        }
                    }
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) { }
                }, backgroundHandler);
    } catch (CameraAccessException cae) {
        Log.e(TAG, "Camera Access Error");
    }
}

In this method, the Android Things app starts a capture session and gets notified when the image is captured.

Handle the image

The last step is handling the captured image. This image will be sent to Firebase ML Kit to get the face characteristics. To this purpose, it is necessary to implement a callback method:

 public void onImageAvailable(ImageReader reader) {
   Image image = reader.acquireLatestImage();
   // We have to convert the image before
   // using it in Firebase ML Kit
   ...
   image.close();
 }
That’s all. The image is ready and the camera has captured it so that we can start detecting face characteristics.
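The conversion hinted at in the comment above usually means repacking the camera's separate YUV planes into a single buffer (for example NV21) before building a Bitmap or a FirebaseVisionImage. A minimal plain-Java sketch of that packing, under the simplifying assumptions of a full-resolution Y plane, quarter-resolution U/V planes and no row padding (the real android.media.Image planes expose ByteBuffers with row strides):

```java
class YuvPacker {

    // Packs separate Y, U and V planes into a single NV21 buffer:
    // the full Y plane first, then interleaved V/U samples.
    static byte[] toNv21(byte[] y, byte[] u, byte[] v) {
        if (u.length != v.length) {
            throw new IllegalArgumentException("U and V planes must have the same size");
        }
        byte[] nv21 = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];      // V comes first in NV21
            nv21[y.length + 2 * i + 1] = u[i];  // then U
        }
        return nv21;
    }
}
```

On Android, the resulting buffer can then be wrapped (for instance via YuvImage) to obtain the Bitmap passed to FirebaseVisionImage.fromBitmap.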

Displaying face detected characteristics using Android Things and LCD

In this step, we will show how to display face characteristics retrieved by Firebase ML Kit. In this project, Raspberry Pi is connected to an LCD display (SSD1306) that will show the face characteristics. In this way, the Android Things app can control a device using the face detected.

Before starting it is useful to show how to connect Raspberry Pi to SSD1306:

Wiring Android Things (Raspberry Pi) to the SSD1306

As you can notice, the connection is very simple. To handle the LCD display it is necessary to add the right driver to our Android Things project. In the build.gradle add this line:

// SSD1306 contrib driver (check for the latest version)
implementation 'com.google.android.things.contrib:driver-ssd1306:1.1'

To handle all the details related to the LCD, let us create a new class called DisplayManager. The purpose of this class is to show the right image according to the detected face characteristics. We can represent these characteristics using the four images described previously. These images must be placed in the drawable (nodpi) folder.

In order to show the image matching the detected face characteristics, we will add this method to the class:

public void setImage(Resources res, int resId) {
  Bitmap bmp = BitmapFactory.decodeResource(res, resId);
  BitmapHelper.setBmpData(display, 0, 0, bmp, true);
  try {
    display.show();
  } catch (IOException ioe) {
    Log.e(TAG, "Error refreshing the display", ioe);
  }
}
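The mapping from a detected status to the image to display can also be kept in a small, testable lookup instead of being spread across a switch statement. A sketch of that idea, using plain ints as stand-ins for the R.drawable ids (class and enum names here are hypothetical, not the article's code):

```java
class FaceImageMap {

    enum FaceStatus { SMILING, LEFT_EYE_CLOSED, RIGHT_EYE_CLOSED, NEUTRAL }

    private final int smiling, leftClosed, rightClosed, neutral;

    FaceImageMap(int smiling, int leftClosed, int rightClosed, int neutral) {
        this.smiling = smiling;
        this.leftClosed = leftClosed;
        this.rightClosed = rightClosed;
        this.neutral = neutral;
    }

    // Returns the drawable id for a status, defaulting to the neutral face
    int imageFor(FaceStatus status) {
        switch (status) {
            case SMILING:          return smiling;
            case LEFT_EYE_CLOSED:  return leftClosed;
            case RIGHT_EYE_CLOSED: return rightClosed;
            default:               return neutral;
        }
    }
}
```

In the app, the constructor would receive the real R.drawable ids and setImage would be called with the result of imageFor.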

Final step

In this last step, we will glue everything together so that the app works correctly. To do it, it is necessary to add a listener so that the MainActivity is notified when the face characteristics are detected. Let us define the listener in the FaceDetector:

 public interface CameraListener {
   void onError();
   void onSuccess(FACE_STATUS status);
 }

 // Face status, one value per image shown by the app
 public enum FACE_STATUS {
   SMILING, LEFT_EYE_CLOSED, RIGHT_EYE_CLOSED, NEUTRAL
 }

Now in the MainActivity, we will implement the listener:

 FaceDetector fc = new FaceDetector(this, img, getMainLooper());
 fc.setListener(new FaceDetector.CameraListener() {
    public void onError() {
      // Handle error
    }
    public void onSuccess(FaceDetector.FACE_STATUS status) {
       Log.d(TAG, "Face ["+status+"]");
       switch (status) {
         case SMILING:
             // drawable name assumed; use your smiling image here
             display.setImage(getResources(), R.drawable.smiling_face);
             break;
         case RIGHT_EYE_CLOSED:
             display.setImage(getResources(), R.drawable.right_eyes_closed);
             break;
         case LEFT_EYE_CLOSED:
             display.setImage(getResources(), R.drawable.left_eyes_closed);
             break;
         default:
             display.setImage(getResources(), R.drawable.neutral_face);
       }
    }
 });

Creating the app UI

To create the UI of the Android Things app, you have to add the layout:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/img"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</FrameLayout>

(The root element here is a simple FrameLayout; any container holding the ImageView works.)

Final consideration

At the end of this article, you have hopefully gained some knowledge about how to use Firebase ML Kit with Android Things. We have explored how to detect face characteristics using machine learning. Firebase ML Kit offers the possibility to test and use machine learning without knowing much about it and without spending time and effort building ML models. Using the Face Detection API, you can easily build an Android Things app that detects face characteristics.
