How to apply Machine Learning to Android using Fritz AI

This article describes how to develop a Machine Learning Android app using Fritz AI. Before diving into the details of how to build such an app, it is useful to describe briefly what the Fritz AI platform is. As you may know, Machine Learning is an interesting topic that is gaining importance and promises to transform several areas, including the way we interact with Android apps.

To experiment with applying Machine Learning to Android using Fritz AI, we will develop an Android app that uses image classification.

Machine Learning is an application of AI that gives a system the capability to accomplish tasks without explicit instructions, learning from data and improving from experience.

You can download the source code here:

What is Fritz AI?

Fritz AI is a Machine Learning platform for Android and iOS that simplifies the development of an Android Machine Learning app. When developing such an app, it is usually necessary to create an ML model that the Android app will use. The process of building an ML model is tedious and difficult. Fritz AI provides a set of built-in, ready-to-use models that speed up the development of a Machine Learning Android app.

The built-in ML models provided by Fritz AI are:

  • Image Labeling
  • Object detection
  • Style transfer
  • Image segmentation
  • Pose estimation

Moreover, it is possible to upload a custom model if we need a specific one.

How to configure Fritz AI

The first step in using the Fritz AI Machine Learning platform is creating a free account and then creating a new Machine Learning Android project, as shown in the picture below:

Machine Learning Android app with Fritz AI

Now, it is possible to add a new project:

Android Machine Learning: adding a new project

You have to provide the project name. Once the project is created, it is necessary to provide the Android project package.

How to set up an Android app using Fritz AI image classification

Once the project is correctly configured in the Fritz AI console, we can set up the Android project to include the Fritz AI SDK. At the project level, it is necessary to modify build.gradle in this way:

allprojects {
    repositories {
        // The Fritz AI Maven repository URL goes here (see the Fritz AI docs)
        maven { url "" }
    }
}

and in the build.gradle at the app level:

dependencies {
    implementation 'ai.fritz:core:3.0.1'
    implementation 'ai.fritz:vision:3.0.1'
}

The last step is registering a service so that the Android app can receive model updates. To do so, it is necessary to add the following to AndroidManifest.xml:

<!-- Service class name taken from the Fritz AI documentation -->
<service
    android:name="ai.fritz.core.FritzCustomModelService"
    android:exported="true"
    android:permission="android.permission.BIND_JOB_SERVICE" />

That’s all. We are now ready to start developing our image classification Android app.

Implementing Machine Learning image classification in Android

To implement an Android app that uses image classification, we have to follow these steps:

  • Configure the SDK in the Android app
  • Label the detected image
  • Handle the camera to capture the image to classify

Configure the SDK in the Android app

The first step is configuring the Fritz AI SDK in our Android app so that we can use it later, when we need to label the captured image. Let us create an activity and add the following lines in the onCreate() method:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Use the API key obtained from the Fritz AI console during project setup
    Fritz.configure(this, "your_fritz.ai_key");
}

Notice that it is necessary to configure Fritz using the API key we got during the project setup in the console. The next step is downloading the model from the cloud:

private void downloadModel() {
    FritzManagedModel model = new ImageLabelManagedModel();
    Log.d(TAG, "Download model...");
    // loadPredictor call as documented by the Fritz AI Vision SDK
    FritzVision.ImageLabeling.loadPredictor(model,
            new PredictorStatusListener<FritzVisionLabelPredictor>() {
                @Override
                public void onPredictorReady(FritzVisionLabelPredictor predictor) {
                    Log.d(TAG, "Model downloaded");
                    MainActivity.this.predictor = predictor;
                }
            });
}

In this case, the Android app uses ImageLabelManagedModel because we want to label the image using Machine Learning. The app layout is quite simple: it contains a TextureView that shows the camera preview and a button that triggers the classification:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent" android:layout_height="match_parent">
    <TextureView android:id="@+id/preview"
        android:layout_width="match_parent" android:layout_height="match_parent" />
    <Button android:id="@+id/labelBtn"
        android:layout_width="wrap_content" android:layout_height="wrap_content"
        android:layout_alignParentBottom="true" android:layout_centerHorizontal="true"
        android:text="Label it" />
</RelativeLayout>

The TextureView will hold the image stream coming from the camera, while the button will be used to label the image, as we will see later.

How to use Fritz AI to classify an image

To classify the image, once we have an image from the camera, it is necessary to create an instance of FritzVisionImage. Let us suppose, for now, that we have an instance of ImageReader that holds the image captured from the camera. It is necessary to convert it into a bitmap and then use the predictor to label the image:

Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
// Copy the pixel data out of the buffer; without this the array stays empty
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
FritzVisionImage fvi = FritzVisionImage.fromBitmap(bitmapImage);
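One detail to be careful about in the snippet above: allocating the byte[] is not enough, the data must actually be read out of the ByteBuffer with buffer.get(). A minimal JVM-only sketch of the same pattern, with no Android classes involved:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BufferCopy {
    public static byte[] toByteArray(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes); // copies the data; without this call the array stays zero-filled
        return bytes;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3});
        System.out.println(Arrays.toString(toByteArray(buf))); // [1, 2, 3]
    }
}
```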

At this point, the predictor that will classify the image is ready to be used. Now, the Android app can retrieve a list of labels together with their confidence:

FritzVisionLabelResult labels = predictor.predict(fvi);
List<FritzVisionLabel> labelList = labels.getVisionLabels();
if (labelList != null && labelList.size() > 0) {
    FritzVisionLabel label = labelList.get(0);
    System.out.println("Label [" + label.getText() + "]");
}

The app gets the first element of the list, which has the highest confidence. Now we can describe how to use the camera in Android to capture an image.
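If the result list were not already sorted by confidence, selecting the most confident label would be a small exercise in itself. A minimal JVM-only sketch of that selection logic, using a plain Map.Entry as a stand-in for the Fritz label type (the names here are illustrative, not part of the SDK):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class BestLabel {
    // Stand-in for a vision label: label text paired with a confidence score
    static Map.Entry<String, Float> best(List<Map.Entry<String, Float>> labels) {
        return Collections.max(labels,
                (a, b) -> Float.compare(a.getValue(), b.getValue()));
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Float>> labels = Arrays.asList(
                new SimpleEntry<>("cat", 0.72f),
                new SimpleEntry<>("dog", 0.91f),
                new SimpleEntry<>("bird", 0.13f));
        System.out.println(best(labels).getKey()); // dog
    }
}
```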

Handle the Camera in Android

Once the Machine Learning engine is ready, we have to focus our attention on how to capture the image. As stated before, the TextureView will hold the image stream coming from the camera. It is necessary to add a listener to the TextureView to know when we can use the camera:

TextureView.SurfaceTextureListener surfaceListener = new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        Log.d(TAG, "Surface available");
        openCamera();
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return false;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
};

When the surface is ready, the app can open the camera and start using it. Please notice that the listener must be attached to the TextureView:

// 'preview' is the id of the TextureView defined in the layout
preview = (TextureView) findViewById(R.id.preview);
preview.setSurfaceTextureListener(surfaceListener);

Now it is possible to open the camera:

private void openCamera() {
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        // We get the first available camera
        cameraId = manager.getCameraIdList()[0];
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
        StreamConfigurationMap streamConfMap =
                characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        imageDimension = streamConfMap.getOutputSizes(SurfaceTexture.class)[0];
        // We can open the camera now
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice camera) {
                Log.i(TAG, "Camera opened");
                cameraDevice = camera;
                createPreview();
            }

            @Override
            public void onDisconnected(@NonNull CameraDevice camera) {}

            @Override
            public void onError(@NonNull CameraDevice camera, int error) {
                Log.e(TAG, "Error opening the camera");
            }
        }, null);
    } catch (CameraAccessException cae) {
        // Let us handle the error
    } catch (SecurityException se) {
        // Thrown if the CAMERA permission has not been granted
    }
}

When the camera is opened and ready to use, the Android app can start streaming images:

private void createPreview() {
    SurfaceTexture surfaceTexture = preview.getSurfaceTexture();
    surfaceTexture.setDefaultBufferSize(imageDimension.getWidth(), imageDimension.getHeight());
    Surface surface = new Surface(surfaceTexture);
    try {
        previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        previewRequestBuilder.addTarget(surface);
        cameraDevice.createCaptureSession(Arrays.asList(surface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        cameraSession = cameraCaptureSession;
                        // Flash is automatically enabled when necessary.
                        // Finally, we start displaying the camera preview.
                        CaptureRequest previewRequest = previewRequestBuilder.build();
                        try {
                            cameraSession.setRepeatingRequest(
                                    previewRequest, new CameraCaptureSession.CaptureCallback() {
                                        @Override
                                        public void onCaptureProgressed(
                                                final CameraCaptureSession session,
                                                final CaptureRequest request,
                                                final CaptureResult partialResult) {}

                                        @Override
                                        public void onCaptureCompleted(
                                                final CameraCaptureSession session,
                                                final CaptureRequest request,
                                                final TotalCaptureResult result) {}
                                    }, backgroundHandler);
                        } catch (CameraAccessException cae) {}
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                        Log.e(TAG, "Configuration failed");
                    }
                }, null);
    } catch (CameraAccessException cae) {}
}

Classifying the image captured by the camera

In this last step, it is necessary to capture the image and classify it using Machine Learning. For this purpose, the Android app uses the button defined in the previous layout:

// 'labelBtn' is the id of the Button defined in the layout
labelBtn = (Button) findViewById(R.id.labelBtn);
labelBtn.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        takePicture();
    }
});
In the takePicture() method, the Android app takes the picture and labels it (as shown above). If you want the full code, you can download it from GitHub.


At the end of this article, you have hopefully gained the knowledge to build a Machine Learning Android app using Fritz AI. You have discovered how to use Machine Learning image classification to label an image captured by the camera.
