Father’s Day hacking: xPlatform Edge CV with Flutter + TensorFlow Lite + MobileNet

Mohammed Zaheeruddin Malick
Jun 20, 2022 · 5 min read

It’s Father’s Day weekend. Happy Father’s Day to all…

I had a great start to the day; my babies surprised me with a nice breakfast, a thoughtful gift and cards :) feeling so blessed… It’s been quite a while since I got a good dose of hacking dopamine, so :) I thought I’d hack on a random idea and hopefully inspire my babies to be builders this Father’s Day.

First, I need a random idea… My baby wants an app that can identify all kinds of birds perching in our yard… hmm… that would require a Darwinian level of model training, so I wanted to start small first: perhaps identifying fruits and veggies… as baby steps.

My tool of choice is Flutter. It still has a long way to go for mainstream adoption, but it’s been growing by leaps and bounds, attracting an army of app developers and fostering a strong community…

Let’s get started. What do we need first? The Flutter setup, which is a standard op: https://docs.flutter.dev/get-started/install/macos

Second: we need a way to make Edge CV work. It turns out TensorFlow has a lite version for mobile, and some googling led me to TensorFlow Lite (“tflite”): https://www.tensorflow.org/lite

Third: Does Flutter support tflite bindings? Again, Google to the rescue. It turns out there is a binding, although it’s badly outdated, so I had to struggle a bit to get it working, but it worked nevertheless: https://github.com/shaqian/flutter_tflite

Fourth: Now we need a pre-trained model to get started fast. Googling again, and after some filtering, I found a TFLite model trained on the ImageNet dataset: https://github.com/tensorflow/models/tree/master/research/slim (search for ‘mobilenet_v1_1.0_224’; the 224 refers to the 224×224 input resolution the model expects).

Fifth: Let the hacking begin…

Keeping it simple, let’s build a basic Scaffold-based interface, nothing fancy: initialize the model, let the user select an image from the gallery via an action button and the image picker, run inference, and display the results…

The code is straightforward; I wanted to keep it within 150 lines… the best-written code is always self-documenting…

import 'dart:async';
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:image_picker/image_picker.dart';
import 'package:tflite_maven/tflite.dart';

void main() => runApp(FlutterEdgeCVApp());

class FlutterEdgeCVApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) =>
      MaterialApp(home: FlutterEdgeCVAppWidget());
} // Boilerplate

class FlutterEdgeCVAppWidget extends StatefulWidget {
  @override
  State<FlutterEdgeCVAppWidget> createState() =>
      _FlutterEdgeCVAppWidgetState();
} // Boilerplate

class _FlutterEdgeCVAppWidgetState extends State<FlutterEdgeCVAppWidget> {
  XFile? selectedImage;
  List inference = [];
  bool modelBusy = false;

  @override
  void initState() {
    super.initState();
    modelBusy = true;
    loadModelMobilenet().then((_) {
      if (mounted) setState(() => modelBusy = false);
    }); // Load the MobileNet model before taking any input
  }

  Future selectImageAndInfer() async {
    var image = await ImagePicker().pickImage(source: ImageSource.gallery);
    if (image == null) return;
    if (mounted) setState(() => modelBusy = true);
    inferObject(image);
  } // Select an image from the gallery using the image picker

  Future inferObject(XFile? image) async {
    if (image == null) return;
    recognizeImage(image).then((_) => setState(() {
          selectedImage = image;
          modelBusy = false;
        }));
  } // Pass the selected image to the in-memory model for inference

  Future loadModelMobilenet() async {
    await Tflite.close();
    try {
      final result = await Tflite.loadModel(
          model: "assets/mobilenet_v1_1.0_224.tflite",
          labels: "assets/mobilenet_v1_1.0_224.txt",
          useGpuDelegate: true); // Load the MobileNet model and labels
      print("Loading Mobilenet v1 Model: $result");
    } on PlatformException {
      print('Failed to load Mobilenet v1 Model');
    }
  }

  Future recognizeImage(XFile image) async {
    int startTime = DateTime.now().millisecondsSinceEpoch;
    final inferenceResults = await Tflite.runModelOnImage(
        path: image.path,
        numResults: 6,
        threshold: 0.05,
        imageMean: 127.5,
        imageStd: 127.5); // Run inference on the picture
    setState(() => inference = inferenceResults ?? []);
    int endTime = DateTime.now().millisecondsSinceEpoch;
    print("Done in ${endTime - startTime}ms");
  }

  @override
  Widget build(BuildContext context) {
    final List<Widget> listViewItems = [];

    listViewItems.add(selectedImage != null
        ? Padding(
            padding: const EdgeInsets.all(8.0),
            child: Image.file(
                File.fromUri(Uri.file(selectedImage?.path ?? ""))))
        : Padding(
            padding: const EdgeInsets.all(8.0),
            child: Text("Select an image",
                textAlign: TextAlign.center,
                style: TextStyle(fontSize: 20)))); // UX layout

    TableCell _tableCell(
            {required String text, TextAlign align = TextAlign.end}) =>
        TableCell(
            child: Padding(
                padding: const EdgeInsets.all(2.0),
                child: Text(text,
                    textAlign: align,
                    style: TextStyle(color: Colors.black, fontSize: 21.0))));

    listViewItems.add(Table(
      children: inference.map((result) {
        print(result);
        return TableRow(
          children: [
            _tableCell(text: "${result["label"]} :"),
            _tableCell(
                text: "${(result["confidence"] * 100).toStringAsFixed(2)}%",
                align: TextAlign.start)
          ],
        );
      }).toList(), // Tabulate result classes and confidence scores
    ));

    return Scaffold( // Flutter scaffold boilerplate
      appBar: AppBar(
          title: const Text('Flutter + TensorFlow Lite'),
          backgroundColor: Colors.black),
      body: modelBusy
          ? Center(child: CircularProgressIndicator())
          : ListView(children: listViewItems),
      floatingActionButton: FloatingActionButton(
          backgroundColor: Colors.black,
          onPressed: selectImageAndInfer,
          child: Icon(Icons.image)),
    );
  }
}
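
For reference, each entry returned by runModelOnImage is a map carrying an index, a label and a confidence score (going by the original flutter_tflite plugin’s docs; I’m assuming this binding behaves the same), which is exactly what the table above renders. Picking just the top prediction would be a one-liner:

// Each result from runModelOnImage is a map of the form
// {"index": <int>, "label": <String>, "confidence": <double>}
// (per the original flutter_tflite plugin; assumed identical in this binding).
String topPrediction(List results) {
  if (results.isEmpty) return "No prediction yet";
  final best = results.first; // The plugin returns results sorted by confidence
  return "${best["label"]} (${(best["confidence"] * 100).toStringAsFixed(1)}%)";
}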

And that’s it; it works great on the iOS simulator… well, it was not exactly a breeze. I faced a bunch of build failures, deprecation issues and whatnot. Golden rule: make sure you configure the iOS module in Xcode to use Objective-C as the source to be compiled.

And of course, configure pubspec.yaml (which essentially tells the Flutter build tool to download all the required dependencies):

name: FlutterEdgeCV
description: Edge CV with Flutter and TFlite.

version: 1.0.0+1

environment: ## Boilerplate, upgraded to Flutter 3
  sdk: ">=2.17.1 <3.0.0"

dependencies:
  flutter:
    sdk: flutter

  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^1.0.5
  image_picker: ^0.8.5+3 # Dependency for the image picker
  tflite_maven: ^1.1.5 # TensorFlow Lite binding plugin

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_lints: ^2.0.1

flutter:
  uses-material-design: true

  assets:
    - assets/mobilenet_v1_1.0_224.txt
    - assets/mobilenet_v1_1.0_224.tflite
    # ^^ Model and label assets

Sixth: Try it on the Android emulator… but here I ran out of luck… dang! The Android embedding used by the tflite binding is grossly outdated, so I’d need to port the bindings to the latest standards (let’s park that for now), but I will definitely try running the model on live capture frames on iOS…
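
For that live-capture experiment, the original flutter_tflite plugin documents a runModelOnFrame call that takes raw camera planes. Assuming the binding I’m using keeps the same signature, and leaning on the camera package (which this project doesn’t pull in yet), a rough sketch would look something like this:

import 'package:camera/camera.dart'; // Assumed extra dependency for live frames
import 'package:tflite_maven/tflite.dart';

// Sketch: classify every camera frame as it streams in.
// Assumes the binding keeps flutter_tflite's runModelOnFrame signature
// and that the CameraController has already been initialized elsewhere.
void startLiveInference(CameraController controller) {
  bool busy = false;
  controller.startImageStream((CameraImage frame) async {
    if (busy) return; // Drop frames while the interpreter is busy
    busy = true;
    final results = await Tflite.runModelOnFrame(
        bytesList: frame.planes.map((plane) => plane.bytes).toList(),
        imageHeight: frame.height,
        imageWidth: frame.width,
        imageMean: 127.5,
        imageStd: 127.5,
        numResults: 3,
        threshold: 0.05);
    print(results);
    busy = false;
  });
}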

What’s next: The model needs to be trained with more data and tuned to identify more objects, specifically birds, or at least the finches and hummingbird visitors in my yard!

Hmm… that was quick; I was able to get this up and running in a couple of hours… and it’s been so exciting to get it working on the first attempt. Perhaps having a way to collect labelled ground truth from users, running training in the cloud, and pushing updated models back to the device would drastically improve accuracy and coverage…
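
As a small step in that direction, the model wouldn’t even need to ship as a bundled asset: the original flutter_tflite plugin’s loadModel takes an isAsset flag, so assuming this binding keeps it, a freshly trained model downloaded to local storage could be loaded at runtime. A minimal sketch (the download and versioning plumbing is left out, and path_provider is an assumed extra dependency):

import 'dart:io';

import 'package:path_provider/path_provider.dart'; // Assumed extra dependency
import 'package:tflite_maven/tflite.dart';

// Sketch: load a model file from local storage instead of the asset bundle.
// Assumes the binding keeps flutter_tflite's isAsset flag.
Future loadDownloadedModel(String modelFileName, String labelFileName) async {
  final dir = await getApplicationDocumentsDirectory();
  final modelPath = "${dir.path}/$modelFileName";
  final labelPath = "${dir.path}/$labelFileName";
  if (!File(modelPath).existsSync()) return; // Nothing downloaded yet

  await Tflite.close();
  final result = await Tflite.loadModel(
      model: modelPath,
      labels: labelPath,
      isAsset: false); // Read from the filesystem, not the asset bundle
  print("Loading downloaded model: $result");
}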

Let the hacking continue…
