fluttervisionsdkplugin 1.3.13

fluttervisionsdkplugin: ^1.3.13

A Flutter plugin package for integrating the Flutter Vision SDK into your Flutter application for scanning barcodes and QR codes.

Flutter Vision SDK Plugin #

A Flutter plugin package for seamless integration of the Flutter Vision SDK into your Flutter applications. This package provides features for scanning barcodes, QR codes, and text, along with OCR (Optical Character Recognition) support. Customize scan and capture modes to meet your specific needs.

Features #

  • Scan barcodes, QR codes, and text
  • Optical Character Recognition (OCR) support
  • Flexible customization of scan and capture modes
  • NEW: Model Management API - Fine-grained control over OCR model lifecycle
    • Download models separately from loading them
    • Pre-download during onboarding for offline-first UX
    • Query downloaded/loaded models with detailed metadata
    • Check for updates before downloading
    • Explicit memory management via load/unload

Getting started #

Import the package and use it in your app:

import 'package:fluttervisionsdkplugin/visionsdk.dart';
import 'package:fluttervisionsdkplugin/visioncamerawidget.dart';
import 'package:fluttervisionsdkplugin/ondeviceocrmanager.dart';
import 'package:fluttervisionsdkplugin/model_manager.dart'; // NEW: Model Management API

Usage #

It is important to initialize VisionSDK before accessing any of its other classes. You can initialize VisionSDK by using the following code:

VisionSDK().initialize(Environment.sandbox);

Here's an example of how to use the VisionCameraWidget in your Flutter application:

class VisionSDKView extends StatefulWidget {
  const VisionSDKView({super.key});

  @override
  State<VisionSDKView> createState() => VisionSDKState();
}

class VisionSDKState extends State<VisionSDKView> {
  MyPluginToFlutterCommunicator receiver = MyPluginToFlutterCommunicator();
  FlutterToPluginCommunicator? sender;

  @override
  void initState() {
    super.initState();
    VisionSDK().initialize(Environment.sandbox);
  }

  @override
  Widget build(BuildContext context) {
    return VisionCameraWidget(
      listener: receiver,
      onViewCreated: (FlutterToPluginCommunicator sender) {
        this.sender = sender;
        this.sender?.startCamera();
      },
    );
  }
}

class MyPluginToFlutterCommunicator extends PluginToFlutterCommunicator {
  @override void onCameraStarted() {}
  @override void onCameraStopped() {}
  @override void onScanError(String error) {}
  @override void onCodesReceived(List<String> codes) {}
  @override void onDetectionResult(bool isText, bool isBarcode, bool isQrCode) {}
  @override void onImageCaptured(Uint8List byteArrayImage, List<String> codes) {}
  @override void onOnlineSLResult(Map<String, dynamic> result) {}
  @override void onOnlineBOLResult(Map<String, dynamic> result) {}
}

class MainOnDeviceOCRManagerListener implements OnDeviceOCRManagerListener {
  @override void onOnDeviceConfigureProgress(double progress) {}
  @override void onOnDeviceConfigurationComplete() {}
  @override void onOnDeviceOCRResult(Map<String, dynamic> result) {}
  @override void onError(String error) {}
  @override void onReportResult(Map<ReportResult, String> reportResult) {}
}

You can set the capture mode to Barcode, QR Code, or OCR as follows:

  void setCaptureMode(FlutterToPluginCommunicator? sender) {
    sender?.setCaptureModeBarcode();
    // OR
    sender?.setCaptureModeQrCode();
    // OR
    sender?.setCaptureModeOCR();
  }

You can set the scan mode to Auto or Manual as follows:

  void setScanMode(FlutterToPluginCommunicator? sender) {
    sender?.setScanMode(1); // Auto
    // OR
    sender?.setScanMode(2); // Manual
  }

To detect a barcode or QR code in Manual mode, use the following function:

  void capture(FlutterToPluginCommunicator? sender) {
    sender?.capturePhoto();
  }

  // You will get results in the following callback:
  @override void onCodesReceived(List<String> codes) {
    print(codes);
  }  

To perform OCR in Manual mode, use the following function:

  void capture(FlutterToPluginCommunicator? sender) {
    sender?.capturePhoto();
  }

  // You will get the captured image as a byte array, along with any barcodes detected in that image, in the following callback:
  @override void onImageCaptured(Uint8List byteArrayImage, List<String> codes) {

  }  

Shipping Label API #

After an image is captured, you can send it to our cloud service for further logistics processing like extracting data from a shipping label. You can do that using the following methods:

  void makeShippingLabelAPICall(FlutterToPluginCommunicator? sender, Uint8List byteArrayImage, List<String> codes) {
    sender?.callShippingLabelApi(
          apiKey: 'YOUR_API_KEY_HERE',
          // OR
          token: 'YOUR_TOKEN_HERE',
          image: byteArrayImage,
          barcodes: codes
        );
  }

  // You will get the response from the API in the following callbacks:
  @override void onOnlineSLResult(Map<String, dynamic> result) {

  }

  // Or in case of any error:
  @override void onScanError(String error) {

  }

Bill of Lading API #

After an image is captured, you can send it to our cloud service for further logistics processing like extracting data from a bill of lading. You can do that using the following methods:

  void makeBillOfLadingAPICall(FlutterToPluginCommunicator? sender, Uint8List byteArrayImage, List<String> codes) {
    sender?.callBolApi(
          apiKey: 'YOUR_API_KEY_HERE',
          // OR
          token: 'YOUR_TOKEN_HERE',
          image: byteArrayImage,
          barcodes: codes);
  }

  // You will get the response from the API in the following callbacks:
  @override void onOnlineBOLResult(Map<String, dynamic> result) {

  }

  // Or in case of any error:
  @override void onScanError(String error) {

  }

Item Label API #

After an image is captured, you can send it to our cloud service for further logistics processing like extracting data from an item label. You can do that using the following methods:

  void makeItemLabelAPICall(FlutterToPluginCommunicator? sender, Uint8List byteArrayImage, List<String> codes) {
    sender?.callItemLabelApi(
          apiKey: 'YOUR_API_KEY_HERE',
          // OR
          token: 'YOUR_TOKEN_HERE',
          image: byteArrayImage);
  }

  // You will get the response from the API in the following callbacks:
  @override void onOnlineItemLabelResult(Map<String, dynamic> result) {

  }

  // Or in case of any error:
  @override void onScanError(String error) {

  }

Document Classification API #

After an image is captured, you can send it to our cloud service for identifying the class of the document in that image. You can do that using the following methods:

  void makeDocumentClassificationAPICall(FlutterToPluginCommunicator? sender, Uint8List byteArrayImage, List<String> codes) {
    sender?.callDocumentClassificationApi( // method name assumed by analogy with the other API calls; verify against the plugin's API reference
          apiKey: 'YOUR_API_KEY_HERE',
          // OR
          token: 'YOUR_TOKEN_HERE',
          image: byteArrayImage);
  }

  // You will get the response from the API in the following callbacks:
  @override void onOnlineDocumentClassificationResult(Map<String, dynamic> result) {

  }

  // Or in case of any error:
  @override void onScanError(String error) {

  }

On-Device Shipping Label #

After an image is captured, you can extract shipping label information from it using our on-device, AI-powered image processing capabilities, without requiring an internet connection. To do that, first call the following function to load the AI-related files before image processing begins:

  void configureOnDeviceSLModel(OnDeviceOCRManagerCommunicator? communicator) {
    communicator?.configureOnDeviceOCR(
          apiKey: 'YOUR_API_KEY_HERE',
          // OR
          token: 'YOUR_TOKEN_HERE',
          modelClass: ModelClass.shippingLabel,
          modelSize: ModelSize.large);
  }

Important Note:

ModelClass.shippingLabel is supported with the ModelSize.micro and ModelSize.large options.

ModelClass.itemLabel is supported with the ModelSize.large option.

ModelClass.documentClassification is supported with the ModelSize.large option.
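
As a convenience, you might encode these constraints in a small helper of your own and check it before calling configureOnDeviceOCR. This is a hypothetical sketch (not part of the SDK), using only the ModelClass and ModelSize values listed above and the plugin imports shown in Getting started:

// Hypothetical helper (not part of the SDK): maps each ModelClass to the
// ModelSize values listed in the note above.
const supportedModelSizes = {
  ModelClass.shippingLabel: [ModelSize.micro, ModelSize.large],
  ModelClass.itemLabel: [ModelSize.large],
  ModelClass.documentClassification: [ModelSize.large],
};

bool isSupportedCombination(ModelClass modelClass, ModelSize modelSize) =>
    supportedModelSizes[modelClass]?.contains(modelSize) ?? false;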

You will receive configuration progress and completion callbacks in the following functions:

  @override void onOnDeviceConfigureProgress(double progress) {
    // Progress goes from 0.0 to 1.0
  }

  @override void onOnDeviceConfigurationComplete() {
    // At this point, model configuration has been completed.
  }

  // Or in case of any error:
  @override void onError(String error) {

  }

After model configuration has completed, you can pass the captured image bytes to the following function to extract shipping label information from it:

  void getLabelInfoOffline(OnDeviceOCRManagerCommunicator? communicator, Uint8List byteArrayImage, List<String> codes) {
    communicator?.getPredictions(byteArrayImage, codes);
  }

  // You will get the response in the following callback:
  @override void onOnDeviceOCRResult(Map<String, dynamic> result) {
    
  }

  // Or in case of any error:
  @override void onError(String error) {

  }

Report an issue #

VisionSDK contains an internal error-reporting mechanism for any issues it encounters. Additionally, if you get a response from the on-device models that you consider incorrect, you can report it using the following function:

void reportAnIssue(
  OnDeviceOCRManagerCommunicator? communicator, {
  String? apiKey,
  String? token,
  required ModelClass modelClass,
  required ModelSize modelSize,
  required String report,
  Map<String, dynamic>? customData,
  String? base64ImageToReportOn,
}) {
  communicator?.reportAnIssue(
    apiKey: apiKey,
    token: token,
    modelClass: modelClass,
    modelSize: modelSize,
    report: report,
    customData: customData,
    base64ImageToReportOn: base64ImageToReportOn,
  );
}

  // You will get the result in the following callback:
  @override void onReportResult(Map<ReportResult, String> reportResult) {
    switch (reportResult.keys.first) {
      
      case ReportResult.successful:
        // Case where report was submitted successfully.
        break;
        
      case ReportResult.savedForLater:
        // Case where report was saved to be submitted later.
        // NOTE: Saved reports will be submitted by VisionSDK automatically.
        break;
        
      case ReportResult.failed:
        // Case where report submission faced an error. These reports need to be submitted again.
        final errorMessage = reportResult.values.first;
        break;
        
    }
  }

Release Resources #

After the client app is done with on-device processing, it should release the resources by calling the following method:

void releaseResources(OnDeviceOCRManagerCommunicator? communicator) {
  communicator?.release();
}
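
For example, if the communicator is held in a StatefulWidget's state, releasing it in dispose() is a natural place to do this. A minimal sketch (the widget and field names are illustrative, assuming the usual Flutter and plugin imports):

// Hypothetical screen that owns an OnDeviceOCRManagerCommunicator.
class OnDeviceOCRScreen extends StatefulWidget {
  const OnDeviceOCRScreen({super.key});

  @override
  State<OnDeviceOCRScreen> createState() => OnDeviceOCRScreenState();
}

class OnDeviceOCRScreenState extends State<OnDeviceOCRScreen> {
  OnDeviceOCRManagerCommunicator? communicator;

  @override
  void dispose() {
    // Release the on-device OCR resources when this screen is disposed.
    communicator?.release();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}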

Model Management API (NEW) #

The Model Management API provides fine-grained control over OCR model lifecycle. This is particularly useful for:

  • Pre-downloading models during onboarding for an offline-first experience
  • Memory management by loading/unloading models on demand
  • Background updates by checking for model updates before downloading
  • Querying model state with detailed metadata

Initialize ModelManager #

Before using ModelManager, you must initialize it with your API key or token:

import 'package:fluttervisionsdkplugin/model_manager.dart';

// Initialize with builder pattern callback
await ModelManager.initialize((builder) {
  builder
    .enableLogging(true)
    .lifecycleListener(MyModelLifecycleListener());
});

// Get singleton instance after initialization
final modelManager = ModelManager.getInstance();

Download a Model #

Download a model without loading it into memory:

// Define the model to download
final module = OCRModule(
  modelClass: ModelClass.shippingLabel,
  modelSize: ModelSize.large,
);

// Download the model with progress callback
await modelManager.downloadModel(
  module: module,
  apiKey: 'YOUR_API_KEY',
  onProgress: (progress) {
    print('Progress: ${progress.progressPercent}%');
  },
);

Load a Model #

Load a downloaded model into memory for inference:

// Load with optional execution provider (Android only)
await modelManager.loadModel(
  module: module,
  apiKey: 'YOUR_API_KEY',
  executionProvider: ExecutionProvider.cpu, // Optional: cpu, nnapi, or xnnpack
);

Unload a Model #

Free memory by unloading a model that's no longer needed:

await modelManager.unloadModel(module);

Query Downloaded Models #

Find all models that have been downloaded:

final downloadedModels = await modelManager.findDownloadedModels();

for (final info in downloadedModels) {
  print('Model: ${info.module.modelClass} - ${info.module.modelSize}');
  print('Version: ${info.version}');
  print('Is Loaded: ${info.isLoaded}');
}

Query Loaded Models #

Find all models currently loaded in memory:

final loadedModels = await modelManager.findLoadedModels();

Check for Model Updates #

Check if a newer version of a model is available:

final updateInfo = await modelManager.checkModelUpdates(
  module: module,
  apiKey: 'YOUR_API_KEY',
);

if (updateInfo.updateAvailable) {
  print('Update available for ${module.modelClass}');
  // Download the update
  await modelManager.downloadModel(module: module, apiKey: 'YOUR_API_KEY');
}

Delete a Model #

Remove a downloaded model from storage:

await modelManager.deleteModel(module);

Check Model States #

// Check if a model is downloaded
final isDownloaded = await modelManager.isModelDownloaded(module);

// Check if a model is loaded in memory
final isLoaded = await modelManager.isModelLoaded(module);

Cancel Download #

Cancel an ongoing model download:

await modelManager.cancelDownload(module);

Lifecycle Listener #

Monitor model lifecycle events with a listener:

class MyModelLifecycleListener extends ModelLifecycleListener {
  @override
  void onDownloadStarted(OCRModule module) {
    print('Download started: ${module.modelClass}');
  }

  @override
  void onDownloadCompleted(OCRModule module) {
    print('Download completed: ${module.modelClass}');
  }

  @override
  void onDownloadFailed(OCRModule module, ModelException exception) {
    print('Download failed: ${exception.message}');
  }

  @override
  void onDownloadCancelled(OCRModule module) {
    print('Download cancelled: ${module.modelClass}');
  }

  @override
  void onModelLoaded(OCRModule module) {
    print('Model loaded: ${module.modelClass}');
  }

  @override
  void onModelUnloaded(OCRModule module) {
    print('Model unloaded: ${module.modelClass}');
  }

  @override
  void onModelDeleted(OCRModule module) {
    print('Model deleted: ${module.modelClass}');
  }
}

Or use the convenience callback class:

final listener = ModelLifecycleCallbacks(
  onDownloadStarted: (module) {
    print('Download started: ${module.modelClass}');
  },
  onDownloadCompleted: (module) {
    print('Download completed: ${module.modelClass}');
  },
  onDownloadFailed: (module, exception) {
    print('Download failed: ${exception.message}');
  },
);

Make Predictions with Specific Model #

Use makePredictionWithModule to run OCR with a model loaded via ModelManager:

// First, ensure the model is loaded via ModelManager
await modelManager.loadModel(module: module, apiKey: 'YOUR_API_KEY');

// Then make predictions using that specific module
communicator?.makePredictionWithModule(
  modelClass: module.modelClass,
  modelSize: module.modelSize,
  byteArrayImage: imageData,
  barcodes: barcodes,
);

Error Handling #

The API uses typed exceptions for error handling:

try {
  await modelManager.downloadModel(module: module, apiKey: 'YOUR_API_KEY');
} on ModelSdkNotInitializedException {
  print('SDK not initialized');
} on ModelNoNetworkException {
  print('No network connection');
} on ModelNetworkException catch (e) {
  print('Network error: ${e.message}');
} on ModelStorageException catch (e) {
  print('Storage error: ${e.message}');
} on ModelNotFoundException catch (e) {
  print('Model not found: ${e.message}');
} on ModelLoadException catch (e) {
  print('Failed to load model: ${e.message}');
} on ModelException catch (e) {
  print('Model error: ${e.message}');
}

Example App #

The example app includes a Model Management demo accessible via the settings icon (⚙️) next to the camera switch button. This bottom sheet demonstrates:

  • Initializing ModelManager
  • Downloading models with progress tracking
  • Loading/unloading models from memory
  • Querying downloaded and loaded models
  • Checking for model updates
  • Deleting models
  • Error handling
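
Put together, a minimal end-to-end flow using the calls above might look like the following sketch (assuming ModelManager has already been initialized as shown earlier; the function name is illustrative):

// Illustrative end-to-end flow: download, load, inspect, and clean up a model.
Future<void> modelManagementDemo(String apiKey) async {
  final modelManager = ModelManager.getInstance();

  final module = OCRModule(
    modelClass: ModelClass.shippingLabel,
    modelSize: ModelSize.large,
  );

  // Download the model if it is not on disk yet.
  if (!await modelManager.isModelDownloaded(module)) {
    await modelManager.downloadModel(
      module: module,
      apiKey: apiKey,
      onProgress: (progress) => print('Progress: ${progress.progressPercent}%'),
    );
  }

  // Load it into memory for inference.
  await modelManager.loadModel(module: module, apiKey: apiKey);

  // Inspect what is currently downloaded.
  for (final info in await modelManager.findDownloadedModels()) {
    print('${info.module.modelClass} ${info.module.modelSize} '
        'v${info.version}, loaded: ${info.isLoaded}');
  }

  // Free memory and, if no longer needed, remove the model from storage.
  await modelManager.unloadModel(module);
  await modelManager.deleteModel(module);
}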

Native Documentation #

iOS Documentation #

  • To see the iOS documentation, you can visit here for the details of each feature and their configuration parameters.

Android Documentation #

  • To see the Android documentation, you can visit here for the details of each feature and their configuration parameters.

Additional information #

Users can import this package and integrate it into their Flutter projects to enable barcode and QR code scanning with customizable modes. This package provides a simple and efficient way to leverage the power of the Flutter Vision SDK in your applications.

For iOS installation, you need to install the pods again (run pod install in the ios directory).

For Android, add the code below to settings.gradle:

allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url "https://jitpack.io" }
    }
}

Set the versions below in android/app/build.gradle:

minSdk 29
targetSdk 34
