pose_detection library

On-device pose detection and landmark estimation using TensorFlow Lite.

This library provides a Flutter plugin for real-time human pose detection using Google's MediaPipe BlazePose models. It detects persons in images and extracts 33 body landmarks (keypoints) for each detected person.

Quick Start:

import 'package:pose_detection/pose_detection.dart';

// One-step construction
final detector = await PoseDetector.create();

// Or two-step, if you need to configure between construction and init
final detector = PoseDetector();
await detector.initialize();

final poses = await detector.detect(imageBytes);
for (final pose in poses) {
  print('Person detected at ${pose.boundingBox}');
  if (pose.hasLandmarks) {
    final nose = pose.getLandmark(PoseLandmarkType.nose);
    print('Nose position: (${nose?.x}, ${nose?.y})');
  }
}

await detector.dispose();

Main Classes:

  • PoseDetector: Main API for pose detection (background isolate on native; async browser runtime on web)
  • Pose: Detected person with bounding box and optional 33 landmarks
  • PoseLandmark: Single body keypoint with 3D coordinates and visibility
  • PoseLandmarkType: Enum of 33 body parts (nose, shoulders, knees, etc.)
  • BoundingBox: Axis-aligned rectangle for person location

Detection Modes:

Behavior of the two-stage pipeline (person detection followed by landmark extraction) is controlled by the PoseMode enum; see PoseMode below.

Model Variants:

The BlazePose model variant used for landmark extraction is selected with the PoseLandmarkModel enum; see PoseLandmarkModel below.

Classes

BoundingBox
An axis-aligned or rotated bounding box defined by four corner points.
CameraFrame
A camera frame packaged for off-thread colour conversion and inference.
CameraPoseOverlayPainter
Paints pose detection results over a live camera preview.
FpsCounter
A simple 1-second rolling FPS counter for camera-preview apps.
LetterboxParams
Parameters for aspect-preserving resize with centered padding.
MultiOverlayPainter
Paints pose detection results over a still image.
PackedYuv
A contiguous YUV buffer produced by packYuv420, ready to hand to a native colour-conversion routine.
PerformanceConfig
Configuration for interpreter hardware acceleration and threading.
Point
A point with x, y, and optional z coordinates.
Pose
Detected person with bounding box and optional body landmarks.
PoseDetectionDart
Dart plugin registration for pose_detection.
PoseDetector
Main API for pose detection; runs inference in a background isolate on native platforms and an async browser runtime on web.
PoseLandmark
A single body keypoint with 3D coordinates and visibility score.
PoseLandmarks
Collection of pose landmarks with a confidence score.

Enums

CameraFrameConversion
The colour conversion a CameraFrame's bytes need before being used as a 3-channel BGR image. Detector packages map this to an OpenCV COLOR_* code at the point of decode, inside their existing detection isolate.
CameraFrameRotation
Optional rotation applied after colour conversion. Detector packages map this to an OpenCV ROTATE_* code.
PerformanceMode
Hardware acceleration mode for LiteRT inference.
PoseLandmarkModel
BlazePose model variant for landmark extraction.
PoseLandmarkType
Body part types for the 33 BlazePose landmarks.
PoseMode
Detection mode controlling the two-stage pipeline behavior.
YuvLayout
Memory layout of a packed YUV buffer produced by packYuv420.

Constants

poseLandmarkConnections → const List<List<PoseLandmarkType>>
Defines the standard skeleton connections between BlazePose landmarks.
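A hedged sketch of how the connection list might be consumed when drawing a skeleton. The helper name skeletonSegments is illustrative, not part of the library; it assumes each connection is a two-element pair and that getLandmark returns null for absent landmarks, as shown in the Quick Start.

```dart
import 'package:pose_detection/pose_detection.dart';

/// Illustrative helper: pairs up the two endpoints of every skeleton
/// connection actually present on [pose]. Assumes each entry in
/// poseLandmarkConnections holds exactly two PoseLandmarkTypes.
List<(PoseLandmark, PoseLandmark)> skeletonSegments(Pose pose) {
  final segments = <(PoseLandmark, PoseLandmark)>[];
  for (final connection in poseLandmarkConnections) {
    final a = pose.getLandmark(connection[0]);
    final b = pose.getLandmark(connection[1]);
    if (a != null && b != null) segments.add((a, b));
  }
  return segments;
}
```

Each returned pair can then be scaled into canvas coordinates and passed to drawSkeletonConnections.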

Functions

allocTensorShape(List<int> shape) → Object
Allocates a nested list structure matching the given tensor shape.
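The allocator's behavior can be sketched with a small recursive function; the recursion strategy is an assumption, and allocLike is an illustrative stand-in, not the library's implementation.

```dart
// Sketch of what a shape-driven nested-list allocator might do:
// the last dimension becomes a zero-filled List<double>, every
// earlier dimension becomes a List of recursively built sublists.
Object allocLike(List<int> shape) {
  if (shape.length == 1) return List<double>.filled(shape[0], 0.0);
  return List.generate(shape[0], (_) => allocLike(shape.sublist(1)));
}

void main() {
  final t = allocLike([1, 2, 3]) as List;
  print(t.length);                           // 1
  print((t[0] as List).length);              // 2
  print(((t[0] as List)[0] as List).length); // 3
}
```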
barQuarterTurns(DeviceOrientation orientation) → int
Quarter-turns (clockwise) to rotate a top-bar widget so it reads upright when the device is in landscape. Use with RotatedBox(quarterTurns: ...).
bgrBytesToRgbFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) → Float32List
Converts BGR bytes to a flat Float32List with 0.0..1.0 normalization.
bgrBytesToSignedFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) → Float32List
Converts BGR bytes to a flat Float32List with -1.0..1.0 normalization.
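Both conversions amount to a per-byte linear rescale. The sketch below re-derives them standalone; the exact scale factors (v/255 for 0..1, v/127.5 - 1 for -1..1) are assumptions based on the stated output ranges, not the library's confirmed arithmetic.

```dart
import 'dart:typed_data';

// Illustrative per-byte normalizations matching the two documented
// output ranges (scale factors are assumptions).
Float32List toUnitRange(Uint8List bytes) =>
    Float32List.fromList([for (final v in bytes) v / 255.0]);

Float32List toSignedRange(Uint8List bytes) =>
    Float32List.fromList([for (final v in bytes) v / 127.5 - 1.0]);

void main() {
  final px = Uint8List.fromList([0, 128, 255]);
  final unit = toUnitRange(px);
  final signed = toSignedRange(px);
  print('${unit.first} .. ${unit.last}');     // 0.0 .. 1.0
  print('${signed.first} .. ${signed.last}'); // -1.0 .. 1.0
}
```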
clamp01(double v) → double
Clamps v to the range 0.0..1.0. Returns 0.0 for NaN inputs.
clip(double v, double lo, double hi) → double
Clamps v to the range lo..hi.
computeLetterboxParams({required int srcWidth, required int srcHeight, required int targetWidth, required int targetHeight, bool roundDimensions = true}) → LetterboxParams
Computes letterbox parameters for resizing srcWidth × srcHeight to fit within targetWidth × targetHeight while preserving aspect ratio.
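The letterbox arithmetic can be sketched as: scale by the smaller of the two axis ratios so the source fits entirely, then split the leftover target area into symmetric padding. Field names in this standalone sketch are illustrative, not necessarily those of LetterboxParams.

```dart
import 'dart:math' as math;

// Minimal letterbox math: fit-inside scale plus centered padding.
({double scale, int newW, int newH, int padX, int padY}) letterbox(
    int srcW, int srcH, int dstW, int dstH) {
  final scale = math.min(dstW / srcW, dstH / srcH);
  final newW = (srcW * scale).round();
  final newH = (srcH * scale).round();
  return (
    scale: scale,
    newW: newW,
    newH: newH,
    padX: (dstW - newW) ~/ 2,
    padY: (dstH - newH) ~/ 2,
  );
}

void main() {
  // A 1280x720 camera frame into a square 256x256 model input:
  final p = letterbox(1280, 720, 256, 256);
  print('${p.newW}x${p.newH} pad ${p.padX},${p.padY}'); // 256x144 pad 0,56
}
```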
coverFitScaleOffset(int sourceW, int sourceH, double viewW, double viewH) → ({double offsetX, double offsetY, double scale})
Cover-fit scale + offset for rendering a source region of size (sourceW, sourceH) into a viewport of size (viewW, viewH).
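Cover-fit is the mirror image of letterboxing: scale by the larger axis ratio so the source fills the viewport, letting the overflow hang off-screen symmetrically. A standalone sketch of the arithmetic (the record shape mirrors the documented return type; the body is an assumption):

```dart
import 'dart:math' as math;

// Cover-fit: the larger ratio fills the viewport; negative offsets
// center the overflow off-screen.
({double scale, double offsetX, double offsetY}) coverFit(
    int srcW, int srcH, double viewW, double viewH) {
  final scale = math.max(viewW / srcW, viewH / srcH);
  return (
    scale: scale,
    offsetX: (viewW - srcW * scale) / 2,
    offsetY: (viewH - srcH * scale) / 2,
  );
}

void main() {
  // A 720x1280 portrait frame into a 360x800 viewport:
  final f = coverFit(720, 1280, 360.0, 800.0);
  print(f.scale);   // 0.625
  print(f.offsetX); // -45.0
  print(f.offsetY); // 0.0
}
```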
createNHWCTensor4D(int height, int width) → List<List<List<List<double>>>>
Creates a pre-allocated [1][height][width][3] tensor structure.
detectionSize({required int width, required int height, required CameraFrameRotation? rotation, required int maxDim}) → Size
Computes the final detection-image size used by overlay painters to map detector coordinates back onto the widget coordinate space.
drawBoundingBoxOutline({required Canvas canvas, required BoundingBox bbox, required double scaleX, required double scaleY, required double offsetX, required double offsetY, required Paint paint}) → void
Draw the axis-aligned outline of a BoundingBox transformed by a linear scale + offset. Use a stroked Paint for an outline, or a filled one to tint the interior.
drawLandmarkMarker(Canvas canvas, double x, double y, {double glowRadius = 8, double pointRadius = 5, double centerRadius = 2, Paint? glowPaint, Paint? pointPaint, Paint? centerPaint}) → void
Draw a standard "glow + point + center dot" triple-circle landmark marker at (x, y) in canvas coordinates.
drawSkeletonConnections({required Canvas canvas, required List<Offset> scaledPoints, required List<(int, int)> connections, required Paint paint}) → void
Draw straight-line connections between pre-scaled landmark points.
fillNHWC4D(Float32List flat, List<List<List<List<double>>>> cache, int inH, int inW) → void
Fills an NHWC 4D tensor cache from a flat Float32List.
flattenDynamicTensor(Object? out) → Float32List
Flattens an arbitrarily nested tensor to a flat Float32List.
packYuv420({required int width, required int height, required YuvPlane y, required YuvPlane u, YuvPlane? v}) → PackedYuv?
Packs a YUV420 camera frame into a single contiguous buffer suitable for native colour conversion (e.g. OpenCV's cvtColor with a COLOR_YUV2BGR_NV21 / COLOR_YUV2BGR_NV12 / COLOR_YUV2BGR_I420 code).
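A packed YUV420 buffer stores a full-resolution luma plane followed by two quarter-resolution chroma planes, so its size is always 1.5 bytes per pixel regardless of NV21/NV12/I420 layout. A quick check of that arithmetic:

```dart
// YUV420 size: w*h luma bytes plus two (w/2)x(h/2) chroma planes.
int yuv420Bytes(int w, int h) => w * h + 2 * ((w ~/ 2) * (h ~/ 2));

void main() {
  print(yuv420Bytes(640, 480)); // 460800, i.e. 1.5 * 640 * 480
}
```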
prepareCameraFrame({required int width, required int height, required List<CameraPlane> planes, CameraFrameRotation? rotation, bool isBgra = true}) → CameraFrame?
Prepare a CameraFrame descriptor from raw camera planes, for use with a detector package's detectFromCameraFrame(...) method.
prepareCameraFrameFromImage(Object cameraImage, {CameraFrameRotation? rotation, bool isBgra = true}) → CameraFrame?
Convenience wrapper around prepareCameraFrame that accepts any object duck-typed to package:camera's CameraImage (i.e. exposing width, height, and a planes iterable of objects with bytes, bytesPerRow, and bytesPerPixel getters).
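Putting prepareCameraFrameFromImage and rotationForFrame together with package:camera's image stream might look like the sketch below. Here controller is an assumed CameraController and detector the PoseDetector from the Quick Start; detectFromCameraFrame is the detector-package method referred to under prepareCameraFrame. Error handling and frame-skipping are omitted.

```dart
// Sketch only: `controller` is a package:camera CameraController,
// `detector` is a PoseDetector (see Quick Start above).
await controller.startImageStream((image) async {
  final rotation = rotationForFrame(
    width: image.width,
    height: image.height,
    sensorOrientation: controller.description.sensorOrientation,
    isFrontCamera:
        controller.description.lensDirection == CameraLensDirection.front,
    deviceOrientation: controller.value.deviceOrientation,
  );
  final frame = prepareCameraFrameFromImage(image, rotation: rotation);
  if (frame == null) return; // unsupported plane layout
  final poses = await detector.detectFromCameraFrame(frame);
  // ...hand `poses` to a CameraPoseOverlayPainter...
});
```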
rotationForFrame({required int width, required int height, required int sensorOrientation, required bool isFrontCamera, required DeviceOrientation deviceOrientation}) → CameraFrameRotation?
Computes the rotation needed to present a camera frame upright to an on-device detection model, given the camera's sensor orientation and the device's current physical orientation.
sigmoid(double x) → double
Sigmoid activation function.
sigmoidClipped(double x, {double limit = 80.0}) → double
Sigmoid with input clipping to prevent overflow.
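The clipped variant simply bounds the input before applying the standard logistic function; the clamp-then-sigmoid composition below is an assumption consistent with the documented default limit of 80, not the library's confirmed implementation.

```dart
import 'dart:math' as math;

// Standard logistic function; clamping the input keeps exp()'s
// argument bounded (±80 is the documented default limit).
double sigmoid(double x) => 1.0 / (1.0 + math.exp(-x));
double sigmoidClipped(double x, {double limit = 80.0}) =>
    sigmoid(x.clamp(-limit, limit));

void main() {
  print(sigmoid(0.0));           // 0.5
  print(sigmoidClipped(1000.0)); // 1.0
}
```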

Typedefs

CameraPlane = ({Uint8List bytes, int pixelStride, int rowStride})
A single camera frame plane exposed by a camera plugin.
YuvPlane = ({Uint8List bytes, int pixelStride, int rowStride})
A single YUV plane exposed by a camera plugin, decoupled from any specific Flutter plugin's type (e.g. CameraImage.Plane).