AndroidVirtualBackgroundProcessor

Android implementation of VirtualBackgroundProcessor using ML Kit Selfie Segmentation.

This processor captures frames from the WebRTC video stream, runs ML Kit selfie segmentation to detect the person, composites the segmented person over a virtual background, and writes the result to a virtual video source.
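The typical lifecycle (start processing, swap the background mid-call, stop) can be sketched with simplified stand-ins for the SDK types. `FakeBackgroundProcessor` and the stub types below are hypothetical illustrations of the contract this page describes, not the real ML Kit-backed implementation (which is `suspend`-based and returns nullable streams):

```kotlin
// Illustrative stand-ins for the SDK types; names follow this page's
// signatures, but the bodies are simplified stubs, not the real classes.
class MediaStream(val id: String)
sealed class VirtualBackground {
    object Blur : VirtualBackground()
    data class Image(val uri: String) : VirtualBackground()
}
data class ProcessedFrame(val timestampUs: Long)

// Sketch of the documented contract: startProcessing begins feeding frames
// to the optional callback, updateBackground swaps the composited background
// while processing continues, and stopProcessing ends the session.
class FakeBackgroundProcessor {
    var isProcessing: Boolean = false
        private set
    private var input: MediaStream? = null
    private var background: VirtualBackground? = null

    fun startProcessing(
        inputStream: MediaStream,
        background: VirtualBackground,
        onProcessedFrame: ((ProcessedFrame) -> Unit)? = null,
    ): MediaStream {
        this.input = inputStream
        this.background = background
        isProcessing = true
        // A real implementation would invoke the callback per composited frame.
        onProcessedFrame?.invoke(ProcessedFrame(timestampUs = 0L))
        return MediaStream("virtual-${inputStream.id}")
    }

    fun updateBackground(background: VirtualBackground) {
        check(isProcessing) { "updateBackground requires an active session" }
        this.background = background
    }

    // Returns the original input stream so callers can fall back to it.
    fun stopProcessing(): MediaStream? {
        isProcessing = false
        return input
    }
}
```

In the real API the processing methods are suspend functions, so they would be called from a coroutine scope rather than directly as above.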

Constructors

constructor(context: Context)

Types

object Companion

Properties


The currently applied virtual background

open override val isProcessing: Boolean

Whether the processor is currently active

Functions

open override fun release()

Release all resources. Call when done with the processor.

open override fun setFrameCallback(onProcessedFrame: ((ProcessedFrame) -> Unit)?)

Update the preview frame callback while processing continues. This allows registering a preview callback after the processor has already started.

open override suspend fun startProcessing(inputStream: MediaStream, background: VirtualBackground, onProcessedFrame: ((ProcessedFrame) -> Unit)?): MediaStream?

Start processing the input video stream with the given background.

open override suspend fun startProcessingWithDevice(inputStream: MediaStream, background: VirtualBackground, device: WebRtcDevice, onProcessedFrame: ((ProcessedFrame) -> Unit)?): MediaStream?

Start processing with a WebRtcDevice, creating a new video source that receives the processed frames and can be handed to mediasoup for sending to remote participants.
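How the device-backed start and the late preview callback fit together can be sketched with simplified stand-ins: starting produces a virtual output stream for the device, and a preview callback may be attached afterwards. All names and bodies below are hypothetical stubs illustrating the contract, not the real API:

```kotlin
// Simplified stand-ins for the SDK types used by this method.
class MediaStream(val id: String)
class WebRtcDevice(val label: String)
data class ProcessedFrame(val timestampUs: Long)

class FakeDeviceProcessor {
    private var onProcessedFrame: ((ProcessedFrame) -> Unit)? = null
    var isProcessing: Boolean = false
        private set

    // Creates a virtual output stream backed by the device; per the docs,
    // the real stream can then be handed to mediasoup for sending.
    fun startProcessingWithDevice(input: MediaStream, device: WebRtcDevice): MediaStream {
        isProcessing = true
        return MediaStream("virtual-${device.label}-${input.id}")
    }

    // Matches the documented behavior of setFrameCallback: a preview
    // callback can be registered while the processor is already running.
    fun setFrameCallback(callback: ((ProcessedFrame) -> Unit)?) {
        onProcessedFrame = callback
    }

    // Stand-in for the internal pipeline delivering a composited frame.
    fun deliver(timestampUs: Long) {
        onProcessedFrame?.invoke(ProcessedFrame(timestampUs))
    }
}
```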

open override suspend fun stopProcessing(): MediaStream?

Stop processing and release resources.

open override suspend fun updateBackground(background: VirtualBackground)

Update the virtual background while processing continues.