VirtualBackgroundProcessor

Interface for virtual background processing.

Platform implementations use ML Kit Selfie Segmentation to:

  1. Capture video frames from the camera

  2. Segment the person from the background

  3. Composite the person with a virtual background

  4. Output processed frames to a new video track

Inheritors

AndroidVirtualBackgroundProcessor

Properties


The currently applied virtual background

abstract val isProcessing: Boolean

Whether the processor is currently active

Functions

abstract fun release()

Release all resources. Call when done with the processor.
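A minimal teardown sketch, assuming the caller stops processing before releasing (the ordering is an assumption; the interface only states that release frees all resources):

```kotlin
// Illustrative teardown: stop first, then release, so frames are not
// dropped mid-pipeline. Error handling is omitted for brevity.
suspend fun tearDown(processor: VirtualBackgroundProcessor) {
    if (processor.isProcessing) {
        processor.stopProcessing()
    }
    processor.release() // processor must not be used after this call
}
```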

abstract fun setFrameCallback(onProcessedFrame: ((ProcessedFrame) -> Unit)?)

Update the frame callback for preview while processing continues. This allows registering a preview callback when the processor is already running.
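A sketch of registering a preview callback on an already-running processor; `renderPreview` is a hypothetical function in the host app:

```kotlin
// Attach a preview callback while processing continues.
fun attachPreview(processor: VirtualBackgroundProcessor) {
    if (processor.isProcessing) {
        processor.setFrameCallback { frame ->
            renderPreview(frame) // hypothetical: e.g. draw onto a local view
        }
    }
}

// Passing null clears the callback (assumes the parameter is nullable).
fun detachPreview(processor: VirtualBackgroundProcessor) {
    processor.setFrameCallback(null)
}
```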

abstract suspend fun startProcessing(inputStream: MediaStream, background: VirtualBackground, onProcessedFrame: ((ProcessedFrame) -> Unit)? = null): MediaStream?

Start processing the input video stream with the given background.
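A hedged sketch of handling the nullable return. Treating `null` as a failure to start is an assumption; the interface only declares a nullable `MediaStream?` result:

```kotlin
// Start processing and fall back to the raw camera stream if no
// processed stream is returned.
suspend fun startOrFallback(
    processor: VirtualBackgroundProcessor,
    cameraStream: MediaStream,
    background: VirtualBackground
): MediaStream {
    val processed = processor.startProcessing(cameraStream, background)
    return processed ?: cameraStream
}
```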

open suspend fun startProcessingWithDevice(inputStream: MediaStream, background: VirtualBackground, device: WebRtcDevice, onProcessedFrame: ((ProcessedFrame) -> Unit)? = null): MediaStream?

Start processing with a WebRtcDevice to create output stream. This method creates a virtual video source that can be used with mediasoup to send processed frames to remote participants.
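A sketch of the device-backed variant. The `device` value and the producing step are assumptions about the surrounding mediasoup integration; only the processor call mirrors this interface:

```kotlin
// Create an output stream backed by a virtual video source so processed
// frames can be sent to remote participants via mediasoup.
suspend fun publishWithBackground(
    processor: VirtualBackgroundProcessor,
    cameraStream: MediaStream,
    background: VirtualBackground,
    device: WebRtcDevice
) {
    val outputStream =
        processor.startProcessingWithDevice(cameraStream, background, device)
    outputStream?.let {
        // Produce the stream's video track over the send transport here
        // (transport/producer APIs are SDK-specific and omitted).
    }
}
```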

abstract suspend fun stopProcessing(): MediaStream?

Stop processing and release resources.
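A small sketch guarding the stop call; interpreting the nullable return as the stream to fall back to is an assumption, since the interface does not document what `stopProcessing` returns:

```kotlin
// Stop processing only if the processor is active.
suspend fun disableBackground(processor: VirtualBackgroundProcessor): MediaStream? {
    return if (processor.isProcessing) processor.stopProcessing() else null
}
```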

abstract suspend fun updateBackground(background: VirtualBackground)

Update the virtual background while processing continues.
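A sketch of swapping backgrounds without restarting the pipeline; how a `VirtualBackground` is constructed is SDK-specific, so only the processor call below is guaranteed by this interface:

```kotlin
// Replace the background mid-session; the change is expected to take
// effect on subsequent frames while processing continues uninterrupted.
suspend fun switchBackground(
    processor: VirtualBackgroundProcessor,
    newBackground: VirtualBackground
) {
    if (processor.isProcessing) {
        processor.updateBackground(newBackground)
    }
}
```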