# Reasoning

AI-powered reasoning over live media streams using a Plan + Evaluate architecture.

## Component Configuration

| Option | Description |
| --- | --- |
| `id` | A unique identifier for the component instance |
| `displayName` | The name shown for this component in the Studio UI |
| `apiMode` | API mode: Generative (frame-by-frame analysis) or Live (continuous streaming via WebSocket) |
| `mode` | Operating mode |
| `evaluationProvider` | LLM provider for evaluation |
| `analysisIntervalSecs` | Seconds between LLM prompts (Generative: analysis interval; Live: periodic prompt interval) (default: `2`) |
| `contextDocuments` | File paths to reference documents, relative to the data directory (e.g. `rules.md`) (default: `[]`) |
| `notes` | Optional notes about this component |
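Putting the options above together, a complete component configuration might look like the sketch below. All values are illustrative, and the nested shapes of `apiMode`, `mode`, and `evaluationProvider` follow the option tables in this page rather than a confirmed schema:

```json
{
  "id": "reasoning-1",
  "displayName": "Reasoning",
  "apiMode": { "kind": "generative" },
  "mode": {
    "kind": "planning",
    "query": "Tell me when a person appears on screen"
  },
  "evaluationProvider": { "kind": "gemini-api" },
  "analysisIntervalSecs": 2,
  "contextDocuments": ["rules.md"],
  "notes": "Demo instance"
}
```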

## API Mode (Generative)

| Option | Description |
| --- | --- |
| `kind` | `"generative"` |
| `audioEnergy` | Audio energy levels (RMS, peak) |
| `vad` | Voice activity detection |
| `sceneChange` | Scene change detection |
| `objectDetection` | Object detection |
| `ocr` | Optical character recognition (default: `{"enabled":false,"windowSecs":10}`) |
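A generative `apiMode` block enabling a few of these detectors might look like the following sketch. Only the `ocr` default shape is documented above; the `{ "enabled": ... }` form assumed for the other detectors is an illustration, not a confirmed schema:

```json
{
  "kind": "generative",
  "audioEnergy": { "enabled": true },
  "vad": { "enabled": true },
  "sceneChange": { "enabled": true },
  "objectDetection": { "enabled": false },
  "ocr": { "enabled": false, "windowSecs": 10 }
}
```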

## API Mode (Live)

| Option | Description |
| --- | --- |
| `kind` | `"live"` |

## Mode (Planning)

| Option | Description |
| --- | --- |
| `kind` | `"planning"` |
| `query` | Natural-language query for the planning LLM |
| `autoAccept` | Auto-accept the first proposed spec (default: `true`) |
| `planningProvider` | LLM provider for planning (spec generation) |
| `events` | Events the LLM can raise (default: `[]`) |
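A planning `mode` block might be sketched as follows; the query text is illustrative, and the event list is left at its documented default because the per-event shape is not specified here:

```json
{
  "kind": "planning",
  "query": "Raise an event whenever a price is mentioned aloud",
  "autoAccept": true,
  "planningProvider": { "kind": "gemini-api" },
  "events": []
}
```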

## Mode (Baked Spec)

| Option | Description |
| --- | --- |
| `kind` | `"baked"` |
| `specJson` | Paste a spec here (JSON or YAML from the 'Copy Spec' button) |
| `specApiMode` | API mode the spec was generated from (default: `"generative"`) |
| `restorePlanning` | If the pasted spec was produced by 'Copy Spec', restores the original planning configuration (apiMode, query, provider, events) into the form (default: `null`) |
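A baked-spec `mode` block might be sketched as below. The `specJson` value here is a placeholder; in practice it should be the spec copied via the 'Copy Spec' button, whose internal structure is not documented on this page:

```json
{
  "kind": "baked",
  "specJson": "<paste spec from 'Copy Spec' here>",
  "specApiMode": "generative",
  "restorePlanning": null
}
```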

## Evaluation Provider (Gemini API)

| Option | Description |
| --- | --- |
| `kind` | `"gemini-api"` |
| `model` | Gemini model name (default: `"gemini-2.5-pro"`) |
| `apiKeyOverride` | Override GOOGLE_API_KEY env var |
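A minimal Gemini API `evaluationProvider` block might look like this sketch; `apiKeyOverride` is omitted so that, per the table above, the GOOGLE_API_KEY environment variable is used:

```json
{
  "kind": "gemini-api",
  "model": "gemini-2.5-pro"
}
```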

## Evaluation Provider (Gemini Vertex AI)

| Option | Description |
| --- | --- |
| `kind` | `"gemini-vertex"` |
| `model` | Gemini model name (default: `"gemini-2.5-pro"`) |
| `project` | Vertex AI Project ID |
| `location` | Vertex AI Location (default: `"us-central1"`) |
| `credentialsPath` | Path to service account JSON file |
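A Vertex AI provider block might be sketched as follows; the project ID and credentials path are placeholders to be replaced with your own values:

```json
{
  "kind": "gemini-vertex",
  "model": "gemini-2.5-pro",
  "project": "my-gcp-project",
  "location": "us-central1",
  "credentialsPath": "secrets/service-account.json"
}
```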

## Evaluation Provider (OpenAI)

| Option | Description |
| --- | --- |
| `kind` | `"openai"` |
| `model` | OpenAI model name (default: `"gpt-4o"`) |
| `apiKeyOverride` | Override OPENAI_API_KEY env var |

## Evaluation Provider (Claude)

| Option | Description |
| --- | --- |
| `kind` | `"claude"` |
| `model` | Claude model name (default: `"claude-sonnet-4-20250514"`) |
| `maxTokens` | Max tokens per response (default: `4096`) |
| `apiKeyOverride` | Override ANTHROPIC_API_KEY env var |
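A Claude provider block using the documented defaults might be sketched as below; `apiKeyOverride` is omitted so the ANTHROPIC_API_KEY environment variable applies:

```json
{
  "kind": "claude",
  "model": "claude-sonnet-4-20250514",
  "maxTokens": 4096
}
```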

## Evaluation Provider (Local)

| Option | Description |
| --- | --- |
| `kind` | `"local"` |
| `modelName` | Local model name (default: `"vikhyatk/moondream2"`) |

Tags: ai reasoning llm plan evaluate gemini openai claude detection analysis