Components
Conceptually, every media workflow has three behaviors. Media flows into the workflow (from cameras, files, and so on), it flows out (for example to a downstream CDN or storage), and in between you typically want to manipulate that media in some way.
Norsk models these behaviors with components: the elements you drag onto the Studio canvas.
Each component can be configured and controlled through the Studio UI. Many also expose an OpenAPI-defined interface, giving you programmatic control of their behavior at runtime. That means you can do things like switch sources, enable or disable outputs, or update on-screen graphics directly from your own automation or scripts.
Studio includes an embedded Swagger interface so you can explore and try these APIs without leaving the application.
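Because these interfaces are OpenAPI-defined, runtime control reduces to ordinary HTTP calls. The sketch below is a hedged illustration only: the host, port, route, and payload shape are invented for a hypothetical source-switcher component, not the actual Norsk Studio API. The embedded Swagger UI documents the real paths and schemas for each component.

```python
import json

# Assumed local Studio instance; adjust to your deployment.
STUDIO_BASE = "http://localhost:8000"

def switch_source_request(component_id: str, source: str) -> tuple[str, bytes]:
    """Build a (url, body) pair for a hypothetical 'switch source' call
    on a source-switcher processor component. The route and payload are
    illustrative assumptions - check Swagger for the real contract."""
    url = f"{STUDIO_BASE}/components/{component_id}/source"
    body = json.dumps({"source": source}).encode("utf-8")
    return url, body

# Build the request; POST it with any HTTP client (urllib, requests, curl).
url, body = switch_source_request("switcher-1", "camera-2")
```

The same pattern applies to the other runtime operations mentioned above, such as enabling an output or updating on-screen graphics: find the component's endpoint in Swagger, then drive it from your own automation.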
Components fall into three main categories:
Inputs
Input components bring media into your workflow. They can ingest streams over common protocols such as RTMP, SRT, WebRTC (via WHIP), and UDP, or provide synthetic sources such as test cards and silence generators.
Processors
Processor components transform or modify media within the workflow. They can switch between sources, overlay graphics, transcode to different resolutions, insert SCTE markers, and more. Examples include encoding video into multiple renditions for ABR, adding a browser overlay, or creating a multiview from two or more sources.
Outputs
Output components deliver media for playback or onward distribution. This could mean publishing adaptive bitrate ladders to a CDN, streaming in real time over WebRTC, serving RTMP or SRT endpoints, writing media to files or other storage, or exposing live previews within Studio.