AudioTranscribeAzureSettings interface
Settings for an audio transcribe/translate operation using the Azure Speech Service. See: NorskTransform.audioTranscribeAzure().
Signature:
export interface AudioTranscribeAzureSettings extends ProcessorNodeSettings<AudioTranscribeAzureNode>
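As orientation only, the sketch below shows what an interface with the documented member types might look like in TypeScript. The member names used here (`key`, `region`, `dictation`, `profanity`, `sourceLanguage`, `targetLanguages`) are hypothetical placeholders: the Properties table below records types and descriptions but not the member names, so only the types and optionality are taken from this page.

```typescript
// Hypothetical sketch only: the member names are placeholders, not necessarily
// the SDK's real names. Types and optionality follow the Properties table below.
interface AudioTranscribeAzureSettingsSketch {
  key: string;                               // Key for the Azure Speech Service endpoint
  region: string;                            // Region for the Azure Speech Service endpoint
  dictation?: boolean;                       // Recognise dictated punctuation rather than transcribing verbatim
  profanity?: 'masked' | 'removed' | 'raw';  // Whether to mask or remove profanity
  sourceLanguage: string;                    // IETF BCP 47 tag, e.g. "en-US", "de-DE"
  targetLanguages?: string[];                // e.g. ["de", "zh-Hant"]; omit to transcribe without translating
}
```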
Properties
| Property | Type | Description |
| --- | --- | --- |
|  | string | Key for the Azure Speech Service endpoint |
|  | string | Region for the Azure Speech Service endpoint |
|  | boolean | (Optional) Enable dictation mode (recognise dictated punctuation, etc., rather than transcribing the audio verbatim) |
|  | number |  |
|  | 'masked' \| 'removed' \| 'raw' | (Optional) Profanity behaviour (whether to mask or remove profanity) |
|  | string | The source language to recognise: an IETF BCP 47 language tag, e.g. en-US, en-GB, de-DE. Supported languages are listed at https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-support?tabs=stt |
|  | string[] | (Optional) The target output languages for translation: technically BCP 47 language tags, though in most cases omitting the region, e.g. en, de, zh-Hant. Leave this field absent/empty to use the transcription service without translation; if any target languages are present, the translation service will be used even if a target matches the source language. |
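A minimal usage sketch follows, assuming the transform is created via the Norsk SDK's transform accessor (here written as `norsk.processor.transform.audioTranscribeAzure()`, inferred from the NorskTransform.audioTranscribeAzure() reference above) and reusing the same hypothetical member names as the earlier sketch. Consult the SDK's type definitions for the actual access path and field names.

```typescript
import { Norsk } from "@norskvideo/norsk-sdk";

async function main() {
  const norsk = await Norsk.connect();

  // Hypothetical member names: this page documents the types and descriptions of
  // the settings but not their names, so check the SDK's .d.ts for the real ones.
  // The norsk.processor.transform access path is likewise an assumption here.
  const transcribe = await norsk.processor.transform.audioTranscribeAzure({
    key: process.env.AZURE_SPEECH_KEY!,  // Key for the Azure Speech Service endpoint
    region: "westeurope",                // Region for the Azure Speech Service endpoint
    sourceLanguage: "en-GB",             // IETF BCP 47 tag of the language to recognise
    targetLanguages: ["de", "zh-Hant"],  // Leave absent/empty to transcribe without translating
    profanity: "masked",                 // 'masked' | 'removed' | 'raw'
    dictation: false,                    // Set true to recognise dictated punctuation
  });

  // Subscribe the transcribe node to an audio source and route its output
  // (e.g. to a subtitle/caption output) as with any other Norsk processor node.
}

main();
```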