Class: Aws::TranscribeStreamingService::AsyncClient
- Inherits: Seahorse::Client::AsyncBase
  - Object
  - Seahorse::Client::Base
  - Seahorse::Client::AsyncBase
  - Aws::TranscribeStreamingService::AsyncClient
- Includes: AsyncClientStubs
- Defined in: gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb
Overview
An API async client for TranscribeStreamingService. To construct an async client, you need to configure a :region and :credentials.
async_client = Aws::TranscribeStreamingService::AsyncClient.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the developer guide.
See #initialize for a full list of supported configuration options.
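As a concrete illustration, here is a minimal construction sketch assuming static credentials; the key values and region are placeholders, not working values:
require 'aws-sdk-transcribestreamingservice'
# Placeholder static credentials; prefer a refreshing credential provider in production.
credentials = Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
async_client = Aws::TranscribeStreamingService::AsyncClient.new(
  region: 'us-west-2', # placeholder region
  credentials: credentials
)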
Instance Attribute Summary
Attributes inherited from Seahorse::Client::AsyncBase
Attributes inherited from Seahorse::Client::Base
API Operations
-
#start_call_analytics_stream_transcription(params = {}) ⇒ Types::StartCallAnalyticsStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe and the transcription results are streamed to your application.
-
#start_medical_scribe_stream(params = {}) ⇒ Types::StartMedicalScribeStreamResponse
Starts a bidirectional HTTP/2 stream, where audio is streamed to Amazon Web Services HealthScribe and the transcription results are streamed to your application.
-
#start_medical_stream_transcription(params = {}) ⇒ Types::StartMedicalStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe Medical and the transcription results are streamed to your application.
-
#start_stream_transcription(params = {}) ⇒ Types::StartStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe and the transcription results are streamed to your application.
Instance Method Summary
-
#initialize(options) ⇒ AsyncClient
constructor
A new instance of AsyncClient.
Methods included from AsyncClientStubs
Methods included from ClientStubs
#api_requests, #stub_data, #stub_responses
Methods inherited from Seahorse::Client::AsyncBase
#close_connection, #connection_errors, #new_connection, #operation_names
Methods inherited from Seahorse::Client::Base
add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options) ⇒ AsyncClient
Returns a new instance of AsyncClient.
Parameters:
- options (Hash)
Options Hash (options):
-
:plugins
(Array<Seahorse::Client::Plugin>)
— default:
[]
—
A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.
-
:credentials
(required, Aws::CredentialProvider)
—
Your AWS credentials. This can be an instance of any one of the following classes:
- Aws::Credentials - Used for configuring static, non-refreshing credentials.
- Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.
- Aws::AssumeRoleCredentials - Used when you need to assume a role.
- Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.
- Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.
- Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.
- Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.
- Aws::ECSCredentials - Used for loading credentials from instances running in ECS.
- Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.
When :credentials are not configured directly, the following locations will be searched for credentials:
- Aws.config[:credentials]
- The :access_key_id, :secret_access_key, :session_token, and :account_id options.
- ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
- ~/.aws/credentials
- ~/.aws/config
- EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
A construction sketch showing explicit credentials follows the constructor source below.
-
:region
(required, String)
—
The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:
- Aws.config[:region]
- ENV['AWS_REGION']
- ENV['AMAZON_REGION']
- ENV['AWS_DEFAULT_REGION']
- ~/.aws/credentials
- ~/.aws/config
- :access_key_id (String)
- :account_id (String)
-
:adaptive_retry_wait_to_fill
(Boolean)
— default:
true
—
Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError and will not retry instead of sleeping.
-
:convert_params
(Boolean)
— default:
true
—
When true, an attempt is made to coerce request parameters into the required types.
-
:correct_clock_skew
(Boolean)
— default:
true
—
Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.
-
:defaults_mode
(String)
— default:
"legacy"
—
See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.
-
:disable_request_compression
(Boolean)
— default:
false
—
When set to 'true', the request body will not be compressed for supported operations.
-
:endpoint
(String, URI::HTTPS, URI::HTTP)
—
Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like 'http://example.com', 'https://example.com', or 'http://example.com:123'.
-
:event_stream_handler
(Proc)
—
When an EventStream or Proc object is provided, it will be used as a callback for each chunk of event stream response received along the way.
-
:ignore_configured_endpoint_urls
(Boolean)
—
Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.
-
:input_event_stream_handler
(Proc)
—
When an EventStream or Proc object is provided, it can be used for sending events for the event stream.
-
:log_formatter
(Aws::Log::Formatter)
— default:
Aws::Log::Formatter.default
—
The log formatter.
-
:log_level
(Symbol)
— default:
:info
—
The log level to send messages to the :logger at.
-
:logger
(Logger)
—
The Logger instance to send log messages to. If this option is not set, logging will be disabled.
-
:max_attempts
(Integer)
— default:
3
—
An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.
-
:output_event_stream_handler
(Proc)
—
When an EventStream or Proc object is provided, it will be used as a callback for each chunk of event stream response received along the way.
-
:profile
(String)
— default:
"default"
—
Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.
-
:request_checksum_calculation
(String)
— default:
"when_supported"
—
Determines when a checksum will be calculated for request payloads. Values are:
when_supported - (default) When set, a checksum will be calculated for all request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true and/or a requestAlgorithmMember is modeled.
when_required - When set, a checksum will only be calculated for request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or where a requestAlgorithmMember is modeled and supplied.
-
:request_min_compression_size_bytes
(Integer)
— default:
10240
—
The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.
-
:response_checksum_validation
(String)
— default:
"when_supported"
—
Determines when checksum validation will be performed on response payloads. Values are:
when_supported - (default) When set, checksum validation is performed on all response payloads of operations modeled with the httpChecksum trait where responseAlgorithms is modeled, except when no modeled checksum algorithms are supported.
when_required - When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the requestValidationModeMember member is set to ENABLED.
-
:retry_backoff
(Proc)
—
A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.
-
:retry_base_delay
(Float)
— default:
0.3
—
The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.
-
:retry_jitter
(Symbol)
— default:
:none
—
A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full - otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode. See https://www.awsarchitectureblog.com/2015/03/backoff.html
-
:retry_limit
(Integer)
— default:
3
—
The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.
-
:retry_max_delay
(Integer)
— default:
0
—
The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.
-
:retry_mode
(String)
— default:
"legacy"
—
Specifies which retry algorithm to use. Values are:
legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.
standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.
adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client-side throttling. This is a provisional mode that may change behavior in the future.
-
:sdk_ua_app_id
(String)
—
A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.
- :secret_access_key (String)
- :session_token (String)
-
:sigv4a_signing_region_set
(Array)
—
A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:
- Aws.config[:sigv4a_signing_region_set]
- ENV['AWS_SIGV4A_SIGNING_REGION_SET']
- ~/.aws/config
-
:stub_responses
(Boolean)
— default:
false
—
Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.
Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.
-
:telemetry_provider
(Aws::Telemetry::TelemetryProviderBase)
— default:
Aws::Telemetry::NoOpTelemetryProvider
—
Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider, which will not record or emit any telemetry data. The SDK supports the following telemetry providers:
- OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider for the telemetry provider.
-
:token_provider
(Aws::TokenProvider)
—
A Bearer Token Provider. This can be an instance of any one of the following classes:
- Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.
- Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.
When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.
-
:use_dualstack_endpoint
(Boolean)
—
When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.
-
:use_fips_endpoint
(Boolean)
—
When set to true, FIPS-compatible endpoints will be used if available. When a fips region is used, the region is normalized and this config is set to true.
-
:validate_params
(Boolean)
— default:
true
—
When true, request parameters are validated before sending the request.
-
:endpoint_provider
(Aws::TranscribeStreamingService::EndpointProvider)
—
The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::TranscribeStreamingService::EndpointParameters.
-
:connection_read_timeout
(Integer)
— default:
60
—
Connection read timeout in seconds, defaults to 60 sec.
-
:connection_timeout
(Integer)
— default:
60
—
Connection timeout in seconds, defaults to 60 sec.
-
:enable_alpn
(Boolean)
— default:
true
—
Set to false to disable ALPN in HTTP2 over TLS. ALPN requires OpenSSL version >= 1.0.2. Note: RFC7540 requires HTTP2 to use ALPN over TLS, but some services may not fully support ALPN and require setting this to false.
-
:http_wire_trace
(Boolean)
— default:
false
—
When true, HTTP2 debug output will be sent to the :logger.
-
:max_concurrent_streams
(Integer)
— default:
100
—
Maximum concurrent streams used in HTTP2 connection, defaults to 100. Note that the server may send back a :settings_max_concurrent_streams value, which will take priority when initializing new streams.
-
:raise_response_errors
(Boolean)
— default:
true
—
Defaults to true; raises errors, if any exist, when #wait or #join! is called on the async response.
- :read_chunk_size (Integer) — default: 1024
-
:ssl_ca_bundle
(String)
—
Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_directory or :ssl_ca_bundle, the system default will be used if available.
-
:ssl_ca_directory
(String)
—
Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.
- :ssl_ca_store (String)
-
:ssl_verify_peer
(Boolean)
— default:
true
—
When true, SSL peer certificates are verified when establishing a connection.
# File 'gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb', line 398

def initialize(*args)
  unless Kernel.const_defined?("HTTP2")
    raise "Must include http/2 gem to use AsyncClient instances."
  end
  super
end
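Because #initialize raises unless the HTTP2 constant is defined, load the http-2 gem before constructing an async client. A setup sketch, assuming the http-2 gem is installed; the option values are illustrative and documented in the options list above:
require 'http/2' # defines the HTTP2 constant checked by #initialize
require 'aws-sdk-transcribestreamingservice'
async_client = Aws::TranscribeStreamingService::AsyncClient.new(
  region: 'us-east-1', # placeholder region
  credentials: Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY'), # placeholders
  retry_mode: 'standard', # see :retry_mode above
  max_attempts: 5 # see :max_attempts above
)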
Instance Method Details
#start_call_analytics_stream_transcription(params = {}) ⇒ Types::StartCallAnalyticsStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe and the transcription results are streamed to your application. Use this operation for Call Analytics transcriptions.
The following parameters are required:
language-code
media-encoding
sample-rate
For more information on streaming with Amazon Transcribe, see Transcribing streaming audio.
Examples:
Bi-directional EventStream Operation Example
# You can signal input events after the initial request is established. Events
# will be sent to the stream immediately once the stream connection is
# established successfully.
# To signal events, you can call the #signal methods from an
# Aws::TranscribeStreamingService::EventStreams::AudioStream object.
# You must signal events before calling #wait or #join! on the async response.
input_stream = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
async_resp = client.start_call_analytics_stream_transcription(
# params input
input_event_stream_handler: input_stream
) do |out_stream|
# register callbacks for events
out_stream.on_utterance_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::UtteranceEvent
end
out_stream.on_category_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::CategoryEvent
end
out_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
out_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
out_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
out_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
out_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
end
# => Aws::Seahorse::Client::AsyncResponse
# signal events
input_stream.signal_audio_event_event(
# ...
)
input_stream.signal_configuration_event_event(
# ...
)
# make sure to signal :end_stream at the end
input_stream.signal_end_stream
# wait until stream is closed before finalizing the sync response
resp = async_resp.wait
# Or close the stream and finalize sync response immediately
resp = async_resp.join!
# You can also provide an Aws::TranscribeStreamingService::EventStreams::CallAnalyticsTranscriptResultStream object
# to register callbacks before initializing the request instead of processing
# from the request block.
output_stream = Aws::TranscribeStreamingService::EventStreams::CallAnalyticsTranscriptResultStream.new
# register callbacks for output events
output_stream.on_utterance_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::UtteranceEvent
end
output_stream.on_category_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::CategoryEvent
end
output_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
output_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
output_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
output_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
output_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
output_stream.on_error_event do |event|
# catch unmodeled error event in the stream
raise event
# => Aws::Errors::EventError
# event.event_type => :error
# event.error_code => String
# event.error_message => String
end
async_resp = client.start_call_analytics_stream_transcription(
# params input
input_event_stream_handler: input_stream,
output_event_stream_handler: output_stream
)
resp = async_resp.join!
# You can also iterate through events after the response is complete.
# Events are available at
resp.call_analytics_transcript_result_stream # => Enumerator
Request syntax with placeholder values
async_resp = async_client.start_call_analytics_stream_transcription({
language_code: "en-US", # required, accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
media_sample_rate_hertz: 1, # required
media_encoding: "pcm", # required, accepts pcm, ogg-opus, flac
vocabulary_name: "VocabularyName",
session_id: "SessionId",
input_event_stream_handler: EventStreams::AudioStream.new,
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
language_model_name: "ModelName",
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
})
# => Seahorse::Client::AsyncResponse
async_resp.wait
# => Seahorse::Client::Response
# Or use async_resp.join!
Response structure
resp.request_id #=> String
resp.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_sample_rate_hertz #=> Integer
resp.media_encoding #=> String, one of "pcm", "ogg-opus", "flac"
resp.vocabulary_name #=> String
resp.session_id #=> String
# All events are available at resp.call_analytics_transcript_result_stream:
resp.call_analytics_transcript_result_stream #=> Enumerator
resp.call_analytics_transcript_result_stream.event_types #=> [:utterance_event, :category_event, :bad_request_exception, :limit_exceeded_exception, :internal_failure_exception, :conflict_exception, :service_unavailable_exception]
# For :utterance_event event available at #on_utterance_event_event callback and response eventstream enumerator:
event.utterance_id #=> String
event.is_partial #=> Boolean
event.participant_role #=> String, one of "AGENT", "CUSTOMER"
event.begin_offset_millis #=> Integer
event.end_offset_millis #=> Integer
event.transcript #=> String
event.items #=> Array
event.items[0].begin_offset_millis #=> Integer
event.items[0].end_offset_millis #=> Integer
event.items[0].type #=> String, one of "pronunciation", "punctuation"
event.items[0].content #=> String
event.items[0].confidence #=> Float
event.items[0].vocabulary_filter_match #=> Boolean
event.items[0].stable #=> Boolean
event.entities #=> Array
event.entities[0].begin_offset_millis #=> Integer
event.entities[0].end_offset_millis #=> Integer
event.entities[0].category #=> String
event.entities[0].type #=> String
event.entities[0].content #=> String
event.entities[0].confidence #=> Float
event.sentiment #=> String, one of "POSITIVE", "NEGATIVE", "MIXED", "NEUTRAL"
event.issues_detected #=> Array
event.issues_detected[0].character_offsets.begin #=> Integer
event.issues_detected[0].character_offsets.end #=> Integer
# For :category_event event available at #on_category_event_event callback and response eventstream enumerator:
event.matched_categories #=> Array
event.matched_categories[0] #=> String
event.matched_details #=> Hash
event.matched_details["String"].timestamp_ranges #=> Array
event.matched_details["String"].timestamp_ranges[0].begin_offset_millis #=> Integer
event.matched_details["String"].timestamp_ranges[0].end_offset_millis #=> Integer
# For :bad_request_exception event available at #on_bad_request_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :limit_exceeded_exception event available at #on_limit_exceeded_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :internal_failure_exception event available at #on_internal_failure_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :conflict_exception event available at #on_conflict_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :service_unavailable_exception event available at #on_service_unavailable_exception_event callback and response eventstream enumerator:
event.message #=> String
resp.vocabulary_filter_name #=> String
resp.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.language_model_name #=> String
resp.enable_partial_results_stabilization #=> Boolean
resp.partial_results_stability #=> String, one of "high", "medium", "low"
resp.content_identification_type #=> String, one of "PII"
resp.content_redaction_type #=> String, one of "PII"
resp.pii_entity_types #=> String
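As a sketch of consuming the enumerator above once the stream has completed (assuming async_resp came from the request examples earlier), you can branch on the event class:
resp = async_resp.join!
resp.call_analytics_transcript_result_stream.each do |event|
  case event
  when Aws::TranscribeStreamingService::Types::UtteranceEvent
    puts "#{event.participant_role}: #{event.transcript}" unless event.is_partial
  when Aws::TranscribeStreamingService::Types::CategoryEvent
    puts "Matched categories: #{event.matched_categories.join(', ')}"
  end
end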
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:language_code
(required, String)
—
Specify the language code that represents the language spoken in your audio.
For a list of languages supported with real-time Call Analytics, refer to the Supported languages table.
-
:media_sample_rate_hertz
(required, Integer)
—
The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
-
:media_encoding
(required, String)
—
Specify the encoding of your input audio. Supported formats are:
FLAC
OPUS-encoded audio in an Ogg container
PCM (only signed 16-bit little-endian audio formats, which does not include WAV)
For more information, see Media formats.
-
:vocabulary_name
(String)
—
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.
For more information, see Custom vocabularies.
-
:session_id
(String)
—
Specify a name for your Call Analytics transcription session. If you don't include this parameter in your request, Amazon Transcribe generates an ID and returns it in the response.
-
:vocabulary_filter_name
(String)
—
Specify the name of the custom vocabulary filter that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.
If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.
For more information, see Using vocabulary filtering with unwanted words.
-
:vocabulary_filter_method
(String)
—
Specify how you want your vocabulary filter applied to your transcript.
To replace words with ***, choose mask.
To delete words, choose remove.
To flag words without changing them, choose tag.
-
:language_model_name
(String)
—
Specify the name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.
For more information, see Custom language models.
-
:enable_partial_results_stabilization
(Boolean)
—
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization.
-
:partial_results_stability
(String)
—
Specify the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see Partial-result stabilization.
-
:content_identification_type
(String)
—
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment. If you don't include PiiEntityTypes in your request, all PII is identified.
You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information.
-
:content_redaction_type
(String)
—
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment. If you don't include PiiEntityTypes in your request, all PII is redacted.
You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information.
-
:pii_entity_types
(String)
—
Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.
Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.
Note that if you include PiiEntityTypes in your request, you must also include ContentIdentificationType or ContentRedactionType.
If you include ContentRedactionType or ContentIdentificationType in your request, but do not include PiiEntityTypes, all PII is redacted or identified.
Yields:
- (output_event_stream_handler)
Returns:
-
(Types::StartCallAnalyticsStreamTranscriptionResponse)
—
Returns a response object which responds to the following methods:
- #request_id => String
- #language_code => String
- #media_sample_rate_hertz => Integer
- #media_encoding => String
- #vocabulary_name => String
- #session_id => String
- #call_analytics_transcript_result_stream => Types::CallAnalyticsTranscriptResultStream
- #vocabulary_filter_name => String
- #vocabulary_filter_method => String
- #language_model_name => String
- #enable_partial_results_stabilization => Boolean
- #partial_results_stability => String
- #content_identification_type => String
- #content_redaction_type => String
- #pii_entity_types => String
See Also:
# File 'gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb', line 821

def start_call_analytics_stream_transcription(params = {}, options = {}, &block)
  params = params.dup
  input_event_stream_handler = _event_stream_handler(
    :input,
    params.delete(:input_event_stream_handler),
    EventStreams::AudioStream
  )
  output_event_stream_handler = _event_stream_handler(
    :output,
    params.delete(:output_event_stream_handler) || params.delete(:event_stream_handler),
    EventStreams::CallAnalyticsTranscriptResultStream
  )
  yield(output_event_stream_handler) if block_given?
  req = build_request(:start_call_analytics_stream_transcription, params)
  req.context[:input_event_stream_handler] = input_event_stream_handler
  req.handlers.add(Aws::Binary::EncodeHandler, priority: 55)
  req.context[:output_event_stream_handler] = output_event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 55)
  req.send_request(options, &block)
end
#start_medical_scribe_stream(params = {}) ⇒ Types::StartMedicalScribeStreamResponse
Starts a bidirectional HTTP/2 stream, where audio is streamed to Amazon Web Services HealthScribe and the transcription results are streamed to your application.
When you start a stream, you first specify the stream configuration in a MedicalScribeConfigurationEvent. This event includes channel definitions, encryption settings, and post-stream analytics settings, such as the output configuration for aggregated transcript and clinical note generation. These are additional streaming session configurations beyond those provided in your initial start request headers. Whether you are starting a new session or resuming an existing session, your first event must be a MedicalScribeConfigurationEvent.
After you send a MedicalScribeConfigurationEvent, you start AudioEvents and Amazon Web Services HealthScribe responds with real-time transcription results. When you are finished, to start processing the results with the post-stream analytics, send a MedicalScribeSessionControlEvent with a Type of END_OF_SESSION and Amazon Web Services HealthScribe starts the analytics.
You can pause or resume streaming. To pause streaming, complete the input stream without sending the MedicalScribeSessionControlEvent. To resume streaming, call the StartMedicalScribeStream operation and specify the same SessionId you used to start the stream. A resume sketch follows the list of required parameters below.
The following parameters are required:
language-code
media-encoding
media-sample-rate-hertz
For more information on streaming with Amazon Web Services HealthScribe, see Amazon Web Services HealthScribe.
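A sketch of the resume flow described above; the session ID is a placeholder, and, as with any session, the first event signaled must be a MedicalScribeConfigurationEvent (signal method names follow the examples below):
input_stream = Aws::TranscribeStreamingService::EventStreams::MedicalScribeInputStream.new
async_resp = client.start_medical_scribe_stream(
  session_id: 'PAUSED-SESSION-UUID', # placeholder: SessionId of the paused session
  language_code: 'en-US',
  media_sample_rate_hertz: 16_000,
  media_encoding: 'pcm',
  input_event_stream_handler: input_stream
)
# The first event must be a MedicalScribeConfigurationEvent, even when resuming.
input_stream.signal_configuration_event_event(
  # ... same channel definitions and settings as the original session
)
input_stream.signal_audio_event_event(
  # ... resumed audio
)
input_stream.signal_end_stream
resp = async_resp.wait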
Examples:
Bi-directional EventStream Operation Example
# You can signal input events after the initial request is established. Events
# will be sent to the stream immediately once the stream connection is
# established successfully.
# To signal events, you can call the #signal methods from an
# Aws::TranscribeStreamingService::EventStreams::MedicalScribeInputStream object.
# You must signal events before calling #wait or #join! on the async response.
input_stream = Aws::TranscribeStreamingService::EventStreams::MedicalScribeInputStream.new
async_resp = client.start_medical_scribe_stream(
# params input
input_event_stream_handler: input_stream
) do |out_stream|
# register callbacks for events
out_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
out_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
out_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
out_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
out_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
out_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
end
# => Aws::Seahorse::Client::AsyncResponse
# signal events
input_stream.signal_audio_event_event(
# ...
)
input_stream.signal_session_control_event_event(
# ...
)
input_stream.signal_configuration_event_event(
# ...
)
# make sure to signal :end_stream at the end
input_stream.signal_end_stream
# wait until stream is closed before finalizing the sync response
resp = async_resp.wait
# Or close the stream and finalize sync response immediately
resp = async_resp.join!
# You can also provide an Aws::TranscribeStreamingService::EventStreams::MedicalScribeResultStream object
# to register callbacks before initializing the request instead of processing
# from the request block.
output_stream = Aws::TranscribeStreamingService::EventStreams::MedicalScribeResultStream.new
# register callbacks for output events
output_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
output_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
output_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
output_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
output_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
output_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
output_stream.on_error_event do |event|
# catch unmodeled error event in the stream
raise event
# => Aws::Errors::EventError
# event.event_type => :error
# event.error_code => String
# event.error_message => String
end
async_resp = client.start_medical_scribe_stream(
# params input
input_event_stream_handler: input_stream,
output_event_stream_handler: output_stream
)
resp = async_resp.join!
# You can also iterate through events after the response is complete.
# Events are available at
resp.result_stream # => Enumerator
Request syntax with placeholder values
async_resp = async_client.start_medical_scribe_stream({
session_id: "SessionId",
language_code: "en-US", # required, accepts en-US
media_sample_rate_hertz: 1, # required
media_encoding: "pcm", # required, accepts pcm, ogg-opus, flac
input_event_stream_handler: EventStreams::MedicalScribeInputStream.new,
})
# => Seahorse::Client::AsyncResponse
async_resp.wait
# => Seahorse::Client::Response
# Or use async_resp.join!
Response structure
resp.session_id #=> String
resp.request_id #=> String
resp.language_code #=> String, one of "en-US"
resp.media_sample_rate_hertz #=> Integer
resp.media_encoding #=> String, one of "pcm", "ogg-opus", "flac"
# All events are available at resp.result_stream:
resp.result_stream #=> Enumerator
resp.result_stream.event_types #=> [:transcript_event, :bad_request_exception, :limit_exceeded_exception, :internal_failure_exception, :conflict_exception, :service_unavailable_exception]
# For :transcript_event event available at #on_transcript_event_event callback and response eventstream enumerator:
event.transcript_segment.segment_id #=> String
event.transcript_segment.begin_audio_time #=> Float
event.transcript_segment.end_audio_time #=> Float
event.transcript_segment.content #=> String
event.transcript_segment.items #=> Array
event.transcript_segment.items[0].begin_audio_time #=> Float
event.transcript_segment.items[0].end_audio_time #=> Float
event.transcript_segment.items[0].type #=> String, one of "pronunciation", "punctuation"
event.transcript_segment.items[0].confidence #=> Float
event.transcript_segment.items[0].content #=> String
event.transcript_segment.items[0].vocabulary_filter_match #=> Boolean
event.transcript_segment.is_partial #=> Boolean
event.transcript_segment.channel_id #=> String
# For :bad_request_exception event available at #on_bad_request_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :limit_exceeded_exception event available at #on_limit_exceeded_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :internal_failure_exception event available at #on_internal_failure_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :conflict_exception event available at #on_conflict_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :service_unavailable_exception event available at #on_service_unavailable_exception_event callback and response eventstream enumerator:
event.message #=> String
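As with the other operations, the enumerator above can be consumed once the stream has completed. A short sketch, assuming async_resp from the example above:
resp = async_resp.join!
resp.result_stream.each do |event|
  if event.is_a?(Aws::TranscribeStreamingService::Types::TranscriptEvent)
    segment = event.transcript_segment
    puts "[channel #{segment.channel_id}] #{segment.content}" unless segment.is_partial
  end
end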
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:session_id
(String)
—
Specify an identifier for your streaming session (in UUID format). If you don't include a SessionId in your request, Amazon Web Services HealthScribe generates an ID and returns it in the response.
-
:language_code
(required, String)
—
Specify the language code for your HealthScribe streaming session.
-
:media_sample_rate_hertz
(required, Integer)
—
Specify the sample rate of the input audio (in hertz). Amazon Web Services HealthScribe supports a range from 16,000 Hz to 48,000 Hz. The sample rate you specify must match that of your audio.
-
:media_encoding
(required, String)
—
Specify the encoding used for the input audio.
Supported formats are:
FLAC
OPUS-encoded audio in an Ogg container
PCM (only signed 16-bit little-endian audio formats, which does not include WAV)
For more information, see Media formats.
Yields:
- (output_event_stream_handler)
Returns:
-
(Types::StartMedicalScribeStreamResponse)
—
Returns a response object which responds to the following methods:
- #session_id => String
- #request_id => String
- #language_code => String
- #media_sample_rate_hertz => Integer
- #media_encoding => String
- #result_stream => Types::MedicalScribeResultStream
See Also:
# File 'gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb', line 1088

def start_medical_scribe_stream(params = {}, options = {}, &block)
  params = params.dup
  input_event_stream_handler = _event_stream_handler(
    :input,
    params.delete(:input_event_stream_handler),
    EventStreams::MedicalScribeInputStream
  )
  output_event_stream_handler = _event_stream_handler(
    :output,
    params.delete(:output_event_stream_handler) || params.delete(:event_stream_handler),
    EventStreams::MedicalScribeResultStream
  )
  yield(output_event_stream_handler) if block_given?
  req = build_request(:start_medical_scribe_stream, params)
  req.context[:input_event_stream_handler] = input_event_stream_handler
  req.handlers.add(Aws::Binary::EncodeHandler, priority: 55)
  req.context[:output_event_stream_handler] = output_event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 55)
  req.send_request(options, &block)
end
#start_medical_stream_transcription(params = {}) ⇒ Types::StartMedicalStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe Medical and the transcription results are streamed to your application.
The following parameters are required:
language-code
media-encoding
sample-rate
For more information on streaming with Amazon Transcribe Medical, see Transcribing streaming audio.
Examples:
Bi-directional EventStream Operation Example
# You can signal input events after the initial request is established. Events
# will be sent to the stream immediately once the stream connection is
# established successfully.
# To signal events, you can call the #signal methods from an
# Aws::TranscribeStreamingService::EventStreams::AudioStream object.
# You must signal events before calling #wait or #join! on the async response.
input_stream = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
async_resp = client.start_medical_stream_transcription(
# params input
input_event_stream_handler: input_stream
) do |out_stream|
# register callbacks for events
out_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
out_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
out_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
out_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
out_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
out_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
end
# => Aws::Seahorse::Client::AsyncResponse
# signal events
input_stream.signal_audio_event_event(
# ...
)
input_stream.signal_configuration_event_event(
# ...
)
# make sure to signal :end_stream at the end
input_stream.signal_end_stream
# wait until stream is closed before finalizing the sync response
resp = async_resp.wait
# Or close the stream and finalize sync response immediately
resp = async_resp.join!
# You can also provide an Aws::TranscribeStreamingService::EventStreams::MedicalTranscriptResultStream object
# to register callbacks before initializing the request instead of processing
# from the request block.
output_stream = Aws::TranscribeStreamingService::EventStreams::MedicalTranscriptResultStream.new
# register callbacks for output events
output_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
output_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
output_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
output_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
output_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
output_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
output_stream.on_error_event do |event|
# catch unmodeled error event in the stream
raise event
# => Aws::Errors::EventError
# event.event_type => :error
# event.error_code => String
# event.error_message => String
end
async_resp = client.start_medical_stream_transcription(
# params input
input_event_stream_handler: input_stream,
output_event_stream_handler: output_stream
)
resp = async_resp.join!
# You can also iterate through events after the response is complete.
# Events are available at
resp.transcript_result_stream # => Enumerator
Request syntax with placeholder values
async_resp = async_client.start_medical_stream_transcription({
language_code: "en-US", # required, accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR, ja-JP, ko-KR, zh-CN, th-TH, es-ES, ar-SA, pt-PT, ca-ES, ar-AE, hi-IN, zh-HK, nl-NL, no-NO, sv-SE, pl-PL, fi-FI, zh-TW, en-IN, en-IE, en-NZ, en-AB, en-ZA, en-WL, de-CH, af-ZA, eu-ES, hr-HR, cs-CZ, da-DK, fa-IR, gl-ES, el-GR, he-IL, id-ID, lv-LV, ms-MY, ro-RO, ru-RU, sr-RS, sk-SK, so-SO, tl-PH, uk-UA, vi-VN, zu-ZA
media_sample_rate_hertz: 1, # required
media_encoding: "pcm", # required, accepts pcm, ogg-opus, flac
vocabulary_name: "VocabularyName",
specialty: "PRIMARYCARE", # required, accepts PRIMARYCARE, CARDIOLOGY, NEUROLOGY, ONCOLOGY, RADIOLOGY, UROLOGY
type: "CONVERSATION", # required, accepts CONVERSATION, DICTATION
show_speaker_label: false,
session_id: "SessionId",
input_event_stream_handler: EventStreams::AudioStream.new,
enable_channel_identification: false,
number_of_channels: 1,
content_identification_type: "PHI", # accepts PHI
})
# => Seahorse::Client::AsyncResponse
async_resp.wait
# => Seahorse::Client::Response
# Or use async_resp.join!
Response structure
resp.request_id #=> String
resp.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR", "ja-JP", "ko-KR", "zh-CN", "th-TH", "es-ES", "ar-SA", "pt-PT", "ca-ES", "ar-AE", "hi-IN", "zh-HK", "nl-NL", "no-NO", "sv-SE", "pl-PL", "fi-FI", "zh-TW", "en-IN", "en-IE", "en-NZ", "en-AB", "en-ZA", "en-WL", "de-CH", "af-ZA", "eu-ES", "hr-HR", "cs-CZ", "da-DK", "fa-IR", "gl-ES", "el-GR", "he-IL", "id-ID", "lv-LV", "ms-MY", "ro-RO", "ru-RU", "sr-RS", "sk-SK", "so-SO", "tl-PH", "uk-UA", "vi-VN", "zu-ZA"
resp.media_sample_rate_hertz #=> Integer
resp.media_encoding #=> String, one of "pcm", "ogg-opus", "flac"
resp.vocabulary_name #=> String
resp.specialty #=> String, one of "PRIMARYCARE", "CARDIOLOGY", "NEUROLOGY", "ONCOLOGY", "RADIOLOGY", "UROLOGY"
resp.type #=> String, one of "CONVERSATION", "DICTATION"
resp.show_speaker_label #=> Boolean
resp.session_id #=> String
# All events are available at resp.transcript_result_stream:
resp.transcript_result_stream #=> Enumerator
resp.transcript_result_stream.event_types #=> [:transcript_event, :bad_request_exception, :limit_exceeded_exception, :internal_failure_exception, :conflict_exception, :service_unavailable_exception]
# For :transcript_event event available at #on_transcript_event_event callback and response eventstream enumerator:
event.transcript.results #=> Array
event.transcript.results[0].result_id #=> String
event.transcript.results[0].start_time #=> Float
event.transcript.results[0].end_time #=> Float
event.transcript.results[0].is_partial #=> Boolean
event.transcript.results[0].alternatives #=> Array
event.transcript.results[0].alternatives[0].transcript #=> String
event.transcript.results[0].alternatives[0].items #=> Array
event.transcript.results[0].alternatives[0].items[0].start_time #=> Float
event.transcript.results[0].alternatives[0].items[0].end_time #=> Float
event.transcript.results[0].alternatives[0].items[0].type #=> String, one of "pronunciation", "punctuation"
event.transcript.results[0].alternatives[0].items[0].content #=> String
event.transcript.results[0].alternatives[0].items[0].confidence #=> Float
event.transcript.results[0].alternatives[0].items[0].speaker #=> String
event.transcript.results[0].alternatives[0].entities #=> Array
event.transcript.results[0].alternatives[0].entities[0].start_time #=> Float
event.transcript.results[0].alternatives[0].entities[0].end_time #=> Float
event.transcript.results[0].alternatives[0].entities[0].category #=> String
event.transcript.results[0].alternatives[0].entities[0].content #=> String
event.transcript.results[0].alternatives[0].entities[0].confidence #=> Float
event.transcript.results[0].channel_id #=> String
# For :bad_request_exception event available at #on_bad_request_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :limit_exceeded_exception event available at #on_limit_exceeded_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :internal_failure_exception event available at #on_internal_failure_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :conflict_exception event available at #on_conflict_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :service_unavailable_exception event available at #on_service_unavailable_exception_event callback and response eventstream enumerator:
event.message #=> String
resp.enable_channel_identification #=> Boolean
resp.number_of_channels #=> Integer
resp.content_identification_type #=> String, one of "PHI"
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:language_code
(required, String)
—
Specify the language code that represents the language spoken in your audio.
Amazon Transcribe Medical only supports US English (en-US).
-
:media_sample_rate_hertz
(required, Integer)
—
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
-
:media_encoding
(required, String)
—
Specify the encoding used for the input audio. Supported formats are:
FLAC
OPUS-encoded audio in an Ogg container
PCM (only signed 16-bit little-endian audio formats, which does not include WAV)
For more information, see Media formats.
-
:vocabulary_name
(String)
—
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
-
:specialty
(required, String)
—
Specify the medical specialty contained in your audio.
-
:type
(required, String)
—
Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.
-
:show_speaker_label
(Boolean)
—
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization).
-
:session_id
(String)
—
Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
-
:enable_channel_identification
(Boolean)
—
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
If you include EnableChannelIdentification in your request, you must also include NumberOfChannels.
For more information, see Transcribing multi-channel audio.
-
-
:number_of_channels
(Integer)
—
Specify the number of channels in your audio stream. This value must be 2, as only two channels are supported. If your audio doesn't contain multiple channels, do not include this parameter in your request.
If you include NumberOfChannels in your request, you must also include EnableChannelIdentification. A multi-channel request sketch follows this options list.
-
:content_identification_type
(String)
—
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see Identifying personal health information (PHI) in a transcription.
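Here is the multi-channel request sketch referenced in the :number_of_channels option above; the sample rate and output handling are illustrative:
async_resp = async_client.start_medical_stream_transcription(
  language_code: 'en-US',
  media_sample_rate_hertz: 16_000,
  media_encoding: 'pcm',
  specialty: 'PRIMARYCARE',
  type: 'CONVERSATION',
  enable_channel_identification: true,
  number_of_channels: 2, # must be 2; requires enable_channel_identification
  input_event_stream_handler: Aws::TranscribeStreamingService::EventStreams::AudioStream.new
) do |out_stream|
  out_stream.on_transcript_event_event do |event|
    # Each result is labeled with the channel it came from.
    event.transcript.results.each do |result|
      puts "#{result.channel_id}: #{result.alternatives.first&.transcript}"
    end
  end
end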
Yields:
- (output_event_stream_handler)
Returns:
-
(Types::StartMedicalStreamTranscriptionResponse)
—
Returns a response object which responds to the following methods:
- #request_id => String
- #language_code => String
- #media_sample_rate_hertz => Integer
- #media_encoding => String
- #vocabulary_name => String
- #specialty => String
- #type => String
- #show_speaker_label => Boolean
- #session_id => String
- #transcript_result_stream => Types::MedicalTranscriptResultStream
- #enable_channel_identification => Boolean
- #number_of_channels => Integer
- #content_identification_type => String
See Also:
# File 'gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb', line 1426

def start_medical_stream_transcription(params = {}, options = {}, &block)
  params = params.dup
  input_event_stream_handler = _event_stream_handler(
    :input,
    params.delete(:input_event_stream_handler),
    EventStreams::AudioStream
  )
  output_event_stream_handler = _event_stream_handler(
    :output,
    params.delete(:output_event_stream_handler) || params.delete(:event_stream_handler),
    EventStreams::MedicalTranscriptResultStream
  )
  yield(output_event_stream_handler) if block_given?
  req = build_request(:start_medical_stream_transcription, params)
  req.context[:input_event_stream_handler] = input_event_stream_handler
  req.handlers.add(Aws::Binary::EncodeHandler, priority: 55)
  req.context[:output_event_stream_handler] = output_event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 55)
  req.send_request(options, &block)
end
#start_stream_transcription(params = {}) ⇒ Types::StartStreamTranscriptionResponse
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe and the transcription results are streamed to your application.
The following parameters are required:
language-code or identify-language or identify-multiple-language
media-encoding
sample-rate
For more information on streaming with Amazon Transcribe, see Transcribing streaming audio.
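The request syntax below uses language_code. As a sketch of the identify-language alternative named in the required parameters (the identify_language and language_options parameters are assumed from the operation's model, not shown in the syntax below):
async_resp = async_client.start_stream_transcription(
  identify_language: true, # assumed parameter: enables automatic language identification
  language_options: 'en-US,es-US', # assumed parameter: comma-separated candidate languages
  media_sample_rate_hertz: 16_000,
  media_encoding: 'pcm',
  input_event_stream_handler: Aws::TranscribeStreamingService::EventStreams::AudioStream.new
)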
Examples:
Bi-directional EventStream Operation Example
# You can signal input events after the initial request is established. Events
# will be sent to the stream immediately once the stream connection is
# established successfully.
# To signal events, you can call the #signal methods from an
# Aws::TranscribeStreamingService::EventStreams::AudioStream object.
# You must signal events before calling #wait or #join! on the async response.
input_stream = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
async_resp = client.start_stream_transcription(
# params input
input_event_stream_handler: input_stream
) do |out_stream|
# register callbacks for events
out_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
out_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
out_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
out_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
out_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
out_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
end
# => Seahorse::Client::AsyncResponse
# signal events
input_stream.signal_audio_event_event(
# ...
)
input_stream.signal_configuration_event_event(
# ...
)
# make sure to signal :end_stream at the end
input_stream.signal_end_stream
# wait until stream is closed before finalizing the sync response
resp = async_resp.wait
# Or close the stream and finalize sync response immediately
resp = async_resp.join!
# You can also provide an Aws::TranscribeStreamingService::EventStreams::TranscriptResultStream object
# to register callbacks before initializing the request instead of processing
# from the request block.
output_stream = Aws::TranscribeStreamingService::EventStreams::TranscriptResultStream.new
# register callbacks for output events
output_stream.on_transcript_event_event do |event|
event # => Aws::TranscribeStreamingService::Types::TranscriptEvent
end
output_stream.on_bad_request_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::BadRequestException
end
output_stream.on_limit_exceeded_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::LimitExceededException
end
output_stream.on_internal_failure_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::InternalFailureException
end
output_stream.on_conflict_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ConflictException
end
output_stream.on_service_unavailable_exception_event do |event|
event # => Aws::TranscribeStreamingService::Types::ServiceUnavailableException
end
output_stream.on_error_event do |event|
# catch unmodeled error event in the stream
raise event
# => Aws::Errors::EventError
# event.event_type => :error
# event.error_code => String
# event.error_message => String
end
async_resp = client.start_stream_transcription(
# params input
input_event_stream_handler: input_stream,
output_event_stream_handler: output_stream
)
resp = async_resp.join!
# You can also iterate through events after the response is complete.
# Events are available at
resp.transcript_result_stream # => Enumerator
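For instance, once `join!` (or `wait`) has returned, the enumerator can be replayed to pull out only the final transcripts. The filtering below is an illustrative sketch, not part of the SDK:

resp.transcript_result_stream.each do |event|
  case event
  when Aws::TranscribeStreamingService::Types::TranscriptEvent
    event.transcript.results.each do |result|
      next if result.is_partial # keep only final segments
      puts result.alternatives.first.transcript
    end
  when Aws::TranscribeStreamingService::Types::BadRequestException
    warn "bad request: #{event.message}"
  end
end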
Request syntax with placeholder values
async_resp = async_client.start_stream_transcription({
language_code: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR, ja-JP, ko-KR, zh-CN, th-TH, es-ES, ar-SA, pt-PT, ca-ES, ar-AE, hi-IN, zh-HK, nl-NL, no-NO, sv-SE, pl-PL, fi-FI, zh-TW, en-IN, en-IE, en-NZ, en-AB, en-ZA, en-WL, de-CH, af-ZA, eu-ES, hr-HR, cs-CZ, da-DK, fa-IR, gl-ES, el-GR, he-IL, id-ID, lv-LV, ms-MY, ro-RO, ru-RU, sr-RS, sk-SK, so-SO, tl-PH, uk-UA, vi-VN, zu-ZA
media_sample_rate_hertz: 1, # required
media_encoding: "pcm", # required, accepts pcm, ogg-opus, flac
vocabulary_name: "VocabularyName",
session_id: "SessionId",
input_event_stream_handler: EventStreams::AudioStream.new,
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
show_speaker_label: false,
enable_channel_identification: false,
number_of_channels: 1,
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
language_model_name: "ModelName",
identify_language: false,
language_options: "LanguageOptions",
preferred_language: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR, ja-JP, ko-KR, zh-CN, th-TH, es-ES, ar-SA, pt-PT, ca-ES, ar-AE, hi-IN, zh-HK, nl-NL, no-NO, sv-SE, pl-PL, fi-FI, zh-TW, en-IN, en-IE, en-NZ, en-AB, en-ZA, en-WL, de-CH, af-ZA, eu-ES, hr-HR, cs-CZ, da-DK, fa-IR, gl-ES, el-GR, he-IL, id-ID, lv-LV, ms-MY, ro-RO, ru-RU, sr-RS, sk-SK, so-SO, tl-PH, uk-UA, vi-VN, zu-ZA
identify_multiple_languages: false,
vocabulary_names: "VocabularyNames",
vocabulary_filter_names: "VocabularyFilterNames",
})
# => Seahorse::Client::AsyncResponse
async_resp.wait
# => Seahorse::Client::Response
# Or use async_resp.join!
Response structure
resp.request_id #=> String
resp.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR", "ja-JP", "ko-KR", "zh-CN", "th-TH", "es-ES", "ar-SA", "pt-PT", "ca-ES", "ar-AE", "hi-IN", "zh-HK", "nl-NL", "no-NO", "sv-SE", "pl-PL", "fi-FI", "zh-TW", "en-IN", "en-IE", "en-NZ", "en-AB", "en-ZA", "en-WL", "de-CH", "af-ZA", "eu-ES", "hr-HR", "cs-CZ", "da-DK", "fa-IR", "gl-ES", "el-GR", "he-IL", "id-ID", "lv-LV", "ms-MY", "ro-RO", "ru-RU", "sr-RS", "sk-SK", "so-SO", "tl-PH", "uk-UA", "vi-VN", "zu-ZA"
resp.media_sample_rate_hertz #=> Integer
resp.media_encoding #=> String, one of "pcm", "ogg-opus", "flac"
resp.vocabulary_name #=> String
resp.session_id #=> String
# All events are available at resp.transcript_result_stream:
resp.transcript_result_stream #=> Enumerator
resp.transcript_result_stream.event_types #=> [:transcript_event, :bad_request_exception, :limit_exceeded_exception, :internal_failure_exception, :conflict_exception, :service_unavailable_exception]
# For :transcript_event event available at #on_transcript_event_event callback and response eventstream enumerator:
event.transcript.results #=> Array
event.transcript.results[0].result_id #=> String
event.transcript.results[0].start_time #=> Float
event.transcript.results[0].end_time #=> Float
event.transcript.results[0].is_partial #=> Boolean
event.transcript.results[0].alternatives #=> Array
event.transcript.results[0].alternatives[0].transcript #=> String
event.transcript.results[0].alternatives[0].items #=> Array
event.transcript.results[0].alternatives[0].items[0].start_time #=> Float
event.transcript.results[0].alternatives[0].items[0].end_time #=> Float
event.transcript.results[0].alternatives[0].items[0].type #=> String, one of "pronunciation", "punctuation"
event.transcript.results[0].alternatives[0].items[0].content #=> String
event.transcript.results[0].alternatives[0].items[0].vocabulary_filter_match #=> Boolean
event.transcript.results[0].alternatives[0].items[0].speaker #=> String
event.transcript.results[0].alternatives[0].items[0].confidence #=> Float
event.transcript.results[0].alternatives[0].items[0].stable #=> Boolean
event.transcript.results[0].alternatives[0].entities #=> Array
event.transcript.results[0].alternatives[0].entities[0].start_time #=> Float
event.transcript.results[0].alternatives[0].entities[0].end_time #=> Float
event.transcript.results[0].alternatives[0].entities[0].category #=> String
event.transcript.results[0].alternatives[0].entities[0].type #=> String
event.transcript.results[0].alternatives[0].entities[0].content #=> String
event.transcript.results[0].alternatives[0].entities[0].confidence #=> Float
event.transcript.results[0].channel_id #=> String
event.transcript.results[0].language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR", "ja-JP", "ko-KR", "zh-CN", "th-TH", "es-ES", "ar-SA", "pt-PT", "ca-ES", "ar-AE", "hi-IN", "zh-HK", "nl-NL", "no-NO", "sv-SE", "pl-PL", "fi-FI", "zh-TW", "en-IN", "en-IE", "en-NZ", "en-AB", "en-ZA", "en-WL", "de-CH", "af-ZA", "eu-ES", "hr-HR", "cs-CZ", "da-DK", "fa-IR", "gl-ES", "el-GR", "he-IL", "id-ID", "lv-LV", "ms-MY", "ro-RO", "ru-RU", "sr-RS", "sk-SK", "so-SO", "tl-PH", "uk-UA", "vi-VN", "zu-ZA"
event.transcript.results[0].language_identification #=> Array
event.transcript.results[0].language_identification[0].language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR", "ja-JP", "ko-KR", "zh-CN", "th-TH", "es-ES", "ar-SA", "pt-PT", "ca-ES", "ar-AE", "hi-IN", "zh-HK", "nl-NL", "no-NO", "sv-SE", "pl-PL", "fi-FI", "zh-TW", "en-IN", "en-IE", "en-NZ", "en-AB", "en-ZA", "en-WL", "de-CH", "af-ZA", "eu-ES", "hr-HR", "cs-CZ", "da-DK", "fa-IR", "gl-ES", "el-GR", "he-IL", "id-ID", "lv-LV", "ms-MY", "ro-RO", "ru-RU", "sr-RS", "sk-SK", "so-SO", "tl-PH", "uk-UA", "vi-VN", "zu-ZA"
event.transcript.results[0].language_identification[0].score #=> Float
# For :bad_request_exception event available at #on_bad_request_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :limit_exceeded_exception event available at #on_limit_exceeded_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :internal_failure_exception event available at #on_internal_failure_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :conflict_exception event available at #on_conflict_exception_event callback and response eventstream enumerator:
event.message #=> String
# For :service_unavailable_exception event available at #on_service_unavailable_exception_event callback and response eventstream enumerator:
event.message #=> String
resp.vocabulary_filter_name #=> String
resp.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.show_speaker_label #=> Boolean
resp.enable_channel_identification #=> Boolean
resp.number_of_channels #=> Integer
resp.enable_partial_results_stabilization #=> Boolean
resp.partial_results_stability #=> String, one of "high", "medium", "low"
resp.content_identification_type #=> String, one of "PII"
resp.content_redaction_type #=> String, one of "PII"
resp.pii_entity_types #=> String
resp.language_model_name #=> String
resp.identify_language #=> Boolean
resp.language_options #=> String
resp.preferred_language #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR", "ja-JP", "ko-KR", "zh-CN", "th-TH", "es-ES", "ar-SA", "pt-PT", "ca-ES", "ar-AE", "hi-IN", "zh-HK", "nl-NL", "no-NO", "sv-SE", "pl-PL", "fi-FI", "zh-TW", "en-IN", "en-IE", "en-NZ", "en-AB", "en-ZA", "en-WL", "de-CH", "af-ZA", "eu-ES", "hr-HR", "cs-CZ", "da-DK", "fa-IR", "gl-ES", "el-GR", "he-IL", "id-ID", "lv-LV", "ms-MY", "ro-RO", "ru-RU", "sr-RS", "sk-SK", "so-SO", "tl-PH", "uk-UA", "vi-VN", "zu-ZA"
resp.identify_multiple_languages #=> Boolean
resp.vocabulary_names #=> String
resp.vocabulary_filter_names #=> String
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
-
:language_code
(String)
—
Specify the language code that represents the language spoken in your audio.
If you're unsure of the language spoken in your audio, consider using `IdentifyLanguage` to enable automatic language identification.
For a list of languages supported with Amazon Transcribe streaming, refer to the Supported languages table.
-
:media_sample_rate_hertz
(required, Integer)
—
The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
-
:media_encoding
(required, String)
—
Specify the encoding of your input audio. Supported formats are:
FLAC
OPUS-encoded audio in an Ogg container
PCM (only signed 16-bit little-endian audio formats, which does not include WAV)
For more information, see Media formats.
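As an illustration of the PCM constraint, a 16-bit little-endian WAV file must have its container header stripped before its samples are streamed. A minimal sketch, assuming a canonical 44-byte header and the `async_client` from the earlier examples (production code should parse the RIFF chunks rather than hard-code the offset; the file name is hypothetical):

# Sketch: stream a 16 kHz, 16-bit little-endian mono WAV file as raw PCM.
input_stream = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
async_resp = async_client.start_stream_transcription(
  language_code: "en-US",
  media_sample_rate_hertz: 16_000, # must match the file's actual sample rate
  media_encoding: "pcm",           # raw samples only; WAV headers are not accepted
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event { |event| p event }
end
File.open("speech.wav", "rb") do |file|
  file.seek(44) # assumption: canonical 44-byte WAV header
  while (chunk = file.read(4096))
    input_stream.signal_audio_event_event(audio_chunk: chunk)
  end
end
input_stream.signal_end_stream
async_resp.join!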
-
:vocabulary_name
(String)
—
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.
This parameter is not intended for use with the `IdentifyLanguage` parameter. If you're including `IdentifyLanguage` in your request and want to use one or more custom vocabularies with your transcription, use the `VocabularyNames` parameter instead.
For more information, see Custom vocabularies.
-
:session_id
(String)
—
Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe generates an ID and returns it in the response.
-
:vocabulary_filter_name
(String)
—
Specify the name of the custom vocabulary filter that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.
If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.
This parameter is not intended for use with the `IdentifyLanguage` parameter. If you're including `IdentifyLanguage` in your request and want to use one or more vocabulary filters with your transcription, use the `VocabularyFilterNames` parameter instead.
For more information, see Using vocabulary filtering with unwanted words.
-
:vocabulary_filter_method
(String)
—
Specify how you want your vocabulary filter applied to your transcript.
To replace words with `***`, choose `mask`.
To delete words, choose `remove`.
To flag words without changing them, choose `tag`.
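For example, an illustrative sketch that masks matches from a custom filter (the filter name is hypothetical; `async_client` and `input_stream` are assumed to be set up as in the examples above):

# Sketch: mask unwanted words using a custom vocabulary filter.
async_resp = async_client.start_stream_transcription(
  language_code: "en-US",
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  vocabulary_filter_name: "my-profanity-filter", # hypothetical filter name
  vocabulary_filter_method: "mask",              # matches are replaced with ***
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event { |event| p event }
end
# ... signal audio events on input_stream, signal_end_stream, then async_resp.join!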
- :show_speaker_label
(Boolean)
—
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization).
-
:enable_channel_identification
(Boolean)
—
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
If you include `EnableChannelIdentification` in your request, you must also include `NumberOfChannels`.
For more information, see Transcribing multi-channel audio.
-
:number_of_channels
(Integer)
—
Specify the number of channels in your audio stream. This value must be `2`, as only two channels are supported. If your audio doesn't contain multiple channels, do not include this parameter in your request.
If you include `NumberOfChannels` in your request, you must also include `EnableChannelIdentification`.
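Putting the two channel parameters together, a sketch for two-channel telephone audio (assuming `async_client` and `input_stream` from the examples above):

# Sketch: per-channel transcription of stereo audio.
async_resp = async_client.start_stream_transcription(
  language_code: "en-US",
  media_sample_rate_hertz: 8_000,
  media_encoding: "pcm",
  enable_channel_identification: true,
  number_of_channels: 2, # must accompany enable_channel_identification
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event do |event|
    event.transcript.results.each do |result|
      next if result.is_partial
      # channel_id identifies which audio channel produced the segment
      puts "#{result.channel_id}: #{result.alternatives.first.transcript}"
    end
  end
end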
- :enable_partial_results_stabilization
(Boolean)
—
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization.
-
:partial_results_stability
(String)
—
Specify the level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see Partial-result stabilization.
-
:content_identification_type
(String)
—
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in `PiiEntityTypes` is flagged upon complete transcription of an audio segment. If you don't include `PiiEntityTypes` in your request, all PII is identified.
You can't set `ContentIdentificationType` and `ContentRedactionType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see Redacting or identifying personally identifiable information.
-
:content_redaction_type
(String)
—
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in `PiiEntityTypes` is redacted upon complete transcription of an audio segment. If you don't include `PiiEntityTypes` in your request, all PII is redacted.
You can't set `ContentRedactionType` and `ContentIdentificationType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see Redacting or identifying personally identifiable information.
-
:pii_entity_types
(String)
—
Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select `ALL`.
Values must be comma-separated and can include: `ADDRESS`, `BANK_ACCOUNT_NUMBER`, `BANK_ROUTING`, `CREDIT_DEBIT_CVV`, `CREDIT_DEBIT_EXPIRY`, `CREDIT_DEBIT_NUMBER`, `EMAIL`, `NAME`, `PHONE`, `PIN`, `SSN`, or `ALL`.
Note that if you include `PiiEntityTypes` in your request, you must also include `ContentIdentificationType` or `ContentRedactionType`.
If you include `ContentRedactionType` or `ContentIdentificationType` in your request, but do not include `PiiEntityTypes`, all PII is redacted or identified.
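For example, an illustrative sketch that redacts only names and phone numbers (assumes `async_client` and `input_stream` from the examples above; values are illustrative):

# Sketch: redact names and phone numbers in the output transcript.
async_resp = async_client.start_stream_transcription(
  language_code: "en-US",
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  content_redaction_type: "PII",   # do not also set content_identification_type
  pii_entity_types: "NAME,PHONE",  # comma-separated, or "ALL"
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event { |event| p event }
end
# ... signal audio events on input_stream, signal_end_stream, then async_resp.join!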
- :language_model_name
(String)
—
Specify the name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.
For more information, see Custom language models.
-
:identify_language
(Boolean)
—
Enables automatic language identification for your transcription.
If you include `IdentifyLanguage`, you must include a list of language codes, using `LanguageOptions`, that you think may be present in your audio stream.
You can also include a preferred language using `PreferredLanguage`. Adding a preferred language can help Amazon Transcribe identify the language faster than if you omit this parameter.
If you have multi-channel audio that contains different languages on each channel, and you've enabled channel identification, automatic language identification identifies the dominant language on each audio channel.
Note that you must include either `LanguageCode`, `IdentifyLanguage`, or `IdentifyMultipleLanguages` in your request. If you include more than one of these parameters, your transcription job fails.
Streaming language identification can't be combined with custom language models or redaction.
-
:language_options
(String)
—
Specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended.
Including language options can improve the accuracy of language identification.
If you include `LanguageOptions` in your request, you must also include `IdentifyLanguage` or `IdentifyMultipleLanguages`.
For a list of languages supported with Amazon Transcribe streaming, refer to the Supported languages table.
You can only include one language dialect per language per stream. For example, you cannot include `en-US` and `en-AU` in the same request.
-
:preferred_language
(String)
—
Specify a preferred language from the subset of language codes you specified in `LanguageOptions`.
You can only use this parameter if you've included `IdentifyLanguage` and `LanguageOptions` in your request.
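As an illustrative sketch, automatic language identification with a preferred-language hint (assumes `async_client` and `input_stream` from the examples above; the candidate list is illustrative):

# Sketch: language identification with a preferred-language hint.
# Note: language_code is omitted; it is mutually exclusive with identify_language.
async_resp = async_client.start_stream_transcription(
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  identify_language: true,
  language_options: "en-US,es-US,fr-CA",
  preferred_language: "en-US", # must be one of language_options
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event do |event|
    event.transcript.results.each do |result|
      next if result.is_partial
      puts "[#{result.language_code}] #{result.alternatives.first.transcript}"
    end
  end
end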
- :identify_multiple_languages
(Boolean)
—
Enables automatic multi-language identification in your transcription job request. Use this parameter if your stream contains more than one language. If your stream contains only one language, use IdentifyLanguage instead.
If you include `IdentifyMultipleLanguages`, you must include a list of language codes, using `LanguageOptions`, that you think may be present in your stream.
If you want to apply a custom vocabulary or a custom vocabulary filter to your automatic multiple language identification request, include `VocabularyNames` or `VocabularyFilterNames`.
Note that you must include one of `LanguageCode`, `IdentifyLanguage`, or `IdentifyMultipleLanguages` in your request. If you include more than one of these parameters, your transcription job fails.
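An illustrative sketch of multi-language identification with per-language custom vocabularies (vocabulary names are hypothetical; `async_client` and `input_stream` as in the examples above):

# Sketch: a stream that may alternate between English and Spanish.
async_resp = async_client.start_stream_transcription(
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  identify_multiple_languages: true,
  language_options: "en-US,es-US",
  vocabulary_names: "my-en-vocab,my-es-vocab", # hypothetical custom vocabularies
  input_event_stream_handler: input_stream
) do |out_stream|
  out_stream.on_transcript_event_event { |event| p event }
end
# ... signal audio events on input_stream, signal_end_stream, then async_resp.join!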
- :vocabulary_names
(String)
—
Specify the names of the custom vocabularies that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
If none of the languages of the specified custom vocabularies match the language identified in your media, your job fails.
This parameter is only intended for use with the `IdentifyLanguage` parameter. If you're not including `IdentifyLanguage` in your request and want to use a custom vocabulary with your transcription, use the `VocabularyName` parameter instead.
For more information, see Custom vocabularies.
-
:vocabulary_filter_names
(String)
—
Specify the names of the custom vocabulary filters that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.
If none of the languages of the specified custom vocabulary filters match the language identified in your media, your job fails.
This parameter is only intended for use with the `IdentifyLanguage` parameter. If you're not including `IdentifyLanguage` in your request and want to use a custom vocabulary filter with your transcription, use the `VocabularyFilterName` parameter instead.
For more information, see Using vocabulary filtering with unwanted words.
Yields:
- (output_event_stream_handler)
Returns:
- (Types::StartStreamTranscriptionResponse) — Returns a response object which responds to the following methods:
- #request_id => String
- #language_code => String
- #media_sample_rate_hertz => Integer
- #media_encoding => String
- #vocabulary_name => String
- #session_id => String
- #transcript_result_stream => Types::TranscriptResultStream
- #vocabulary_filter_name => String
- #vocabulary_filter_method => String
- #show_speaker_label => Boolean
- #enable_channel_identification => Boolean
- #number_of_channels => Integer
- #enable_partial_results_stabilization => Boolean
- #partial_results_stability => String
- #content_identification_type => String
- #content_redaction_type => String
- #pii_entity_types => String
- #language_model_name => String
- #identify_language => Boolean
- #language_options => String
- #preferred_language => String
- #identify_multiple_languages => Boolean
- #vocabulary_names => String
- #vocabulary_filter_names => String
See Also:
# File 'gems/aws-sdk-transcribestreamingservice/lib/aws-sdk-transcribestreamingservice/async_client.rb', line 2042

def start_stream_transcription(params = {}, options = {}, &block)
  params = params.dup
  input_event_stream_handler = _event_stream_handler(
    :input,
    params.delete(:input_event_stream_handler),
    EventStreams::AudioStream
  )
  output_event_stream_handler = _event_stream_handler(
    :output,
    params.delete(:output_event_stream_handler) || params.delete(:event_stream_handler),
    EventStreams::TranscriptResultStream
  )
  yield(output_event_stream_handler) if block_given?
  req = build_request(:start_stream_transcription, params)
  req.context[:input_event_stream_handler] = input_event_stream_handler
  req.handlers.add(Aws::Binary::EncodeHandler, priority: 55)
  req.context[:output_event_stream_handler] = output_event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 55)
  req.send_request(options, &block)
end