Bedrock Runtime

This page documents functions available when using the Bedrock_Runtime module, created with @service Bedrock_Runtime.
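
For example, a minimal setup might look like this (a sketch, assuming AWS.jl is installed and AWS credentials are configured in your environment):

    using AWS
    @service Bedrock_Runtime

    # @service generates a Bedrock_Runtime module whose functions (converse,
    # converse_stream, invoke_model, ...) wrap the corresponding service operations.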

Index

  • Main.Bedrock_Runtime.converse
  • Main.Bedrock_Runtime.converse_stream
  • Main.Bedrock_Runtime.invoke_model
  • Main.Bedrock_Runtime.invoke_model_with_response_stream

Documentation

Main.Bedrock_Runtime.converse (Method)
converse(messages, model_id)
converse(messages, model_id, params::Dict{String,<:Any})

Sends messages to the specified Amazon Bedrock model. Converse provides a consistent interface that works with all models that support messages, so you can write code once and use it with different models. If a model has unique inference parameters, you can also pass those parameters to the model. For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide. For example code, see Converse API examples in the Amazon Bedrock User Guide. This operation requires permission for the bedrock:InvokeModel action.

Arguments

  • messages: The messages that you want to send to the model.
  • model_id: The identifier for the model that you want to call. The modelId to provide depends on the type of model that you use:
      • If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
      • If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
      • If you use a custom model, first purchase Provisioned Throughput for it, then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "additionalModelRequestFields": Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse supports in the inferenceConfig field. For more information, see Model parameters.
  • "additionalModelResponseFieldPaths": Additional model parameters field paths to return in the response. Converse returns the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths. [ "/stop_sequence" ] For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. Converse rejects an empty JSON Pointer or incorrectly structured JSON Pointer with a 400 error code. if the JSON Pointer is valid, but the requested field is not in the model response, it is ignored by Converse.
  • "guardrailConfig": Configuration information for a guardrail that you want to use in the request.
  • "inferenceConfig": Inference parameters to pass to the model. Converse supports a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  • "system": A system prompt to pass to the model.
  • "toolConfig": Configuration information for the tools that the model can use when generating a response. This field is only supported by Anthropic Claude 3, Cohere Command R, Cohere Command R+, and Mistral Large models.
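
For example, a single-turn call to converse might look like the following sketch (the model ID is only an example; substitute one you have access to, and the exact return type depends on your AWS.jl version):

    using AWS
    @service Bedrock_Runtime

    # Messages follow the Converse API shape: a list of role/content pairs,
    # where each content entry is a list of content blocks.
    messages = [
        Dict(
            "role" => "user",
            "content" => [Dict("text" => "Explain Amazon Bedrock in one sentence.")],
        ),
    ]

    model_id = "anthropic.claude-3-haiku-20240307-v1:0"  # example ID only

    params = Dict{String,Any}(
        "inferenceConfig" => Dict("maxTokens" => 256, "temperature" => 0.5),
        "system" => [Dict("text" => "You are a concise assistant.")],
    )

    response = Bedrock_Runtime.converse(messages, model_id, params)

    # With a JSON response, the generated text is typically found under
    # output.message.content in the parsed result.
    println(response["output"]["message"]["content"][1]["text"])
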
Main.Bedrock_Runtime.converse_stream (Method)
converse_stream(messages, model_id)
converse_stream(messages, model_id, params::Dict{String,<:Any})

Sends messages to the specified Amazon Bedrock model and returns the response in a stream. ConverseStream provides a consistent API that works with all Amazon Bedrock models that support messages, so you can write code once and use it with different models. If a model has unique inference parameters, you can also pass those parameters to the model. To find out whether a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response. For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide. For example code, see Conversation streaming example in the Amazon Bedrock User Guide. This operation requires permission for the bedrock:InvokeModelWithResponseStream action.

Arguments

  • messages: The messages that you want to send to the model.
  • model_id: The ID for the model. The modelId to provide depends on the type of model that you use:
      • If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
      • If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
      • If you use a custom model, first purchase Provisioned Throughput for it, then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "additionalModelRequestFields": Additional inference parameters that the model supports, beyond the base set of inference parameters that ConverseStream supports in the inferenceConfig field.
  • "additionalModelResponseFieldPaths": Additional model parameters field paths to return in the response. ConverseStream returns the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths. [ "/stop_sequence" ] For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. ConverseStream rejects an empty JSON Pointer or incorrectly structured JSON Pointer with a 400 error code. if the JSON Pointer is valid, but the requested field is not in the model response, it is ignored by ConverseStream.
  • "guardrailConfig": Configuration information for a guardrail that you want to use in the request.
  • "inferenceConfig": Inference parameters to pass to the model. ConverseStream supports a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  • "system": A system prompt to send to the model.
  • "toolConfig": Configuration information for the tools that the model can use when generating a response. This field is only supported by Anthropic Claude 3 models.
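
As a sketch, a streaming call mirrors converse (again, the model ID is only an example). The response is delivered as an event stream (application/vnd.amazon.eventstream), so depending on your AWS.jl version you may receive raw bytes that must be decoded frame by frame rather than a parsed object:

    using AWS
    @service Bedrock_Runtime

    messages = [
        Dict("role" => "user", "content" => [Dict("text" => "Write a haiku about rivers.")]),
    ]
    model_id = "anthropic.claude-3-haiku-20240307-v1:0"  # example ID only

    # The return value carries the event stream; decoding of the binary
    # frames is left out of this sketch.
    raw = Bedrock_Runtime.converse_stream(messages, model_id)
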
Main.Bedrock_Runtime.invoke_model (Method)
invoke_model(body, model_id)
invoke_model(body, model_id, params::Dict{String,<:Any})

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings. For example code, see Invoke model code examples in the Amazon Bedrock User Guide. This operation requires permission for the bedrock:InvokeModel action.

Arguments

  • body: The prompt and inference parameters in the format specified in the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  • model_id: The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model that you use:
      • If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
      • If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
      • If you use a custom model, first purchase Provisioned Throughput for it, then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Accept": The desired MIME type of the inference body in the response. The default value is application/json.
  • "Content-Type": The MIME type of the input data in the request. You must specify application/json.
  • "X-Amzn-Bedrock-GuardrailIdentifier": The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations. You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body. You enable the guardrail but the contentType isn't application/json. You provide a guardrail identifier, but guardrailVersion isn't specified.
  • "X-Amzn-Bedrock-GuardrailVersion": The version number for the guardrail. The value can also be DRAFT.
  • "X-Amzn-Bedrock-Trace": Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
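
As an illustration, a minimal invoke_model call against an Amazon Titan text model might look like the following sketch. The body schema is model-specific (see Inference parameters), the model ID is only an example, and note that some AWS.jl versions expect header-valued parameters to be nested under a "headers" key rather than passed at the top level:

    using AWS
    @service Bedrock_Runtime

    # Request body for a Titan text model; every model family defines its own
    # JSON schema (see "Inference parameters" in the Bedrock User Guide).
    body = """
    {
        "inputText": "Summarize the benefits of Provisioned Throughput.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5}
    }
    """

    model_id = "amazon.titan-text-express-v1"  # example model ID only

    response = Bedrock_Runtime.invoke_model(
        body,
        model_id,
        # Some AWS.jl versions require HTTP headers nested under a "headers" key.
        Dict{String,Any}("headers" => Dict("Content-Type" => "application/json")),
    )

For Titan text models the generated text is typically found under results[1]["outputText"] in the parsed response; other model families use different response shapes.
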
Main.Bedrock_Runtime.invoke_model_with_response_stream (Method)
invoke_model_with_response_stream(body, model_id)
invoke_model_with_response_stream(body, model_id, params::Dict{String,<:Any})

Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream. To see if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response. The CLI doesn't support InvokeModelWithResponseStream. For example code, see Invoke model with streaming code example in the Amazon Bedrock User Guide. This operation requires permissions to perform the bedrock:InvokeModelWithResponseStream action.

Arguments

  • body: The prompt and inference parameters in the format specified in the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  • model_id: The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model that you use:
      • If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
      • If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
      • If you use a custom model, first purchase Provisioned Throughput for it, then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Content-Type": The MIME type of the input data in the request. You must specify application/json.
  • "X-Amzn-Bedrock-Accept": The desired MIME type of the inference body in the response. The default value is application/json.
  • "X-Amzn-Bedrock-GuardrailIdentifier": The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error is thrown in the following situations. You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body. You enable the guardrail but the contentType isn't application/json. You provide a guardrail identifier, but guardrailVersion isn't specified.
  • "X-Amzn-Bedrock-GuardrailVersion": The version number for the guardrail. The value can also be DRAFT.
  • "X-Amzn-Bedrock-Trace": Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
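
A sketch of the streaming variant follows the same shape as invoke_model (example model ID only). As with converse_stream, the response arrives as an event stream, so you may receive raw bytes to decode frame by frame depending on your AWS.jl version:

    using AWS
    @service Bedrock_Runtime

    body = """{"inputText": "Tell me a short story.", "textGenerationConfig": {"maxTokenCount": 512}}"""
    model_id = "amazon.titan-text-express-v1"  # example model ID only

    # The event-stream payload; frame decoding is left out of this sketch.
    raw = Bedrock_Runtime.invoke_model_with_response_stream(body, model_id)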