Forecast

This page documents the functions available when using the Forecast module, created with @service Forecast.

Documentation

Main.Forecast.create_datasetMethod
create_dataset(dataset_name, dataset_type, domain, schema)
create_dataset(dataset_name, dataset_type, domain, schema, params::Dict{String,<:Any})

Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:

  • DataFrequency - How frequently your historical time-series data is collected.
  • Domain and DatasetType - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
  • Schema - A schema specifies the fields in the dataset, including the field name and data type.

After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see howitworks-datasets-groups. To get a list of all your datasets, use the ListDatasets operation. For example Forecast datasets, see the Amazon Forecast Sample GitHub repository. The Status of a dataset must be ACTIVE before you can import training data. Use the DescribeDataset operation to get the status.

Arguments

  • dataset_name: A name for the dataset.
  • dataset_type: The dataset type. Valid values depend on the chosen Domain.
  • domain: The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match. The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see howitworks-datasets-groups.
  • schema: The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see howitworks-domains-ds-types.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DataFrequency": The frequency of data collection. This parameter is required for RELATEDTIMESERIES datasets. Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, "D" indicates every day and "15min" indicates every 15 minutes.
  • "EncryptionConfig": An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
  • "Tags": The optional metadata that you apply to the dataset to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
source
Main.Forecast.create_dataset_groupMethod
create_dataset_group(dataset_group_name, domain)
create_dataset_group(dataset_group_name, domain, params::Dict{String,<:Any})

Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the UpdateDatasetGroup operation. After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see howitworks-datasets-groups. To get a list of all your datasets groups, use the ListDatasetGroups operation. The Status of a dataset group must be ACTIVE before you can use the dataset group to create a predictor. To get the status, use the DescribeDatasetGroup operation.

Arguments

  • dataset_group_name: A name for the dataset group.
  • domain: The domain associated with the dataset group. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDataset operation must match. The Domain and DatasetType that you choose determine the fields that must be present in training data that you import to a dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires that item_id, timestamp, and demand fields are present in your data. For more information, see howitworks-datasets-groups.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetArns": An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.
  • "Tags": The optional metadata that you apply to the dataset group to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
source
Main.Forecast.create_dataset_import_jobMethod
create_dataset_import_job(data_source, dataset_arn, dataset_import_job_name)
create_dataset_import_job(data_source, dataset_arn, dataset_import_job_name, params::Dict{String,<:Any})

Imports your training data to an Amazon Forecast dataset. You provide the location of your training data in an Amazon Simple Storage Service (Amazon S3) bucket and the Amazon Resource Name (ARN) of the dataset that you want to import the data to. You must specify a DataSource object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data, as Amazon Forecast makes a copy of your data and processes it in an internal AWS system. For more information, see aws-forecast-iam-roles. The training data must be in CSV format. The delimiter must be a comma (,). You can specify the path to a specific CSV file, the S3 bucket, or to a folder in the S3 bucket. For the latter two cases, Amazon Forecast imports all files up to the limit of 10,000 files. Because dataset imports are not aggregated, your most recent dataset import is the one that is used when training a predictor or generating a forecast. Make sure that your most recent dataset import contains all of the data you want to model off of, and not just the new data collected since the previous import. To get a list of all your dataset import jobs, filtered by specified criteria, use the ListDatasetImportJobs operation.

Arguments

  • data_source: The location of the training data to import and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. The training data must be stored in an Amazon S3 bucket. If encryption is used, DataSource must include an AWS Key Management Service (KMS) key and the IAM role must allow Amazon Forecast permission to access the key. The KMS key and IAM role must match those specified in the EncryptionConfig parameter of the CreateDataset operation.
  • dataset_arn: The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.
  • dataset_import_job_name: The name for the dataset import job. We recommend including the current timestamp in the name, for example, 20190721DatasetImport. This can help you avoid getting a ResourceAlreadyExistsException exception.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "GeolocationFormat": The format of the geolocation attribute. The geolocation attribute can be formatted in one of two ways: LATLONG - the latitude and longitude in decimal format (Example: 47.61-122.33). CCPOSTALCODE (US Only) - the country code (US), followed by the 5-digit ZIP code (Example: US98121).
  • "Tags": The optional metadata that you apply to the dataset import job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  • "TimeZone": A single time zone for every item in your dataset. This option is ideal for datasets with all timestamps within a single time zone, or if all timestamps are normalized to a single time zone. Refer to the Joda-Time API for a complete list of valid time zone names.
  • "TimestampFormat": The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported "yyyy-MM-dd" For the following data frequencies: Y, M, W, and D "yyyy-MM-dd HH:mm:ss" For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D If the format isn't specified, Amazon Forecast expects the format to be "yyyy-MM-dd HH:mm:ss".
  • "UseGeolocationForTimeZone": Automatically derive time zone information from the geolocation attribute. This option is ideal for datasets that contain timestamps in multiple time zones and those timestamps are expressed in local time.
source
Main.Forecast.create_forecastMethod
create_forecast(forecast_name, predictor_arn)
create_forecast(forecast_name, predictor_arn, params::Dict{String,<:Any})

Creates a forecast for each item in the TARGET_TIME_SERIES dataset that was used to train the predictor. This is known as inference. To retrieve the forecast for a single item at low latency, use the QueryForecast operation. To export the complete forecast into your Amazon Simple Storage Service (Amazon S3) bucket, use the CreateForecastExportJob operation. The range of the forecast is determined by the ForecastHorizon value, which you specify in the CreatePredictor request. When you query a forecast, you can request a specific date range within the forecast. To get a list of all your forecasts, use the ListForecasts operation. The forecasts generated by Amazon Forecast are in the same time zone as the dataset that was used to create the predictor. For more information, see howitworks-forecast. The Status of the forecast must be ACTIVE before you can query or export the forecast. Use the DescribeForecast operation to get the status.

Arguments

  • forecast_name: A name for the forecast.
  • predictor_arn: The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "ForecastTypes": The quantiles at which probabilistic forecasts are generated. You can currently specify up to 5 quantiles per forecast. Accepted values include 0.01 to 0.99 (increments of .01 only) and mean. The mean forecast is different from the median (0.50) when the distribution is not symmetric (for example, Beta and Negative Binomial). The default value is ["0.1", "0.5", "0.9"].
  • "Tags": The optional metadata that you apply to the forecast to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
source
Main.Forecast.create_forecast_export_jobMethod
create_forecast_export_job(destination, forecast_arn, forecast_export_job_name)
create_forecast_export_job(destination, forecast_arn, forecast_export_job_name, params::Dict{String,<:Any})

Exports a forecast created by the CreateForecast operation to your Amazon Simple Storage Service (Amazon S3) bucket. The forecast file name will match the following conventions: <ForecastExportJobName>_<ExportTimestamp>_<PartNumber>, where the <ExportTimestamp> component is in Java SimpleDateFormat (yyyy-MM-ddTHH-mm-ssZ). You must specify a DataDestination object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the Amazon S3 bucket. For more information, see aws-forecast-iam-roles. For more information, see howitworks-forecast. To get a list of all your forecast export jobs, use the ListForecastExportJobs operation. The Status of the forecast export job must be ACTIVE before you can access the forecast in your Amazon S3 bucket. To get the status, use the DescribeForecastExportJob operation.

Arguments

  • destination: The location where you want to save the forecast and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket. If encryption is used, Destination must include an AWS Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.
  • forecast_arn: The Amazon Resource Name (ARN) of the forecast that you want to export.
  • forecast_export_job_name: The name for the forecast export job.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Tags": The optional metadata that you apply to the forecast export job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
source
Main.Forecast.create_predictorMethod
create_predictor(featurization_config, forecast_horizon, input_data_config, predictor_name)
create_predictor(featurization_config, forecast_horizon, input_data_config, predictor_name, params::Dict{String,<:Any})

Creates an Amazon Forecast predictor. In the request, provide a dataset group and either specify an algorithm or let Amazon Forecast choose an algorithm for you using AutoML. If you specify an algorithm, you also can override algorithm-specific hyperparameters. Amazon Forecast uses the algorithm to train a predictor using the latest version of the datasets in the specified dataset group. You can then generate a forecast using the CreateForecast operation. To see the evaluation metrics, use the GetAccuracyMetrics operation. You can specify a featurization configuration to fill and aggregate the data fields in the TARGET_TIME_SERIES dataset to improve model training. For more information, see FeaturizationConfig. For RELATED_TIME_SERIES datasets, CreatePredictor verifies that the DataFrequency specified when the dataset was created matches the ForecastFrequency. TARGET_TIME_SERIES datasets don't have this restriction. Amazon Forecast also verifies the delimiter and timestamp format. For more information, see howitworks-datasets-groups. By default, predictors are trained and evaluated at the 0.1 (P10), 0.5 (P50), and 0.9 (P90) quantiles. You can choose custom forecast types to train and evaluate your predictor by setting the ForecastTypes.

AutoML - If you want Amazon Forecast to evaluate each algorithm and choose the one that minimizes the objective function, set PerformAutoML to true. The objective function is defined as the mean of the weighted losses over the forecast types. By default, these are the p10, p50, and p90 quantile losses. For more information, see EvaluationResult. When AutoML is enabled, the following properties are disallowed: AlgorithmArn, HPOConfig, PerformHPO, and TrainingParameters.

To get a list of all of your predictors, use the ListPredictors operation. Before you can use the predictor to create a forecast, the Status of the predictor must be ACTIVE, signifying that training has completed. To get the status, use the DescribePredictor operation.

Arguments

  • featurization_config: The featurization configuration.
  • forecast_horizon: Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length. For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days. The maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.
  • input_data_config: Describes the dataset group that contains the data to use to train the predictor.
  • predictor_name: A name for the predictor.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "AlgorithmArn": The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true. Supported algorithms: arn:aws:forecast:::algorithm/ARIMA arn:aws:forecast:::algorithm/CNN-QR arn:aws:forecast:::algorithm/DeepARPlus arn:aws:forecast:::algorithm/ETS arn:aws:forecast:::algorithm/NPTS arn:aws:forecast:::algorithm/Prophet
  • "EncryptionConfig": An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
  • "EvaluationParameters": Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.
  • "ForecastTypes": Specifies the forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean. The default value is ["0.10", "0.50", "0.9"].
  • "HPOConfig": Provides hyperparameter override values for the algorithm. If you don't provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes. If you included the HPOConfig object, you must set PerformHPO to true.
  • "PerformAutoML": Whether to perform AutoML. When Amazon Forecast performs AutoML, it evaluates the algorithms it provides and chooses the best algorithm and configuration for your training dataset. The default value is false. In this case, you are required to specify an algorithm. Set PerformAutoML to true to have Amazon Forecast perform AutoML. This is a good option if you aren't sure which algorithm is suitable for your training data. In this case, PerformHPO must be false.
  • "PerformHPO": Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as running a hyperparameter tuning job. The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm. To override the default values, set PerformHPO to true and, optionally, supply the HyperParameterTuningJobConfig object. The tuning job specifies a metric to optimize, which hyperparameters participate in tuning, and the valid range for each tunable hyperparameter. In this case, you are required to specify an algorithm and PerformAutoML must be false. The following algorithms support HPO: DeepAR+ CNN-QR
  • "Tags": The optional metadata that you apply to the predictor to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  • "TrainingParameters": The hyperparameters to override for model training. The hyperparameters that you can override are listed in the individual algorithms. For the list of supported algorithms, see aws-forecast-choosing-recipes.
source
Main.Forecast.create_predictor_backtest_export_jobMethod
create_predictor_backtest_export_job(destination, predictor_arn, predictor_backtest_export_job_name)
create_predictor_backtest_export_job(destination, predictor_arn, predictor_backtest_export_job_name, params::Dict{String,<:Any})

Exports backtest forecasts and accuracy metrics generated by the CreatePredictor operation. Two folders containing CSV files are exported to your specified S3 bucket. The export file names will match the following conventions: <ExportJobName>_<ExportTimestamp>_<PartNumber>.csv. The <ExportTimestamp> component is in Java SimpleDateFormat (yyyy-MM-ddTHH-mm-ssZ). You must specify a DataDestination object that includes an Amazon S3 bucket and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the Amazon S3 bucket. For more information, see aws-forecast-iam-roles. The Status of the export job must be ACTIVE before you can access the export in your Amazon S3 bucket. To get the status, use the DescribePredictorBacktestExportJob operation.

Arguments

  • destination: The destination for the backtest export job: an Amazon S3 path and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location.
  • predictor_arn: The Amazon Resource Name (ARN) of the predictor that you want to export.
  • predictor_backtest_export_job_name: The name for the backtest export job.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Tags": Optional metadata to help you categorize and organize your backtests. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive. The following restrictions apply to tags: For each resource, each tag key must be unique and each tag key must have one value. Maximum number of tags per resource:
    1. Maximum key length: 128 Unicode characters in UTF-8. Maximum value length: 256
    Unicode characters in UTF-8. Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply. Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
source
Main.Forecast.delete_datasetMethod
delete_dataset(dataset_arn)
delete_dataset(dataset_arn, params::Dict{String,<:Any})

Deletes an Amazon Forecast dataset that was created using the CreateDataset operation. You can only delete datasets that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeDataset operation. Forecast does not automatically update any dataset groups that contain the deleted dataset. In order to update the dataset group, use the UpdateDatasetGroup operation, omitting the deleted dataset's ARN.

Arguments

  • dataset_arn: The Amazon Resource Name (ARN) of the dataset to delete.
source
Main.Forecast.delete_dataset_groupMethod
delete_dataset_group(dataset_group_arn)
delete_dataset_group(dataset_group_arn, params::Dict{String,<:Any})

Deletes a dataset group created using the CreateDatasetGroup operation. You can only delete dataset groups that have a status of ACTIVE, CREATE_FAILED, or UPDATE_FAILED. To get the status, use the DescribeDatasetGroup operation. This operation deletes only the dataset group, not the datasets in the group.

Arguments

  • dataset_group_arn: The Amazon Resource Name (ARN) of the dataset group to delete.
source
Main.Forecast.delete_dataset_import_jobMethod
delete_dataset_import_job(dataset_import_job_arn)
delete_dataset_import_job(dataset_import_job_arn, params::Dict{String,<:Any})

Deletes a dataset import job created using the CreateDatasetImportJob operation. You can delete only dataset import jobs that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeDatasetImportJob operation.

Arguments

  • dataset_import_job_arn: The Amazon Resource Name (ARN) of the dataset import job to delete.
source
Main.Forecast.delete_forecastMethod
delete_forecast(forecast_arn)
delete_forecast(forecast_arn, params::Dict{String,<:Any})

Deletes a forecast created using the CreateForecast operation. You can delete only forecasts that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeForecast operation. You can't delete a forecast while it is being exported. After a forecast is deleted, you can no longer query the forecast.

Arguments

  • forecast_arn: The Amazon Resource Name (ARN) of the forecast to delete.
source
Main.Forecast.delete_forecast_export_jobMethod
delete_forecast_export_job(forecast_export_job_arn)
delete_forecast_export_job(forecast_export_job_arn, params::Dict{String,<:Any})

Deletes a forecast export job created using the CreateForecastExportJob operation. You can delete only export jobs that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeForecastExportJob operation.

Arguments

  • forecast_export_job_arn: The Amazon Resource Name (ARN) of the forecast export job to delete.
source
Main.Forecast.delete_predictorMethod
delete_predictor(predictor_arn)
delete_predictor(predictor_arn, params::Dict{String,<:Any})

Deletes a predictor created using the CreatePredictor operation. You can delete only predictors that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribePredictor operation.

Arguments

  • predictor_arn: The Amazon Resource Name (ARN) of the predictor to delete.
source
Main.Forecast.delete_predictor_backtest_export_jobMethod
delete_predictor_backtest_export_job(predictor_backtest_export_job_arn)
delete_predictor_backtest_export_job(predictor_backtest_export_job_arn, params::Dict{String,<:Any})

Deletes a predictor backtest export job.

Arguments

  • predictor_backtest_export_job_arn: The Amazon Resource Name (ARN) of the predictor backtest export job to delete.
source
Main.Forecast.delete_resource_treeMethod
delete_resource_tree(resource_arn)
delete_resource_tree(resource_arn, params::Dict{String,<:Any})

Deletes an entire resource tree. This operation will delete the parent resource and its child resources. Child resources are resources that were created from another resource. For example, when a forecast is generated from a predictor, the forecast is the child resource and the predictor is the parent resource. Amazon Forecast resources possess the following parent-child resource hierarchies:

  • Dataset: dataset import jobs
  • Dataset Group: predictors, predictor backtest export jobs, forecasts, forecast export jobs
  • Predictor: predictor backtest export jobs, forecasts, forecast export jobs
  • Forecast: forecast export jobs

DeleteResourceTree will only delete Amazon Forecast resources, and will not delete datasets or exported files stored in Amazon S3.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the parent resource to delete. All child resources of the parent resource will also be deleted.
source
Main.Forecast.describe_datasetMethod
describe_dataset(dataset_arn)
describe_dataset(dataset_arn, params::Dict{String,<:Any})

Describes an Amazon Forecast dataset created using the CreateDataset operation. In addition to listing the parameters specified in the CreateDataset request, this operation includes the following dataset properties: CreationTime LastModificationTime Status

Arguments

  • dataset_arn: The Amazon Resource Name (ARN) of the dataset.
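Because most create operations require the resource to reach ACTIVE before the next step, a common pattern is to poll the describe call; a hedged sketch, assuming dataset_arn holds the ARN of a dataset that is being created or imported into:

# Poll until the dataset becomes ACTIVE (status values as documented above).
while Forecast.describe_dataset(dataset_arn)["Status"] != "ACTIVE"
    sleep(30)
end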
source
Main.Forecast.describe_dataset_groupMethod
describe_dataset_group(dataset_group_arn)
describe_dataset_group(dataset_group_arn, params::Dict{String,<:Any})

Describes a dataset group created using the CreateDatasetGroup operation. In addition to listing the parameters provided in the CreateDatasetGroup request, this operation includes the following properties: DatasetArns - The datasets belonging to the group. CreationTime LastModificationTime Status

Arguments

  • dataset_group_arn: The Amazon Resource Name (ARN) of the dataset group.
source
Main.Forecast.describe_dataset_import_jobMethod
describe_dataset_import_job(dataset_import_job_arn)
describe_dataset_import_job(dataset_import_job_arn, params::Dict{String,<:Any})

Describes a dataset import job created using the CreateDatasetImportJob operation. In addition to listing the parameters provided in the CreateDatasetImportJob request, this operation includes the following properties: CreationTime LastModificationTime DataSize FieldStatistics Status Message - If an error occurred, information about the error.

Arguments

  • dataset_import_job_arn: The Amazon Resource Name (ARN) of the dataset import job.
source
Main.Forecast.describe_forecastMethod
describe_forecast(forecast_arn)
describe_forecast(forecast_arn, params::Dict{String,<:Any})

Describes a forecast created using the CreateForecast operation. In addition to listing the properties provided in the CreateForecast request, this operation lists the following properties: DatasetGroupArn - The dataset group that provided the training data. CreationTime LastModificationTime Status Message - If an error occurred, information about the error.

Arguments

  • forecast_arn: The Amazon Resource Name (ARN) of the forecast.
source
Main.Forecast.describe_forecast_export_jobMethod
describe_forecast_export_job(forecast_export_job_arn)
describe_forecast_export_job(forecast_export_job_arn, params::Dict{String,<:Any})

Describes a forecast export job created using the CreateForecastExportJob operation. In addition to listing the properties provided by the user in the CreateForecastExportJob request, this operation lists the following properties: CreationTime LastModificationTime Status Message - If an error occurred, information about the error.

Arguments

  • forecast_export_job_arn: The Amazon Resource Name (ARN) of the forecast export job.
source
Main.Forecast.describe_predictorMethod
describe_predictor(predictor_arn)
describe_predictor(predictor_arn, params::Dict{String,<:Any})

Describes a predictor created using the CreatePredictor operation. In addition to listing the properties provided in the CreatePredictor request, this operation lists the following properties: DatasetImportJobArns - The dataset import jobs used to import training data. AutoMLAlgorithmArns - If AutoML is performed, the algorithms that were evaluated. CreationTime LastModificationTime Status Message - If an error occurred, information about the error.

Arguments

  • predictor_arn: The Amazon Resource Name (ARN) of the predictor that you want information about.
source
Main.Forecast.describe_predictor_backtest_export_jobMethod
describe_predictor_backtest_export_job(predictor_backtest_export_job_arn)
describe_predictor_backtest_export_job(predictor_backtest_export_job_arn, params::Dict{String,<:Any})

Describes a predictor backtest export job created using the CreatePredictorBacktestExportJob operation. In addition to listing the properties provided by the user in the CreatePredictorBacktestExportJob request, this operation lists the following properties: CreationTime LastModificationTime Status Message (if an error occurred)

Arguments

  • predictor_backtest_export_job_arn: The Amazon Resource Name (ARN) of the predictor backtest export job.
source
Main.Forecast.get_accuracy_metricsMethod
get_accuracy_metrics(predictor_arn)
get_accuracy_metrics(predictor_arn, params::Dict{String,<:Any})

Provides metrics on the accuracy of the models that were trained by the CreatePredictor operation. Use metrics to see how well the model performed and to decide whether to use the predictor to generate a forecast. For more information, see Predictor Metrics. This operation generates metrics for each backtest window that was evaluated. The number of backtest windows (NumberOfBacktestWindows) is specified using the EvaluationParameters object, which is optionally included in the CreatePredictor request. If NumberOfBacktestWindows isn't specified, the number defaults to one. The parameters of the filling method determine which items contribute to the metrics. If you want all items to contribute, specify zero. If you want only those items that have complete data in the range being evaluated to contribute, specify nan. For more information, see FeaturizationMethod. Before you can get accuracy metrics, the Status of the predictor must be ACTIVE, signifying that training has completed. To get the status, use the DescribePredictor operation.

Arguments

  • predictor_arn: The Amazon Resource Name (ARN) of the predictor to get metrics for.
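A minimal sketch of fetching backtest metrics once the predictor is ACTIVE; predictor_arn is assumed to hold the predictor's ARN, and the response keys follow the service's documented shape:

metrics = Forecast.get_accuracy_metrics(predictor_arn)
# Each entry in "PredictorEvaluationResults" corresponds to an evaluated algorithm and
# contains per-backtest-window error metrics such as RMSE and weighted quantile losses.
results = metrics["PredictorEvaluationResults"]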
source
Main.Forecast.list_dataset_groupsMethod
list_dataset_groups()
list_dataset_groups(params::Dict{String,<:Any})

Returns a list of dataset groups created using the CreateDatasetGroup operation. For each dataset group, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the dataset group ARN with the DescribeDatasetGroup operation.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_dataset_import_jobsMethod
list_dataset_import_jobs()
list_dataset_import_jobs(params::Dict{String,<:Any})

Returns a list of dataset import jobs created using the CreateDatasetImportJob operation. For each import job, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the ARN with the DescribeDatasetImportJob operation. You can filter the list by providing an array of Filter objects.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Filters": An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or ISNOT, which specifies whether to include or exclude the datasets that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties Condition - The condition to apply. Valid values are IS and ISNOT. To include the datasets that match the statement, specify IS. To exclude matching datasets, specify IS_NOT. Key - The name of the parameter to filter on. Valid values are DatasetArn and Status. Value - The value to match. For example, to list all dataset import jobs whose status is ACTIVE, you specify the following filter: "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]
  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_datasetsMethod
list_datasets()
list_datasets(params::Dict{String,<:Any})

Returns a list of datasets created using the CreateDataset operation. For each dataset, a summary of its properties, including its Amazon Resource Name (ARN), is returned. To retrieve the complete set of properties, use the ARN with the DescribeDataset operation.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_forecast_export_jobsMethod
list_forecast_export_jobs()
list_forecast_export_jobs(params::Dict{String,<:Any})

Returns a list of forecast export jobs created using the CreateForecastExportJob operation. For each forecast export job, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). To retrieve the complete set of properties, use the ARN with the DescribeForecastExportJob operation. You can filter the list using an array of Filter objects.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Filters": An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or ISNOT, which specifies whether to include or exclude the forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties Condition - The condition to apply. Valid values are IS and ISNOT. To include the forecast export jobs that match the statement, specify IS. To exclude matching forecast export jobs, specify IS_NOT. Key - The name of the parameter to filter on. Valid values are ForecastArn and Status. Value - The value to match. For example, to list all jobs that export a forecast named electricityforecast, specify the following filter: "Filters": [ { "Condition": "IS", "Key": "ForecastArn", "Value": "arn:aws:forecast:us-west-2:&lt;acct-id&gt;:forecast/electricityforecast" } ]
  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_forecastsMethod
list_forecasts()
list_forecasts(params::Dict{String,<:Any})

Returns a list of forecasts created using the CreateForecast operation. For each forecast, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). To retrieve the complete set of properties, specify the ARN with the DescribeForecast operation. You can filter the list using an array of Filter objects.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Filters": An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or ISNOT, which specifies whether to include or exclude the forecasts that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties Condition - The condition to apply. Valid values are IS and ISNOT. To include the forecasts that match the statement, specify IS. To exclude matching forecasts, specify ISNOT. Key - The name of the parameter to filter on. Valid values are DatasetGroupArn, PredictorArn, and Status. Value - The value to match. For example, to list all forecasts whose status is not ACTIVE, you would specify: "Filters": [ { "Condition": "ISNOT", "Key": "Status", "Value": "ACTIVE" } ]
  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_predictor_backtest_export_jobsMethod
list_predictor_backtest_export_jobs()
list_predictor_backtest_export_jobs(params::Dict{String,<:Any})

Returns a list of predictor backtest export jobs created using the CreatePredictorBacktestExportJob operation. This operation returns a summary for each backtest export job. You can filter the list using an array of Filter objects. To retrieve the complete set of properties for a particular backtest export job, use the ARN with the DescribePredictorBacktestExportJob operation.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Filters": An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or ISNOT, which specifies whether to include or exclude the predictor backtest export jobs that match the statement from the list. The match statement consists of a key and a value. Filter properties Condition - The condition to apply. Valid values are IS and ISNOT. To include the predictor backtest export jobs that match the statement, specify IS. To exclude matching predictor backtest export jobs, specify IS_NOT. Key - The name of the parameter to filter on. Valid values are PredictorArn and Status. Value - The value to match.
  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_predictorsMethod
list_predictors()
list_predictors(params::Dict{String,<:Any})

Returns a list of predictors created using the CreatePredictor operation. For each predictor, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the ARN with the DescribePredictor operation. You can filter the list using an array of Filter objects.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Filters": An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or ISNOT, which specifies whether to include or exclude the predictors that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties Condition - The condition to apply. Valid values are IS and ISNOT. To include the predictors that match the statement, specify IS. To exclude matching predictors, specify IS_NOT. Key - The name of the parameter to filter on. Valid values are DatasetGroupArn and Status. Value - The value to match. For example, to list all predictors whose status is ACTIVE, you would specify: "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]
  • "MaxResults": The number of items to return in the response.
  • "NextToken": If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
source
Main.Forecast.list_tags_for_resourceMethod
list_tags_for_resource(resource_arn)
list_tags_for_resource(resource_arn, params::Dict{String,<:Any})

Lists the tags for an Amazon Forecast resource.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) that identifies the resource for which to list the tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast export jobs.
source
Main.Forecast.stop_resourceMethod
stop_resource(resource_arn)
stop_resource(resource_arn, params::Dict{String,<:Any})

Stops a resource. The resource undergoes the following states: CREATE_STOPPING and CREATE_STOPPED. You cannot resume a resource once it has been stopped. This operation can be applied to the following resources (and their corresponding child resources): Dataset Import Job, Predictor Job, Forecast Job, Forecast Export Job, Predictor Backtest Export Job.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) that identifies the resource to stop. The supported ARNs are DatasetImportJobArn, PredictorArn, PredictorBacktestExportJobArn, ForecastArn, and ForecastExportJobArn.
source
Main.Forecast.tag_resourceMethod
tag_resource(resource_arn, tags)
tag_resource(resource_arn, tags, params::Dict{String,<:Any})

Associates the specified tags to a resource with the specified resourceArn. If existing tags on a resource are not specified in the request parameters, they are not changed. When a resource is deleted, the tags associated with that resource are also deleted.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) that identifies the resource for which to list the tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast export jobs.
  • tags: The tags to add to the resource. A tag is an array of key-value pairs. The following basic restrictions apply to tags: Maximum number of tags per resource - 50. For each resource, each tag key must be unique, and each tag key can have only one value. Maximum key length - 128 Unicode characters in UTF-8. Maximum value length - 256 Unicode characters in UTF-8. If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Tag keys and values are case sensitive. Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
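A minimal sketch of tagging a predictor; the tag key and value are illustrative, and predictor_arn is assumed to hold the resource's ARN:

Forecast.tag_resource(predictor_arn,
    [Dict("Key" => "project", "Value" => "demand-planning")])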
source
Main.Forecast.untag_resourceMethod
untag_resource(resource_arn, tag_keys)
untag_resource(resource_arn, tag_keys, params::Dict{String,<:Any})

Deletes the specified tags from a resource.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) that identifies the resource for which to list the tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast exports.
  • tag_keys: The keys of the tags to be removed.
source
Main.Forecast.update_dataset_groupMethod
update_dataset_group(dataset_arns, dataset_group_arn)
update_dataset_group(dataset_arns, dataset_group_arn, params::Dict{String,<:Any})

Replaces the datasets in a dataset group with the specified datasets. The Status of the dataset group must be ACTIVE before you can use the dataset group to create a predictor. Use the DescribeDatasetGroup operation to get the status.

Arguments

  • dataset_arns: An array of the Amazon Resource Names (ARNs) of the datasets to add to the dataset group.
  • dataset_group_arn: The ARN of the dataset group.
source