Lookoutequipment

This page documents functions available when using the Lookoutequipment module, created with @service Lookoutequipment.

Documentation

Main.Lookoutequipment.create_datasetMethod
create_dataset(client_token, dataset_name)
create_dataset(client_token, dataset_name, params::Dict{String,<:Any})

Creates a container for a collection of data being ingested for analysis. The dataset contains the metadata describing where the data is and what the data actually looks like. For example, it contains the location of the data source, the data schema, and other information. A dataset also contains any tags associated with the ingested data.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • dataset_name: The name of the dataset being created.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetSchema": A JSON description of the data that is in each time series dataset, including names, column names, and data types.
  • "ServerSideKmsKeyId": Provides the identifier of the KMS key used to encrypt dataset data by Amazon Lookout for Equipment.
  • "Tags": Any tags associated with the ingested data described in the dataset.
source
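As a sketch, creating a dataset with an inline schema might look like the following. The module setup is shown as a comment, the dataset name, schema columns, and tag are illustrative, and the service call itself is commented out because it requires configured AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs

# Optional parameters are passed as a Dict keyed by the AWS field names.
params = Dict{String,Any}(
    "DatasetSchema" => Dict(
        "InlineDataSchema" => """{"Components":[{"ComponentName":"pump","Columns":[{"Name":"Timestamp","Type":"DATETIME"},{"Name":"Sensor1","Type":"DOUBLE"}]}]}""",
    ),
    "Tags" => [Dict("Key" => "team", "Value" => "reliability")],
)

client_token = string(uuid4())  # idempotency token for the request

# The actual call (requires AWS credentials; names are hypothetical):
# Lookoutequipment.create_dataset(client_token, "pump-sensor-data", params)
```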
Main.Lookoutequipment.create_inference_schedulerMethod
create_inference_scheduler(client_token, data_input_configuration, data_output_configuration, data_upload_frequency, inference_scheduler_name, model_name, role_arn)
create_inference_scheduler(client_token, data_input_configuration, data_output_configuration, data_upload_frequency, inference_scheduler_name, model_name, role_arn, params::Dict{String,<:Any})

Creates a scheduled inference. Scheduling an inference is setting up a continuous real-time inference plan to analyze new measurement data. When setting up the schedule, you provide an S3 bucket location for the input data, assign it a delimiter between separate entries in the data, set an offset delay if desired, and set the frequency of inferencing. You must also provide an S3 bucket location for the output data.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • data_input_configuration: Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
  • data_output_configuration: Specifies configuration information for the output results for the inference scheduler, including the S3 location for the output.
  • data_upload_frequency: How often data is uploaded to the source Amazon S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment runs inference on your data. For more information, see Understanding the inference process.
  • inference_scheduler_name: The name of the inference scheduler being created.
  • model_name: The name of the previously trained machine learning model being used to create the inference scheduler.
  • role_arn: The Amazon Resource Name (ARN) of a role with permission to access the data source being used for the inference.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DataDelayOffsetInMinutes": The interval (in minutes) of planned delay at the start of each inference segment. For example, if inference is set to run every ten minutes and the delay is set to five minutes, a scheduler that would otherwise wake at 09:10 instead wakes at 09:15 to check your Amazon S3 bucket. The delay provides a buffer for you to upload data at the same frequency, so that you don't have to stop and restart the scheduler when uploading new data. For more information, see Understanding the inference process.
  • "ServerSideKmsKeyId": Provides the identifier of the KMS key used to encrypt inference scheduler data by Amazon Lookout for Equipment.
  • "Tags": Any tags associated with the inference scheduler.
source
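The required arguments above might be assembled as in the sketch below. The bucket names, scheduler name, model name, and role ARN are hypothetical, and the service call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs

# Input and output S3 locations (bucket/prefix values are illustrative).
data_input  = Dict("S3InputConfiguration" => Dict("Bucket" => "my-input-bucket", "Prefix" => "pump/input/"))
data_output = Dict("S3OutputConfiguration" => Dict("Bucket" => "my-output-bucket", "Prefix" => "pump/output/"))

# Optional 5-minute delay before each segment, to allow uploads to land.
params = Dict{String,Any}("DataDelayOffsetInMinutes" => 5)

# The actual call (requires AWS credentials; "PT5M" is the upload frequency):
# Lookoutequipment.create_inference_scheduler(string(uuid4()), data_input, data_output,
#     "PT5M", "pump-scheduler", "pump-model",
#     "arn:aws:iam::123456789012:role/LookoutRole", params)
```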
Main.Lookoutequipment.create_labelMethod
create_label(client_token, end_time, label_group_name, rating, start_time)
create_label(client_token, end_time, label_group_name, rating, start_time, params::Dict{String,<:Any})

Creates a label for an event.

Arguments

  • client_token: A unique identifier for the request to create a label. If you do not set the client request token, Lookout for Equipment generates one.
  • end_time: The end time of the labeled event.
  • label_group_name: The name of a group of labels. Data in this field will be retained for service usage. Follow best practices for the security of your data.
  • rating: Indicates whether a labeled event represents an anomaly.
  • start_time: The start time of the labeled event.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Equipment": Indicates that a label pertains to a particular piece of equipment. Data in this field will be retained for service usage. Follow best practices for the security of your data.
  • "FaultCode": Provides additional information about the label. The fault code must be defined in the FaultCodes attribute of the label group. Data in this field will be retained for service usage. Follow best practices for the security of your data.
  • "Notes": Metadata providing additional information about the label. Data in this field will be retained for service usage. Follow best practices for the security of your data.
source
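A label for a one-day anomalous event might be created as sketched below. The label group name, equipment id, and fault code are illustrative (the fault code must already exist in the group's FaultCodes), and the call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs, Dates

# Label window expressed as epoch seconds.
start_time = datetime2unix(DateTime(2023, 5, 1))
end_time   = datetime2unix(DateTime(2023, 5, 2))

params = Dict{String,Any}(
    "Equipment" => "pump-07",          # illustrative equipment id
    "FaultCode" => "BEARING_FAILURE",  # must be defined in the label group's FaultCodes
)

# The actual call (requires AWS credentials; "ANOMALY" marks the event as anomalous):
# Lookoutequipment.create_label(string(uuid4()), end_time, "pump-labels", "ANOMALY", start_time, params)
```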
Main.Lookoutequipment.create_label_groupMethod
create_label_group(client_token, label_group_name)
create_label_group(client_token, label_group_name, params::Dict{String,<:Any})

Creates a group of labels.

Arguments

  • client_token: A unique identifier for the request to create a label group. If you do not set the client request token, Lookout for Equipment generates one.
  • label_group_name: Names a group of labels. Data in this field will be retained for service usage. Follow best practices for the security of your data.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "FaultCodes": The acceptable fault codes (indicating the type of anomaly associated with the label) that can be used with this label group. Data in this field will be retained for service usage. Follow best practices for the security of your data.
  • "Tags": Tags that provide metadata about the label group you are creating. Data in this field will be retained for service usage. Follow best practices for the security of your data.
source
Main.Lookoutequipment.create_modelMethod
create_model(client_token, dataset_name, model_name)
create_model(client_token, dataset_name, model_name, params::Dict{String,<:Any})

Creates a machine learning model for data inference. A machine-learning (ML) model is a mathematical model that finds patterns in your data. In Amazon Lookout for Equipment, the model learns the patterns of normal behavior and detects abnormal behavior that could be potential equipment failure (or maintenance events). The models are made by analyzing normal data and abnormalities in machine behavior that have already occurred. Your model is trained using a portion of the data from your dataset and uses that data to learn patterns of normal behavior and abnormal patterns that lead to equipment failure. Another portion of the data is used to evaluate the model's accuracy.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • dataset_name: The name of the dataset for the machine learning model being created.
  • model_name: The name for the machine learning model to be created.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DataPreProcessingConfiguration": The configuration is the TargetSamplingRate, which is the sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute. When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.
  • "DatasetSchema": The data schema for the machine learning model being created.
  • "EvaluationDataEndTime": Indicates the time reference in the dataset that should be used to end the subset of evaluation data for the machine learning model.
  • "EvaluationDataStartTime": Indicates the time reference in the dataset that should be used to begin the subset of evaluation data for the machine learning model.
  • "LabelsInputConfiguration": The input configuration for the labels being used for the machine learning model that's being created.
  • "ModelDiagnosticsOutputConfiguration": The Amazon S3 location where you want Amazon Lookout for Equipment to save the pointwise model diagnostics. You must also specify the RoleArn request parameter.
  • "OffCondition": Indicates that the asset associated with this sensor has been shut off. As long as this condition is met, Lookout for Equipment will not use data from this asset for training, evaluation, or inference.
  • "RoleArn": The Amazon Resource Name (ARN) of a role with permission to access the data source being used to create the machine learning model.
  • "ServerSideKmsKeyId": Provides the identifier of the KMS key used to encrypt model data by Amazon Lookout for Equipment.
  • "Tags": Any tags associated with the machine learning model being created.
  • "TrainingDataEndTime": Indicates the time reference in the dataset that should be used to end the subset of training data for the machine learning model.
  • "TrainingDataStartTime": Indicates the time reference in the dataset that should be used to begin the subset of training data for the machine learning model.
source
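A training/evaluation split and a resampling rate can be supplied through the optional parameters, as in this sketch. The dataset name, model name, and date ranges are illustrative, and the call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs, Dates

# Train on Jan-Jun, evaluate on Jul-Sep (an illustrative split),
# resampling the data to a 1 minute rate before training.
params = Dict{String,Any}(
    "TrainingDataStartTime"   => datetime2unix(DateTime(2023, 1, 1)),
    "TrainingDataEndTime"     => datetime2unix(DateTime(2023, 6, 30)),
    "EvaluationDataStartTime" => datetime2unix(DateTime(2023, 7, 1)),
    "EvaluationDataEndTime"   => datetime2unix(DateTime(2023, 9, 30)),
    "DataPreProcessingConfiguration" => Dict("TargetSamplingRate" => "PT1M"),
)

# The actual call (requires AWS credentials; names are hypothetical):
# Lookoutequipment.create_model(string(uuid4()), "pump-sensor-data", "pump-model", params)
```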
Main.Lookoutequipment.create_retraining_schedulerMethod
create_retraining_scheduler(client_token, lookback_window, model_name, retraining_frequency)
create_retraining_scheduler(client_token, lookback_window, model_name, retraining_frequency, params::Dict{String,<:Any})

Creates a retraining scheduler on the specified model.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • lookback_window: The number of past days of data that will be used for retraining.
  • model_name: The name of the model to add the retraining scheduler to.
  • retraining_frequency: This parameter uses the ISO 8601 standard to set the frequency at which you want retraining to occur in terms of Years, Months, and/or Days (note: other parameters like Time are not currently supported). The minimum value is 30 days (P30D) and the maximum value is 1 year (P1Y). For example, the following values are valid: P3M15D (every 3 months and 15 days), P2M (every 2 months), and P150D (every 150 days).

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "PromoteMode": Indicates how the service will use new models. In MANAGED mode, new models will automatically be used for inference if they have better performance than the current model. In MANUAL mode, the new models will not be used until they are manually activated.
  • "RetrainingStartDate": The start date for the retraining scheduler. Lookout for Equipment truncates the time you provide to the nearest UTC day.
source
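The ISO 8601 durations described above appear directly in the call, as in this sketch: retrain every 2 months on the past 180 days of data, automatically promoting better models. The model name is hypothetical and the call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs

# In MANAGED mode, better-performing new models are used automatically.
params = Dict{String,Any}("PromoteMode" => "MANAGED")

# lookback_window and retraining_frequency are ISO 8601 durations.
# The actual call (requires AWS credentials):
# Lookoutequipment.create_retraining_scheduler(string(uuid4()), "P180D", "pump-model", "P2M", params)
```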
Main.Lookoutequipment.delete_datasetMethod
delete_dataset(dataset_name)
delete_dataset(dataset_name, params::Dict{String,<:Any})

Deletes a dataset and associated artifacts. The operation will check to see if any inference scheduler or data ingestion job is currently using the dataset, and if there isn't, the dataset, its metadata, and any associated data stored in S3 will be deleted. This does not affect any models that used this dataset for training and evaluation, but does prevent it from being used in the future.

Arguments

  • dataset_name: The name of the dataset to be deleted.
source
Main.Lookoutequipment.delete_inference_schedulerMethod
delete_inference_scheduler(inference_scheduler_name)
delete_inference_scheduler(inference_scheduler_name, params::Dict{String,<:Any})

Deletes an inference scheduler that has been set up. Prior inference results will not be deleted.

Arguments

  • inference_scheduler_name: The name of the inference scheduler to be deleted.
source
Main.Lookoutequipment.delete_labelMethod
delete_label(label_group_name, label_id)
delete_label(label_group_name, label_id, params::Dict{String,<:Any})

Deletes a label.

Arguments

  • label_group_name: The name of the label group that contains the label that you want to delete. Data in this field will be retained for service usage. Follow best practices for the security of your data.
  • label_id: The ID of the label that you want to delete.
source
Main.Lookoutequipment.delete_label_groupMethod
delete_label_group(label_group_name)
delete_label_group(label_group_name, params::Dict{String,<:Any})

Deletes a group of labels.

Arguments

  • label_group_name: The name of the label group that you want to delete. Data in this field will be retained for service usage. Follow best practices for the security of your data.
source
Main.Lookoutequipment.delete_modelMethod
delete_model(model_name)
delete_model(model_name, params::Dict{String,<:Any})

Deletes a machine learning model currently available for Amazon Lookout for Equipment. This will prevent it from being used with an inference scheduler, even one that is already set up.

Arguments

  • model_name: The name of the machine learning model to be deleted.
source
Main.Lookoutequipment.delete_resource_policyMethod
delete_resource_policy(resource_arn)
delete_resource_policy(resource_arn, params::Dict{String,<:Any})

Deletes the resource policy attached to the resource.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the resource for which the resource policy should be deleted.
source
Main.Lookoutequipment.delete_retraining_schedulerMethod
delete_retraining_scheduler(model_name)
delete_retraining_scheduler(model_name, params::Dict{String,<:Any})

Deletes a retraining scheduler from a model. The retraining scheduler must be in the STOPPED status.

Arguments

  • model_name: The name of the model whose retraining scheduler you want to delete.
source
Main.Lookoutequipment.describe_data_ingestion_jobMethod
describe_data_ingestion_job(job_id)
describe_data_ingestion_job(job_id, params::Dict{String,<:Any})

Provides information on a specific data ingestion job such as creation time, dataset ARN, and status.

Arguments

  • job_id: The job ID of the data ingestion job.
source
Main.Lookoutequipment.describe_datasetMethod
describe_dataset(dataset_name)
describe_dataset(dataset_name, params::Dict{String,<:Any})

Provides a JSON description of the data in each time series dataset, including names, column names, and data types.

Arguments

  • dataset_name: The name of the dataset to be described.
source
Main.Lookoutequipment.describe_inference_schedulerMethod
describe_inference_scheduler(inference_scheduler_name)
describe_inference_scheduler(inference_scheduler_name, params::Dict{String,<:Any})

Specifies information about the inference scheduler being used, including name, model, status, and associated metadata.

Arguments

  • inference_scheduler_name: The name of the inference scheduler being described.
source
Main.Lookoutequipment.describe_labelMethod
describe_label(label_group_name, label_id)
describe_label(label_group_name, label_id, params::Dict{String,<:Any})

Returns the name of the label.

Arguments

  • label_group_name: Returns the name of the group containing the label.
  • label_id: Returns the ID of the label.
source
Main.Lookoutequipment.describe_label_groupMethod
describe_label_group(label_group_name)
describe_label_group(label_group_name, params::Dict{String,<:Any})

Returns information about the label group.

Arguments

  • label_group_name: Returns the name of the label group.
source
Main.Lookoutequipment.describe_modelMethod
describe_model(model_name)
describe_model(model_name, params::Dict{String,<:Any})

Provides a JSON containing the overall information about a specific machine learning model, including model name and ARN, dataset, training and evaluation information, status, and so on.

Arguments

  • model_name: The name of the machine learning model to be described.
source
Main.Lookoutequipment.describe_model_versionMethod
describe_model_version(model_name, model_version)
describe_model_version(model_name, model_version, params::Dict{String,<:Any})

Retrieves information about a specific machine learning model version.

Arguments

  • model_name: The name of the machine learning model that this version belongs to.
  • model_version: The version of the machine learning model.
source
Main.Lookoutequipment.describe_resource_policyMethod
describe_resource_policy(resource_arn)
describe_resource_policy(resource_arn, params::Dict{String,<:Any})

Provides the details of a resource policy attached to a resource.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the resource that is associated with the resource policy.
source
Main.Lookoutequipment.describe_retraining_schedulerMethod
describe_retraining_scheduler(model_name)
describe_retraining_scheduler(model_name, params::Dict{String,<:Any})

Provides a description of the retraining scheduler, including information such as the model name and retraining parameters.

Arguments

  • model_name: The name of the model that the retraining scheduler is attached to.
source
Main.Lookoutequipment.import_datasetMethod
import_dataset(client_token, source_dataset_arn)
import_dataset(client_token, source_dataset_arn, params::Dict{String,<:Any})

Imports a dataset.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • source_dataset_arn: The Amazon Resource Name (ARN) of the dataset to import.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetName": The name of the machine learning dataset to be created. If the dataset already exists, Amazon Lookout for Equipment overwrites the existing dataset. If you don't specify this field, it is filled with the name of the source dataset.
  • "ServerSideKmsKeyId": Provides the identifier of the KMS key used to encrypt model data by Amazon Lookout for Equipment.
  • "Tags": Any tags associated with the dataset to be created.
source
Main.Lookoutequipment.import_model_versionMethod
import_model_version(client_token, dataset_name, source_model_version_arn)
import_model_version(client_token, dataset_name, source_model_version_arn, params::Dict{String,<:Any})

Imports a model that has been trained successfully.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • dataset_name: The name of the dataset for the machine learning model being imported.
  • source_model_version_arn: The Amazon Resource Name (ARN) of the model version to import.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "InferenceDataImportStrategy": Indicates how to import the accumulated inference data when a model version is imported. The possible values are as follows: NO_IMPORT – don't import the data; ADD_WHEN_EMPTY – only import the data from the source model if there is no existing data in the target model; OVERWRITE – import the data from the source model and overwrite the existing data in the target model.
  • "LabelsInputConfiguration":
  • "ModelName": The name for the machine learning model to be created. If the model already exists, Amazon Lookout for Equipment creates a new version. If you do not specify this field, it is filled with the name of the source model.
  • "RoleArn": The Amazon Resource Name (ARN) of a role with permission to access the data source being used to create the machine learning model.
  • "ServerSideKmsKeyId": Provides the identifier of the KMS key used to encrypt model data by Amazon Lookout for Equipment.
  • "Tags": The tags associated with the machine learning model to be created.
source
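A model-version import preserving any existing inference data in the target might be sketched as below. The dataset name, target model name, and source ARN are illustrative, and the call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using UUIDs

params = Dict{String,Any}(
    "InferenceDataImportStrategy" => "ADD_WHEN_EMPTY",  # keep target data if any exists
    "ModelName" => "pump-model-imported",               # hypothetical target name
)

# The actual call (requires AWS credentials; the ARN is illustrative):
# Lookoutequipment.import_model_version(string(uuid4()), "pump-sensor-data",
#     "arn:aws:lookoutequipment:us-east-1:123456789012:model/pump-model/model-version/1", params)
```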
Main.Lookoutequipment.list_data_ingestion_jobsMethod
list_data_ingestion_jobs()
list_data_ingestion_jobs(params::Dict{String,<:Any})

Provides a list of all data ingestion jobs, including dataset name and ARN, S3 location of the input data, status, and so on.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetName": The name of the dataset being used for the data ingestion job.
  • "MaxResults": Specifies the maximum number of data ingestion jobs to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of data ingestion jobs.
  • "Status": Indicates the status of the data ingestion job.
source
Main.Lookoutequipment.list_datasetsMethod
list_datasets()
list_datasets(params::Dict{String,<:Any})

Lists all datasets currently available in your account, filtering on the dataset name.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetNameBeginsWith": The beginning of the name of the datasets to be listed.
  • "MaxResults": Specifies the maximum number of datasets to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of datasets.
source
Main.Lookoutequipment.list_inference_eventsMethod
list_inference_events(inference_scheduler_name, interval_end_time, interval_start_time)
list_inference_events(inference_scheduler_name, interval_end_time, interval_start_time, params::Dict{String,<:Any})

Lists all inference events that have been found for the specified inference scheduler.

Arguments

  • inference_scheduler_name: The name of the inference scheduler for the inference events listed.
  • interval_end_time: Returns all the inference events with a start time equal to or less than the end time given.
  • interval_start_time: Lookout for Equipment will return all the inference events with an end time equal to or greater than the start time given.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "MaxResults": Specifies the maximum number of inference events to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of inference events.
source
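The interval arguments are epoch timestamps, so a one-week query might be sketched as below. The scheduler name is hypothetical and the call is commented out because it requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment
using Dates

# Query a one-week window, expressed as epoch seconds.
interval_start = datetime2unix(DateTime(2023, 5, 1))
interval_end   = datetime2unix(DateTime(2023, 5, 8))
params = Dict{String,Any}("MaxResults" => 50)

# The actual call (requires AWS credentials):
# resp = Lookoutequipment.list_inference_events("pump-scheduler", interval_end, interval_start, params)
# events = get(resp, "InferenceEventSummaries", [])
```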
Main.Lookoutequipment.list_inference_executionsMethod
list_inference_executions(inference_scheduler_name)
list_inference_executions(inference_scheduler_name, params::Dict{String,<:Any})

Lists all inference executions that have been performed by the specified inference scheduler.

Arguments

  • inference_scheduler_name: The name of the inference scheduler for the inference execution listed.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DataEndTimeBefore": The time reference in the inferenced dataset before which Amazon Lookout for Equipment stopped the inference execution.
  • "DataStartTimeAfter": The time reference in the inferenced dataset after which Amazon Lookout for Equipment started the inference execution.
  • "MaxResults": Specifies the maximum number of inference executions to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of inference executions.
  • "Status": The status of the inference execution.
source
Main.Lookoutequipment.list_inference_schedulersMethod
list_inference_schedulers()
list_inference_schedulers(params::Dict{String,<:Any})

Retrieves a list of all inference schedulers currently available for your account.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "InferenceSchedulerNameBeginsWith": The beginning of the name of the inference schedulers to be listed.
  • "MaxResults": Specifies the maximum number of inference schedulers to list.
  • "ModelName": The name of the machine learning model used by the inference scheduler to be listed.
  • "NextToken": An opaque pagination token indicating where to continue the listing of inference schedulers.
  • "Status": Specifies the current status of the inference schedulers.
source
Main.Lookoutequipment.list_label_groupsMethod
list_label_groups()
list_label_groups(params::Dict{String,<:Any})

Returns a list of the label groups.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "LabelGroupNameBeginsWith": The beginning of the name of the label groups to be listed.
  • "MaxResults": Specifies the maximum number of label groups to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of label groups.
source
Main.Lookoutequipment.list_labelsMethod
list_labels(label_group_name)
list_labels(label_group_name, params::Dict{String,<:Any})

Provides a list of labels.

Arguments

  • label_group_name: Returns the name of the label group.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "Equipment": Lists the labels that pertain to a particular piece of equipment.
  • "FaultCode": Returns labels with a particular fault code.
  • "IntervalEndTime": Returns all labels with a start time earlier than the end time given.
  • "IntervalStartTime": Returns all the labels with an end time equal to or later than the start time given.
  • "MaxResults": Specifies the maximum number of labels to list.
  • "NextToken": An opaque pagination token indicating where to continue the listing of labels.
source
Main.Lookoutequipment.list_model_versionsMethod
list_model_versions(model_name)
list_model_versions(model_name, params::Dict{String,<:Any})

Generates a list of all model versions for a given model, including the model version, model version ARN, and status. To list a subset of versions, use the MaxModelVersion and MinModelVersion fields.

Arguments

  • model_name: The name of the machine learning model for which the model versions are to be listed.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "CreatedAtEndTime": Filter results to return all the model versions created before this time.
  • "CreatedAtStartTime": Filter results to return all the model versions created after this time.
  • "MaxModelVersion": Specifies the highest version of the model to return in the list.
  • "MaxResults": Specifies the maximum number of machine learning model versions to list.
  • "MinModelVersion": Specifies the lowest version of the model to return in the list.
  • "NextToken": If the total number of results exceeds the limit that the response can display, the response returns an opaque pagination token indicating where to continue the listing of machine learning model versions. Use this token in the NextToken field in the request to list the next page of results.
  • "SourceType": Filter the results based on the way the model version was generated.
  • "Status": Filter the results based on the current status of the model version.
source
Main.Lookoutequipment.list_modelsMethod
list_models()
list_models(params::Dict{String,<:Any})

Generates a list of all models in the account, including model name and ARN, dataset, and status.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DatasetNameBeginsWith": The beginning of the name of the dataset of the machine learning models to be listed.
  • "MaxResults": Specifies the maximum number of machine learning models to list.
  • "ModelNameBeginsWith": The beginning of the name of the machine learning models being listed.
  • "NextToken": An opaque pagination token indicating where to continue the listing of machine learning models.
  • "Status": The status of the machine learning model.
source
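Because responses are capped by MaxResults, listing everything means following NextToken until it disappears. A small helper sketching that loop is shown below; `paginate` and its arguments are illustrative names, and the commented usage line requires AWS credentials:

```julia
# One-time setup (assumed): using AWS; @service Lookoutequipment

# Generic NextToken pagination helper. `fetch` is any function that takes a
# params Dict and returns a response Dict (e.g. Lookoutequipment.list_models);
# `key` names the result list inside each response page.
function paginate(fetch, params::Dict{String,Any}, key::String)
    items = Any[]
    while true
        resp = fetch(params)
        append!(items, get(resp, key, []))
        token = get(resp, "NextToken", nothing)
        token === nothing && break
        params["NextToken"] = token  # continue from where the last page ended
    end
    return items
end

# Usage (requires AWS credentials; the prefix is illustrative):
# models = paginate(p -> Lookoutequipment.list_models(p),
#                   Dict{String,Any}("ModelNameBeginsWith" => "pump"), "ModelSummaries")
```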
Main.Lookoutequipment.list_retraining_schedulersMethod
list_retraining_schedulers()
list_retraining_schedulers(params::Dict{String,<:Any})

Lists all retraining schedulers in your account, filtering by model name prefix and status.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "MaxResults": Specifies the maximum number of retraining schedulers to list.
  • "ModelNameBeginsWith": Specify this field to only list retraining schedulers whose machine learning models begin with the value you specify.
  • "NextToken": If the number of results exceeds the maximum, a pagination token is returned. Use the token in the request to show the next page of retraining schedulers.
  • "Status": Specify this field to only list retraining schedulers whose status matches the value you specify.
source
Main.Lookoutequipment.list_sensor_statisticsMethod
list_sensor_statistics(dataset_name)
list_sensor_statistics(dataset_name, params::Dict{String,<:Any})

Lists statistics about the data collected for each of the sensors that have been successfully ingested in the particular dataset. Can also be used to retrieve Sensor Statistics for a previous ingestion job.

Arguments

  • dataset_name: The name of the dataset associated with the list of Sensor Statistics.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "IngestionJobId": The ingestion job id associated with the list of Sensor Statistics. To get sensor statistics for a particular ingestion job id, both dataset name and ingestion job id must be submitted as inputs.
  • "MaxResults": Specifies the maximum number of sensors for which to retrieve statistics.
  • "NextToken": An opaque pagination token indicating where to continue the listing of sensor statistics.
source
Main.Lookoutequipment.list_tags_for_resourceMethod
list_tags_for_resource(resource_arn)
list_tags_for_resource(resource_arn, params::Dict{String,<:Any})

Lists all the tags for a specified resource, including key and value.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the resource (such as the dataset or model) that is the focus of the ListTagsForResource operation.
source
Main.Lookoutequipment.put_resource_policyMethod
put_resource_policy(client_token, resource_arn, resource_policy)
put_resource_policy(client_token, resource_arn, resource_policy, params::Dict{String,<:Any})

Creates a resource control policy for a given resource.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • resource_arn: The Amazon Resource Name (ARN) of the resource for which the policy is being created.
  • resource_policy: The JSON-formatted resource policy to create.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "PolicyRevisionId": A unique identifier for a revision of the resource policy.
source
Main.Lookoutequipment.start_data_ingestion_jobMethod
start_data_ingestion_job(client_token, dataset_name, ingestion_input_configuration, role_arn)
start_data_ingestion_job(client_token, dataset_name, ingestion_input_configuration, role_arn, params::Dict{String,<:Any})

Starts a data ingestion job. Amazon Lookout for Equipment returns the job status.

Arguments

  • client_token: A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.
  • dataset_name: The name of the dataset being used by the data ingestion job.
  • ingestion_input_configuration: Specifies information for the input data for the data ingestion job, including dataset S3 location.
  • role_arn: The Amazon Resource Name (ARN) of a role with permission to access the data source for the data ingestion job.
source
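A sketch of kicking off an ingestion job, assuming the `@service Lookoutequipment` setup; the bucket, dataset, and role ARN are hypothetical:

```julia
using AWS: @service
using UUIDs
@service Lookoutequipment

resp = Lookoutequipment.start_data_ingestion_job(
    string(uuid4()),                 # client token
    "my-equipment-dataset",
    # Ingestion input: where the source data lives in S3.
    Dict{String,Any}("S3InputConfiguration" => Dict(
        "Bucket" => "my-ingest-bucket",
        "Prefix" => "sensor-data/",
    )),
    "arn:aws:iam::123456789012:role/LookoutEquipmentAccess",
)
println(resp["Status"])              # job status returned by the service
```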
Main.Lookoutequipment.start_inference_schedulerMethod
start_inference_scheduler(inference_scheduler_name)
start_inference_scheduler(inference_scheduler_name, params::Dict{String,<:Any})

Starts an inference scheduler.

Arguments

  • inference_scheduler_name: The name of the inference scheduler to be started.
source
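A one-liner sketch, assuming the `@service Lookoutequipment` setup and a hypothetical scheduler name:

```julia
using AWS: @service
@service Lookoutequipment

resp = Lookoutequipment.start_inference_scheduler("my-scheduler")
println(resp["Status"])   # scheduler status reported by the service
```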
Main.Lookoutequipment.start_retraining_schedulerMethod
start_retraining_scheduler(model_name)
start_retraining_scheduler(model_name, params::Dict{String,<:Any})

Starts a retraining scheduler.

Arguments

  • model_name: The name of the model whose retraining scheduler you want to start.
source
Main.Lookoutequipment.stop_inference_schedulerMethod
stop_inference_scheduler(inference_scheduler_name)
stop_inference_scheduler(inference_scheduler_name, params::Dict{String,<:Any})

Stops an inference scheduler.

Arguments

  • inference_scheduler_name: The name of the inference scheduler to be stopped.
source
Main.Lookoutequipment.stop_retraining_schedulerMethod
stop_retraining_scheduler(model_name)
stop_retraining_scheduler(model_name, params::Dict{String,<:Any})

Stops a retraining scheduler.

Arguments

  • model_name: The name of the model whose retraining scheduler you want to stop.
source
Main.Lookoutequipment.tag_resourceMethod
tag_resource(resource_arn, tags)
tag_resource(resource_arn, tags, params::Dict{String,<:Any})

Associates a given tag with a resource in your account. A tag is a key-value pair that can be added to an Amazon Lookout for Equipment resource as metadata. Tags can be used to organize your resources and to search and filter by tag. Multiple tags can be added to a resource, either when you create it or later. Up to 50 tags can be associated with each resource.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the specific resource to which the tag should be associated.
  • tags: The tag or tags to be associated with a specific resource. Both the tag key and value are specified.
source
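A sketch of tagging a resource, assuming the `@service Lookoutequipment` setup; the ARN and tag values are hypothetical:

```julia
using AWS: @service
@service Lookoutequipment

# Tags are passed as a list of Key/Value pairs.
Lookoutequipment.tag_resource(
    "arn:aws:lookoutequipment:us-east-1:123456789012:dataset/my-dataset/0123abcd",
    [Dict("Key" => "Team", "Value" => "Reliability")],
)
```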
Main.Lookoutequipment.untag_resourceMethod
untag_resource(resource_arn, tag_keys)
untag_resource(resource_arn, tag_keys, params::Dict{String,<:Any})

Removes a specific tag from a given resource. The tag is specified by its key.

Arguments

  • resource_arn: The Amazon Resource Name (ARN) of the resource to which the tag is currently associated.
  • tag_keys: Specifies the key of the tag to be removed from a specified resource.
source
Main.Lookoutequipment.update_active_model_versionMethod
update_active_model_version(model_name, model_version)
update_active_model_version(model_name, model_version, params::Dict{String,<:Any})

Sets the active model version for a given machine learning model.

Arguments

  • model_name: The name of the machine learning model for which the active model version is being set.
  • model_version: The version of the machine learning model for which the active model version is being set.
source
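A minimal sketch, assuming the `@service Lookoutequipment` setup; the model name and version number are hypothetical:

```julia
using AWS: @service
@service Lookoutequipment

# Promote version 3 of the model to be the active version.
Lookoutequipment.update_active_model_version("my-model", 3)
```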
Main.Lookoutequipment.update_inference_schedulerMethod
update_inference_scheduler(inference_scheduler_name)
update_inference_scheduler(inference_scheduler_name, params::Dict{String,<:Any})

Updates an inference scheduler.

Arguments

  • inference_scheduler_name: The name of the inference scheduler to be updated.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "DataDelayOffsetInMinutes": A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until five minutes after the first data measurement. The scheduler then wakes up at the configured frequency plus the five-minute delay before checking the customer S3 bucket, so data can be uploaded at the same frequency without stopping and restarting the scheduler.
  • "DataInputConfiguration": Specifies information for the input data for the inference scheduler, including delimiter, format, and dataset location.
  • "DataOutputConfiguration": Specifies information for the output results from the inference scheduler, including the output S3 location.
  • "DataUploadFrequency": How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.
  • "RoleArn": The Amazon Resource Name (ARN) of a role with permission to access the data source for the inference scheduler.
source
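A sketch of updating a scheduler's timing, assuming the `@service Lookoutequipment` setup; the scheduler name and values are hypothetical:

```julia
using AWS: @service
@service Lookoutequipment

Lookoutequipment.update_inference_scheduler(
    "my-scheduler",
    Dict{String,Any}(
        "DataDelayOffsetInMinutes" => 5,     # wait 5 minutes after data starts
        "DataUploadFrequency" => "PT10M",    # check for new data every 10 minutes
    ),
)
```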
Main.Lookoutequipment.update_label_groupMethod
update_label_group(label_group_name)
update_label_group(label_group_name, params::Dict{String,<:Any})

Updates the label group.

Arguments

  • label_group_name: The name of the label group to be updated.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "FaultCodes": Updates the code indicating the type of anomaly associated with the label. Data in this field will be retained for service usage. Follow best practices for the security of your data.
source
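A sketch of replacing a label group's fault codes, assuming the `@service Lookoutequipment` setup; the group name and codes are hypothetical:

```julia
using AWS: @service
@service Lookoutequipment

Lookoutequipment.update_label_group(
    "my-label-group",
    Dict{String,Any}("FaultCodes" => ["BEARING_WEAR", "OVERHEAT"]),
)
```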
Main.Lookoutequipment.update_modelMethod
update_model(model_name)
update_model(model_name, params::Dict{String,<:Any})

Updates a model in the account.

Arguments

  • model_name: The name of the model to update.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "LabelsInputConfiguration":
  • "ModelDiagnosticsOutputConfiguration": The Amazon S3 location where you want Amazon Lookout for Equipment to save the pointwise model diagnostics for the model. You must also specify the RoleArn request parameter.
  • "RoleArn": The Amazon Resource Name (ARN) of a role with permission to access the data source for the model being updated.
source
Main.Lookoutequipment.update_retraining_schedulerMethod
update_retraining_scheduler(model_name)
update_retraining_scheduler(model_name, params::Dict{String,<:Any})

Updates a retraining scheduler.

Arguments

  • model_name: The name of the model whose retraining scheduler you want to update.

Optional Parameters

Optional parameters can be passed as a params::Dict{String,<:Any}. Valid keys are:

  • "LookbackWindow": The number of past days of data that will be used for retraining.
  • "PromoteMode": Indicates how the service will use new models. In MANAGED mode, new models will automatically be used for inference if they have better performance than the current model. In MANUAL mode, the new models will not be used until they are manually activated.
  • "RetrainingFrequency": This parameter uses the ISO 8601 standard to set the frequency at which you want retraining to occur in terms of Years, Months, and/or Days (note: other parameters like Time are not currently supported). The minimum value is 30 days (P30D) and the maximum value is 1 year (P1Y). For example, the following values are valid: P3M15D (every 3 months and 15 days), P2M (every 2 months), and P150D (every 150 days).
  • "RetrainingStartDate": The start date for the retraining scheduler. Lookout for Equipment truncates the time you provide to the nearest UTC day.
source
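A sketch of updating a retraining scheduler, assuming the `@service Lookoutequipment` setup; the model name and duration values are hypothetical:

```julia
using AWS: @service
@service Lookoutequipment

Lookoutequipment.update_retraining_scheduler(
    "my-model",
    Dict{String,Any}(
        "RetrainingFrequency" => "P3M",   # retrain every 3 months (ISO 8601 duration)
        "PromoteMode" => "MANUAL",        # new versions require manual activation
        "LookbackWindow" => "P180D",      # hypothetical 180-day training window
    ),
)
```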