Anomaly Detection Service – Sample Application¶
In this sample, a compressor represented by an asset has to be monitored. The compressor has different working modes, e.g. high pressure, low pressure and hold-up. A notification shall be triggered if the compressor does not behave normally, for instance if the temperature does not rise as much as it should when the pressure goes up. Using the Anomaly Detection Service, a detector shall be trained to detect abnormal behavior of the asset. Later, this detector will be applied to all assets of this type.
Preparation¶
- Create the following files with the provided content for a simple HTML/js sample application:
  - index.html for the page layout
  - main.css for the page style
  - main.js for the methods that call the model training and the model reasoning, as well as hard-coded test data (a minimal sketch of these calls follows this list)
- Create an empty Staticfile and an empty manifest file so the folder looks like this:
  index.html
  main.css
  main.js
  manifest.yml
  Staticfile
- Fill the manifest file with the content below:
  manifest.yml
  applications:
  - name: htmltest
    instances: 1
    host: htmltest
    path: ./
    memory: 64m
- Deploy the sample application to Cloud Foundry as described in Running a Cloud Foundry Application. Make sure to assign it the mdsp:core:analytics.user role.
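The methods in main.js could be sketched as follows. This is only a sketch: the gateway URL, the endpoint paths and the query parameter names are assumptions that have to be adapted to the Anomaly Detection Service API of your environment, and authentication handling is omitted.

// main.js – sketch of the service calls. The gateway URL, endpoint paths and
// query parameter names below are assumptions and must be adapted to your environment.

var gatewayUrl = "https://gateway.eu1.mindsphere.io"; // placeholder, adapt to your region

// Trains a DBSCAN model on the hard-coded training data.
function trainModel(trainingData, epsilon, minClusterSize, distanceMeasure) {
  var url = gatewayUrl + "/api/anomalydetection/v3/models" +
    "?epsilon=" + epsilon +
    "&minPointsToDefineCluster=" + minClusterSize +
    "&distanceMeasureAlgorithm=" + distanceMeasure;
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(trainingData)
  }).then(function (response) { return response.json(); });
}

// Applies a previously trained model to new measurements.
function detectAnomalies(modelId, scoringData) {
  var url = gatewayUrl + "/api/anomalydetection/v3/detectanomalies?modelID=" + modelId;
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(scoringData)
  }).then(function (response) { return response.json(); });
}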
Training Phase¶
Training Data¶
The training data is provided in main.js as a JSON data set containing 397 measurements obtained at 1 Hz. Each measurement consists of a timestamp, a temperature record and a pressure record, as follows:
{"_time" : "2018-01-11T14:30:00.000Z", "temperature" : "1.25", "pressure" : "2.848605577689243" },
Trigger Training¶
Trigger the training phase by setting the DBSCAN parameters: select Euclidean as the distance measure algorithm, set the minimum cluster size to 4 and the epsilon value to 0.5. If the training is successful, it returns the model ID of the newly created model. The Model Management Service is used for model storage and automatically sets the expiration date of a model to 14 days. This parameter might be changed in the future.
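With the trainModel sketch from the Preparation section, triggering the training with the parameters above could look like this. The shape of the response, e.g. the id field, is an assumption.

// Train with the Euclidean distance measure, a minimum cluster size of 4
// and an epsilon value of 0.5.
trainModel(trainingData, 0.5, 4, "EUCLIDEAN")
  .then(function (model) {
    // The id field is an assumption; the service returns the model ID
    // of the newly created model, which is needed later for scoring.
    console.log("Created model: " + model.id);
  });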
Scoring Phase¶
Scoring Data¶
The scoring data is provided in main.js as a JSON data set containing 40 measurements with the same frequency and features as the training data.
Trigger Scoring¶
The model can now be used to evaluate whether new measurements (other than the training data) are anomalies. The Anomaly Detection Service outputs a list of outliers, each with a timestamp and the likelihood of being considered an anomaly:
{'_time': '2018-01-11T14:36:25Z', 'anomalyExtent': 0.2659295810254081},
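Using the detectAnomalies sketch from the Preparation section, the scoring call and the handling of the returned outlier list could look like this; the field names are taken from the sample output above, while modelId and scoringData are assumed to hold the trained model ID and the hard-coded scoring records.

// Apply the trained model to the 40 hard-coded scoring measurements.
detectAnomalies(modelId, scoringData)
  .then(function (anomalies) {
    anomalies.forEach(function (outlier) {
      // Each outlier carries the timestamp of the measurement and the
      // likelihood of being an anomaly.
      console.log(outlier._time + " -> anomalyExtent: " + outlier.anomalyExtent);
    });
  });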
In the following figure, the training data and the test data are plotted as 2D points, where Feature 1 and Feature 2 represent the temperature and the pressure, respectively. The red points are outliers and their size represents the anomaly likelihood: the further away a point is from the cluster borders, the more likely it is to be an anomaly.