The MindConnect Service exposes an API that enables shop floor devices to send data securely and reliably to MindSphere. It allows custom applications (agents) to collect and upload data, which is then stored in the cloud and used by applications there.
To access this service, you need the respective roles listed in MindConnect API roles and scopes.
Agents need a field-side network infrastructure that forwards and routes outbound HTTPS requests to the Internet.
MindConnect supports multiple agent device classes: powerful hardware platforms as well as resource-constrained devices. All target agent platforms must comply with the following minimum requirements:
- HTTP processing
- JSON parsing
- JSON Web Token (JWT) generation
- HMAC generation (preferably SHA-2 based hashing)
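To illustrate the last two requirements, an HS256 JWT can be generated with standard-library primitives alone. The header fields, claim names, and audience value below are placeholders, not the exact token layout MindSphere expects for agent onboarding:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as required by the JWT format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_jwt(client_id: str, tenant: str, secret: bytes) -> str:
    # Header and claims are illustrative; the actual claim set depends on
    # the agent's onboarding configuration.
    header = {"typ": "JWT", "alg": "HS256"}
    now = int(time.time())
    claims = {
        "iss": client_id,
        "sub": client_id,
        "aud": "agent-iam",  # hypothetical audience value
        "iat": now,
        "exp": now + 3600,
        "ten": tenant,
    }
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    # HMAC-SHA256 over "header.claims", appended as the third segment.
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_jwt("agent-123", "tenantA", b"shared-secret")
```

The same `hmac`/`hashlib` primitives cover the separate HMAC-generation requirement as well.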
Users with IP-based filtering on their firewall can use the following two static IPs for whitelisting the data upload traffic:
Enabling these two IP addresses is sufficient for agents and clients accessing *.eu1.mindsphere.io.
Region China 1
To access *.cn1.mindsphere-in.cn, add the following IP address to your firewall rules instead:
If the agent or client uses certificate revocation list URLs, these have to be whitelisted as well. Note that this does not cover the interactive login process via WebKey or native access to IDL (AWS S3).
An SSL/TLS implementation may want to check the certificate revocation lists of Certificate Authorities. If the implementation tries to fetch a certificate revocation list at runtime, it needs access to an external URL, which the CA uses to distribute revoked certificates. More commonly, however, revocation lists are updated through operating system or Java updates.
Data Source Configuration
A data source configuration is needed for interpreting the data MindSphere receives from an agent. This configuration contains data sources and data points. Data sources are logical groups, e.g. a sensor or a machine, which contain one or more measurable data points, e.g. temperature or pressure.
The data source configuration is defined using the Agent Management Service. For instructions refer to Creating a Data Source Configuration.
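As a sketch, a configuration grouping two data points under one logical data source might look like the following. The field names are modeled on the concepts above (data sources containing data points) and are not guaranteed to match the Agent Management Service schema exactly:

```python
import json

# Hypothetical configuration: one data source ("Sensor") with two data points.
data_source_configuration = {
    "configurationId": "config-001",  # illustrative identifier
    "dataSources": [
        {
            "name": "Sensor",
            "description": "Environmental sensor on the machine",
            "dataPoints": [
                {"id": "DP-Temp", "name": "Temperature", "type": "DOUBLE", "unit": "C"},
                {"id": "DP-Pres", "name": "Pressure", "type": "DOUBLE", "unit": "kPa"},
            ],
        }
    ],
}

# Serialized form as it would travel in an HTTPS request body.
payload = json.dumps(data_source_configuration, indent=2)
```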
Data Point Mapping
Data point mapping is required for storing the data MindSphere receives from an agent. It maps the data points from the data source configuration to properties of the digital entity that represents the agent. When MindSphere receives data from an agent, it looks up which property the data point is mapped to and stores the data there.
Use the MindConnect Service for defining the data point mapping. For instructions refer to Creating a Data Point Mapping.
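The lookup described above can be sketched as a simple table from data point IDs to entity properties. The IDs, entity ID, and property names here are hypothetical:

```python
# Hypothetical mapping from data points (agent side) to properties of the
# digital entity that represents the agent (MindSphere side).
data_point_mappings = [
    {"dataPointId": "DP-Temp", "entityId": "entity-abc", "propertyName": "temperature"},
    {"dataPointId": "DP-Pres", "entityId": "entity-abc", "propertyName": "pressure"},
]

def property_for(data_point_id: str) -> str:
    """Look up which entity property an incoming data point is stored to."""
    for mapping in data_point_mappings:
        if mapping["dataPointId"] == data_point_id:
            return mapping["propertyName"]
    raise KeyError(data_point_id)
```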
The MindConnect API allows agents to upload their data to MindSphere. This data can be of type:
- Time Series
The format conforms to a subset of the HTTP multipart specification, but permits only two levels of nesting. For instructions refer to Uploading Agent Data.
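A two-level multipart body could be assembled as below. The boundaries, part headers, and content types are illustrative only; the exact wire format is defined in Uploading Agent Data:

```python
import json

def build_exchange_body(meta: dict, payload: bytes) -> bytes:
    """Assemble a two-level multipart body: an outer multipart part wrapping
    an inner multipart that pairs a JSON metadata part with its binary
    payload. Boundaries and headers are illustrative, not the exact format."""
    outer = b"outerboundary"
    inner = b"innerboundary"
    meta_json = json.dumps(meta).encode()
    inner_body = (
        b"--" + inner + b"\r\nContent-Type: application/json\r\n\r\n" + meta_json + b"\r\n" +
        b"--" + inner + b"\r\nContent-Type: application/octet-stream\r\n\r\n" + payload + b"\r\n" +
        b"--" + inner + b"--\r\n"
    )
    return (
        b"--" + outer + b"\r\nContent-Type: multipart/related; boundary=innerboundary\r\n\r\n" +
        inner_body +
        b"--" + outer + b"--\r\n"
    )

body = build_exchange_body({"type": "item"}, b"42.0")
```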
Standard data types
The MindConnect Service uses standard data types, which allow MindSphere to automatically process the data without additional configuration or coding. This means:
- The API defines how standard data types are transmitted, e.g. how metadata and production data need to be formatted as HTTPS payloads.
- Standard data types are automatically parsed and the information is stored to (virtual) assets.
- For each of the standard data types, there is a preconfigured mass data storage available.
- Data of standard types can be accessed and queried in a standardized way by applications and analytical tools.
The following standard data types for production data are supported:
- Time Series
Time Series are data point values that change constantly over time, e.g. values from analog sensors like a temperature sensor. This also applies to any other measured values that have an associated timestamp.
- Events
Events are based on machine events, e.g. emergency stops or machine failures. However, this mechanism can also be used to upload custom notifications, e.g. if you do on-site threshold monitoring and want to report an exceeded threshold.
- Files
Files of up to 10 MB can be uploaded per exchange call. The files are attached to the corresponding (virtual) asset, e.g. device log files or complex sensor structures. Uploaded files can be referenced by the parent (virtual) asset. The content of these files is not parsed by MindSphere; custom applications or analytical tools are required to interpret and visualize the data.
- Data Models
Data models describe the agent-side asset hierarchy and configuration including measurement points.
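To make the first two types concrete, time series records and a custom threshold event could be serialized as JSON like this. The field names, data point names, and severity scale are illustrative, not the exact MindSphere schema:

```python
import json

# Illustrative time series: each record pairs a timestamp with the values
# of the measured data points at that instant.
time_series = [
    {"timestamp": "2020-01-01T12:00:00.000Z",
     "values": {"Temperature": 21.5, "Pressure": 101.3}},
    {"timestamp": "2020-01-01T12:00:01.000Z",
     "values": {"Temperature": 21.6, "Pressure": 101.2}},
]

# Illustrative custom event reporting an on-site threshold violation.
event = {
    "timestamp": "2020-01-01T12:00:05.000Z",
    "severity": 20,  # hypothetical severity scale
    "description": "Temperature threshold exceeded",
    "source": "Sensor",
}

ts_payload = json.dumps(time_series)
event_payload = json.dumps(event)
```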
The MindConnect Service exposes its API to agents to perform the following tasks:
- Upload time series
- Upload files
- Describe and upload asset data models
- Upload data of custom data types for custom handling
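Before uploading a file, an agent can run a pre-flight check against the 10 MB per-exchange limit mentioned above. The constant reflects the documented limit (assuming binary megabytes); the helper name is our own:

```python
# Documented limit of 10 MB per exchange call (assuming binary megabytes).
MAX_EXCHANGE_FILE_BYTES = 10 * 1024 * 1024

def fits_single_exchange(size_bytes: int) -> bool:
    """Check whether a file of the given size can go into one exchange call."""
    return 0 <= size_bytes <= MAX_EXCHANGE_FILE_BYTES
```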
The manager of a wind farm wants to collect sensor data of a wind turbine. A developer implements a field application (agent) which collects the sensor data. The agent uses the MindConnect API for uploading the data to MindSphere.
Except where otherwise noted, content on this site is licensed under the Development License Agreement.