Integrating an AWS Cloud via MindConnect Integration¶
MindConnect Integration can transfer data from any cloud storage service into Insights Hub. In this example, multiple variable definitions for an aspect are read from a CSV file, which is stored in an AWS S3 bucket. A pipeline is set up to automatically create an asset, which has an aspect with these variables.
General Information
Duration: 60 mins
Tested with version: Release Notes 8th October 2018.
Prerequisites¶
- An Insights Hub account
- The MindConnect Integration application
- An Amazon S3 account with access to AWS Identity and Access Management
- The following roles must be assigned to your user in user management:
  - mdsp:core:TenantAdmin
  - mdsp:core:mci:admin or mdsp:core:mci.user
Preparing Data in AWS¶
This section describes how to create an S3 bucket in AWS and upload aspect data from a CSV file to it.
Creating an AWS User with API Access¶
- Open the AWS IAM console via https://console.aws.amazon.com/iam/ (login required).
- Choose "Users" and then "Add user" in the navigation pane.
- Enter a user name for the user.
- Select "Programmatic access".
- Click on "Next: Permissions".
- Select "Attach existing policies to user directly" and pick the "AmazonS3ReadOnlyAccess" policy. (You can update the policies later, if necessary.)
- Finish the process by clicking "Next: Review" and then "Create user".
- Download the access key ID and secret access key and save them. You will not have access to these keys again after this step.
The generated access keys provide access to the AWS S3 APIs.
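Outside the console, these keys are commonly stored in a local AWS credentials profile so that CLI and SDK tools can pick them up. The profile name below is an example, and the key values are AWS's documented placeholder keys:

```ini
# ~/.aws/credentials -- the profile name "mindsphere-tutorial" is an example
[mindsphere-tutorial]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```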
Creating an S3 Bucket¶
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- Click on "Create bucket".
- Enter a bucket name and select an AWS region, e.g. "sensor-bucket-mindsphere" and "EU (Frankfurt)".
- Do not make any other configurations and click on "Create".
The new S3 Bucket is available in the Amazon S3 console.
Uploading a CSV file to the S3 Bucket¶
- Create a CSV file for an aspect with multiple variables using the same format as shown below:

  AspectName;SensorType;DataType;Unit;
  breweryAspect;temperature;DOUBLE;°C;
  breweryAspect;motorVoltage;DOUBLE;V;
  breweryAspect;fluidFlow;DOUBLE;m³/h;
  breweryAspect;pressureBefore;DOUBLE;bar;
  breweryAspect;pressureAfter;DOUBLE;bar;
- Select your S3 bucket (here "sensor-bucket-mindsphere") in AWS.
- Drag and drop the CSV file into the Amazon S3 console window.
- Click "Upload".
The CSV file has been uploaded to your S3 bucket.
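Before uploading, the CSV can be sanity-checked locally. A minimal sketch using Python's csv module, assuming the semicolon-delimited layout shown above (the sample content is abbreviated):

```python
import csv
import io

# Abbreviated sample in the same semicolon-delimited layout as the aspect CSV.
SAMPLE = (
    "AspectName;SensorType;DataType;Unit;\n"
    "breweryAspect;temperature;DOUBLE;°C;\n"
    "breweryAspect;motorVoltage;DOUBLE;V;\n"
)

def parse_aspect_csv(text: str) -> list[dict]:
    """Parse the aspect definition CSV into one dict per variable row."""
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    rows = []
    for row in reader:
        row.pop("", None)  # drop the empty column created by the trailing ';'
        rows.append(dict(row))
    return rows

rows = parse_aspect_csv(SAMPLE)
print(rows[0]["SensorType"])  # temperature
```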
Integrating an Amazon S3 Bucket into MindConnect¶
A MindConnect integration is set up in three steps:
- Creating an account: MindConnect Integration requires an account for storing access information to connect to other cloud-based applications, e.g. Amazon S3 bucket.
- Adding an operation: MindConnect Integration uses application specific operations for reading or writing data to the connected cloud-based applications. Each operation performs one application specific task.
- Creating a pipeline: MindConnect Integration uses pipelines to define processes for transmitting, interpreting and transforming data. Pipelines can combine multiple operations and other pipelines to define multistep workflows.
Creating a MindConnect Integration Account¶
- Open MindConnect Integration from the Launchpad.
- Log in with your MindConnect Integration credentials.
- Go to "Connect".
- Open the application "Amazon Simple Storage Service (S3)".
- Click on "Add New Account".
- Enter a name for the account, e.g. "My_S3_Bucket".
- Enter the access key ID and secret access key generated before.
- Do not change any other settings.
- Click "Save".
The account has been created.
Adding a new Operation¶
- Switch to the "OPERATIONS" tab and click on "Add New Operation".
- Enter a name for your operation, e.g. "RetrieveS3Object".
- Select the account (here "My_S3_Bucket") from the drop-down and click on "Next".
- Select "GetObject" from the list of operations.
- Click "Next" without making any further configurations in the following dialogues.
- Check the information and click "Finish".
The "GetObject" operation is now visible in the list.
Setting up a Pipeline¶
This pipeline transfers the data from the S3 bucket into MindConnect. It is constructed using building blocks, which first retrieve the data from the CSV file, then convert it into bytes and finally store them in a document. The building blocks, as well as the input and the output for the pipeline itself, are configured after the assembly.
- Switch to the "INTEGRATIONS" tab and click on "Add New Integration".
- Select "Orchestrate two or more applications" in the pop-up window.
- Enter a name for your integration, e.g. "My_S3_Bucket_Pipeline".
- Open "Applications" in the tool bar on the left.
- Search for "Amazon Simple Storage Service (S3)" and drag it under the integration block so it connects with the anchor point.
- Click on the settings icon of this block and select the account (here "My_S3_Bucket") and operation (here "RetrieveS3Object") from the drop-down menus.
- Open "Services" in the tool bar on the left.
- Drag "IO" from Services under the "Amazon Simple Storage Service (S3)" block so it connects with the anchor point.
- Open the drop-down menu of the "IO" block and select "streamToBytes".
- Drag "Flat File" from Services under the "IO" block so it connects with the anchor point.
- Open the drop-down menu of the "Flat File" block and select "delimitedDataBytesToDocument".
- Click "Save".
The finished pipeline looks as shown below:
Note
For detailed information on Orchestrated Integrations and Point-to-Point Integrations, refer to the MindConnect Integration documentation.
Configuring the Input/Output Signature¶
Every MindConnect integration requires an input/output signature, which must define at least one input parameter. Output parameters are optional. This integration shall take an S3Object as input parameter and output a document with rows and columns.
- Click on the menu icon at the very right of the integration block on top of your pipeline.
- Select "Define Input/Output Signature".
- Click on the plus button in the Input tab to create an input field.
- Enter a name, e.g. S3Object, and set the type to "String".
- Switch to the Output tab and click the plus button to add an output field of type "Document".
- Click the plus button again to add another field of type "Document" and activate the "Array" checkbox. This field is nested inside the other Document and represents the rows of the CSV file.
- Add four fields of type "String" representing the columns of the CSV file. The intended structure is shown below:
- Click "Apply" and then "Save".
The Input/Output signature is defined.
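The signature can be pictured as a nested data shape. A sketch in Python type hints, where the names of the four string fields are assumptions taken from the CSV header:

```python
from typing import TypedDict

class Row(TypedDict):
    """One CSV row; field names assumed from the CSV header."""
    AspectName: str
    SensorType: str
    DataType: str
    Unit: str

class OutputDocument(TypedDict):
    """Outer Document holding the nested array of rows."""
    rows: list[Row]

# S3Object (the single String input) names the file; the output document
# carries the parsed rows.
example: OutputDocument = {
    "rows": [
        {"AspectName": "breweryAspect", "SensorType": "temperature",
         "DataType": "DOUBLE", "Unit": "°C"}
    ]
}
print(len(example["rows"]))  # 1
```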
Configuring the Mapping for the Operation¶
The "Amazon Simple Storage Service (S3)" block is configured to retrieve an S3Object from the Amazon S3 bucket and forward it as a stream.
- Click on the menu icon at the very right of the "Amazon Simple Storage Service (S3)" block.
- Select "Map Input and Output".
- Configure the input mapping as shown below:
- Double-click on the field bucketName in "RetrieveS3ObjectInput" and enter the name of your S3 bucket (here "sensor-bucket-mindsphere").
- Click "Next".
- Configure the output mapping as shown below:
- Click on "Finish" and then "Save".
The mapping enables the operation "RetrieveS3Object" to read the data from the S3 bucket and output it as a stream.
Configuring the Mapping for the IO Service¶
The "IO" block is configured to convert the stream into bytes and forward it.
- Open the "Map Input and Output" dialogue of the "IO" block.
- Configure the input mapping as shown below:
- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
The mapping enables the streamToBytes service to convert the input stream into bytes.
Configuring the Mapping for the Flat File Service¶
The "Flat File" block is configured to interpret the bytes and store them in a document with rows and columns.
- Open the "Map Input and Output" dialogue of the "Flat File" block.
- Configure the input mapping as shown below:
- Double-click on the other four fields in "delimitedDataBytesToDocument Input" and fill in the following values:

Parameter | Selection |
---|---|
fieldQualifier | "Semicolon" |
textQualifier | "none" |
useHeaderRowForFieldNames | "true" |
Encoding | "windows-1252: Windows Latin" |

- Configure the output mapping as shown below:
- Click on "Finish" and then "Save".
The mapping enables the delimitedDataBytesToDocument to interpret the bytes it receives as text and store it in a document.
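The effect of the two service blocks can be reproduced locally to understand the data flow. A minimal sketch, where the function bodies are illustrative stand-ins for the streamToBytes and delimitedDataBytesToDocument services, not their actual implementations:

```python
import io

def stream_to_bytes(stream: io.BufferedIOBase) -> bytes:
    """Drain the input stream into a bytes object (cf. IO/streamToBytes)."""
    return stream.read()

def delimited_bytes_to_document(data: bytes) -> dict:
    """Decode windows-1252 bytes and split semicolon-delimited lines into a
    document, using the header row for field names (cf. the Flat File
    service delimitedDataBytesToDocument with the settings above)."""
    lines = data.decode("windows-1252").splitlines()
    header = [name for name in lines[0].split(";") if name]
    rows = [dict(zip(header, line.split(";"))) for line in lines[1:] if line]
    return {"rows": rows}

raw = ("AspectName;SensorType;DataType;Unit;\n"
       "breweryAspect;temperature;DOUBLE;°C;\n").encode("windows-1252")
doc = delimited_bytes_to_document(stream_to_bytes(io.BytesIO(raw)))
print(doc["rows"][0]["Unit"])  # °C
```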
Testing the Integration (optional)¶
- Click "Test" at the top right.
- Enter a name for the output document, e.g. "MindSphereAsset_Creation.csv".
- Click "Run".
The integration is executed in real time and the results are displayed on the "Test Results" panel. Make sure the success message is shown and verify that the data from the CSV file is displayed in the document.
Integrating the AWS Cloud into Insights Hub¶
The following steps show how to set up an automatic pipeline to create an asset in Insights Hub, which has an aspect with the variables given in the CSV file. This pipeline automatically creates the required aspect type, asset type and asset.
Creating an Account for Connecting to Insights Hub¶
If you already have an account for the "Siemens Insights Hub" application in MindConnect Integration, jump to Adding Insights Hub Operations.
- Open the application "Siemens Insights Hub" in MindConnect Integration.
- Click on "Add new account".
- Enter an account name, e.g. "MindSphere_AWS_Integration".
- Leave the other settings as default and click "Save".
MindConnect Integration can now import data into your Insights Hub tenant.
Adding Insights Hub Operations¶
The pipeline requires operations for the "Siemens Insights Hub" application to create assets, asset types and aspect types, as well as read aspect types.
Create these operations using the configuration details listed below. The required steps are the same as described above.
Custom Name (Step 1) | Operation (Step 2) |
---|---|
"CreateAsset_AWS" | "Create An Asset" |
"CreateAssetType_AWS" | "Create Or Update An Asset Type" |
"CreateAspectType_AWS" | "Create Or Update An Aspect Type" |
"GetAspectType_AWS" | "Read An Aspect Type" |
Setting up the Pipeline for Creating Aspect Types¶
This pipeline shall create a new aspect type, if an aspect type of this name does not exist yet.
- Add a new orchestrated integration.
- Set up the integration "MindSphereAspectType_AWS" as shown below using the following blocks:

Group | Block name | Configuration |
---|---|---|
Control Flow | try catch | - |
Applications | Siemens MindSphere 3.0 | Account: "MindSphere_AWS_Integration", Operation: "CreateAspectType_AWS" |
Applications | Siemens MindSphere 3.0 | Account: "MindSphere_AWS_Integration", Operation: "GetAspectType_AWS" |

- Click "Save".
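The try/catch control flow implements "create only if it does not exist yet". The logic can be sketched as follows, with an in-memory store standing in for the aspect-type API; the class and function names are illustrative, not part of MindConnect:

```python
class AspectTypeStore:
    """In-memory stand-in for the tenant's aspect-type registry."""
    def __init__(self):
        self._types = {}

    def create(self, aspect_type_id: str, definition: dict) -> dict:
        # Mirrors "CreateAspectType_AWS": fails if the type already exists.
        if aspect_type_id in self._types:
            raise ValueError(f"aspect type {aspect_type_id} already exists")
        self._types[aspect_type_id] = definition
        return definition

    def read(self, aspect_type_id: str) -> dict:
        # Mirrors "GetAspectType_AWS".
        return self._types[aspect_type_id]

def ensure_aspect_type(store: AspectTypeStore, aspect_type_id: str,
                       definition: dict) -> dict:
    """try: create the aspect type; catch: read the existing one instead."""
    try:
        return store.create(aspect_type_id, definition)
    except ValueError:
        return store.read(aspect_type_id)

store = AspectTypeStore()
first = ensure_aspect_type(store, "breweryAspect", {"category": "dynamic"})
second = ensure_aspect_type(store, "breweryAspect", {"category": "dynamic"})
print(first == second)  # True: the second call fell through to the read
```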
Configuring the Input/Output Signature¶
Define the input signature as shown below in order to fill the required fields for creating aspect types using input parameters.
- Open the "Define Input/Output Signature" dialogue of the "MindSphereAspectType_AWS" block.
- Create 3 input fields of type "String" and 1 input field of type "Document" as shown below:
- Click on "Apply" and then "Save".
Configuring the Mapping¶
The first "Siemens Insights Hub" block is configured to fill all required fields for creating a new aspect type. The second "Siemens Insights Hub" block is configured to check if an aspect type with the given aspectTypeId already exists on the tenant.
- Configure the input mapping for the "Siemens Insights Hub" block in the "try" section according to the figure and table below:

Field | Value |
---|---|
category | "dynamic" |
scope | "private" |
searchable | "true" |
length | Do not set any value |
qualitycode | "false" |

- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
- Configure the input mapping for the "Siemens Insights Hub" block in the "catch" section as shown below:
- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
Testing the Pipeline for Creating Aspect Types (optional)¶
Test the integration by manually providing the input values as shown below:
Note
If your test fails and asks you to update the If-match header, try using a different aspect name.
Setting up the Pipeline for Creating Asset Types¶
- Add a new orchestrated integration.
- Set up the integration "MindSphereCreateAssetType_AWS" as shown below using the following blocks:

Group | Block name | Configuration |
---|---|---|
Control Flow | for each | - |
Services | String | Service: "concat" |
Services | String | Service: "concat" |
Applications | Siemens MindSphere 3.0 | Account: "MindSphere_AWS_Integration", Operation: "CreateAssetType_AWS" |

- Click "Save".
Configuring the Input/Output Signature¶
Define the input signature as shown below in order to fill the required fields for creating asset types using input parameters.
- Open the "Define Input/Output Signature" dialogue of the "MindSphereCreateAssetType_AWS" block.
- Create 2 input fields of type "String" and 1 input field of type "Document" as shown below:
- Click on "Apply" and then "Save".
- Select "/document/rows" for the input field of the "for each" block.
- Click "Save".
Configuring the Mapping¶
The "String" blocks are configured to construct a string analogous to {tenantName}.{assetTypeId}. The "Siemens Insights Hub" block is configured to fill all required fields for creating an asset type.
- Configure the input mapping for the upper "String" block according to the figure and table below:

Field | Value |
---|---|
inString1 | "." |

- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
- Configure the input mapping for the lower "String" block according to the figure below:
- Configure the same output mapping as in step 2.
- Click "Finish" and then "Save".
- Configure the input mapping for the "Siemens Insights Hub" block as shown below:

Field | Value |
---|---|
scope | "private" |
parentTypeId | "core.basicasset" |

- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
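The two chained concat services build the fully qualified id {tenantName}.{assetTypeId}. A minimal sketch of that chain, where the example tenant and type names are assumptions:

```python
def concat(in_string1: str, in_string2: str) -> str:
    """Mirror of the String service "concat": inString1 + inString2."""
    return in_string1 + in_string2

tenant_name = "mytenant"        # assumption: example tenant name
asset_type_id = "breweryAsset"  # assumption: example asset type id

# Upper String block: "." + assetTypeId; lower String block prefixes the tenant.
with_prefix = concat(".", asset_type_id)
full_id = concat(tenant_name, with_prefix)
print(full_id)  # mytenant.breweryAsset
```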
Testing the Pipeline for Creating Asset Types (optional)¶
Test the integration by manually providing the input values as shown below:
Setting up the Pipeline for Creating Assets¶
This pipeline retrieves data from AWS and transfers it into Insights Hub as aspect data of an asset.
- Switch to the tab "Develop".
- Add a new orchestrated integration.
- Set up the integration "MindSphereCreateAsset_AWS" as shown below using the following blocks:

Group | Block name | Configuration |
---|---|---|
Integrations | My_S3_Bucket_Pipeline | - |
Control Workflow | Transform Pipeline | - |
Integrations | MindSphereAspectType_AWS | - |
Control Workflow | Transform Pipeline | - |
Integrations | MindSphereCreateAssetType_AWS | - |
Services | String | Service: "concat" |
Services | String | Service: "concat" |
Applications | Siemens MindSphere 3.0 | Account: "MindSphere_AWS_Integration", Operation: "CreateAsset_AWS" |

- Click "Save".
Configuring the Input/Output Signature¶
The pipeline receives variable definitions from a CSV file in AWS. In order to create an asset with these variables, the user must provide required parameters which cannot be read from the input file and specify the input file. The user input is defined in the input signature as shown below.
- Open the "Define Input/Output Signature" dialogue of the "MindSphereCreateAsset_AWS" block.
- Create 4 input fields of type "String" as shown below:
- Optionally, create 1 output field of type "String" named assetId.
- Click on "Apply" and then "Save".
Configuring the Mapping¶
The following configurations enable the integration to read an input file from AWS and create an aspect type with the variables provided in the file. Afterwards, the integration creates an associated asset type and instantiates it.
My_S3_Bucket_Pipeline¶
This block retrieves the user defined CSV file and forwards the content as document.
- Configure the input mapping for the "My_S3_Bucket_Pipeline" block according to the figure below:
- Configure the output mapping as shown below:
- Click "Finish" and then "Save".
Upper Transform Pipeline¶
This block reads the first entry in the document and forwards it as AspectName.
- Open the mapping dialogue for the upper "Transform Pipeline" block.
- Add a new field of type "String" named AspectName in the Pipeline Output.
- Configure the mapping as shown below:
- Click "Finish" and then "Save".
MindSphereAspectType_AWS¶
This block creates an aspect type using the AspectName on the user defined tenant and forwards its configuration as document.
- Configure the input mapping for the "MindSphereAspectType_AWS" block according to the figure below:
- Click "Next", "Finish" and then "Save".
Lower Transform Pipeline¶
This block reads the aspectTypeId and name of the aspect type and forwards them as aspectTypeIdArray.
- Open the mapping dialogue for the lower "Transform Pipeline" block.
- Add a new field of type "Document" named aspectTypeIdArray in the Pipeline Output.
- Add 2 fields of type "String" named aspectTypeId and name in this document.
- Configure the mapping as shown below:
- Click "Finish" and then "Save".
MindSphereCreateAssetType_AWS¶
This block creates an asset type using the aspectTypeIdArray on the user defined tenant and forwards its details as document. The assetTypeId is given by the input signature.
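The resulting asset-type document can be pictured as follows. This is an illustrative payload assembled from the mapping values used in this tutorial; every concrete name is an example assumption, not taken from a real tenant:

```python
# Illustrative asset-type document; scope and parentTypeId follow the
# mapping table earlier in this tutorial, all other values are examples.
asset_type = {
    "name": "breweryAssetType",
    "parentTypeId": "core.basicasset",
    "scope": "private",
    "aspects": [
        {"name": "breweryAspect", "aspectTypeId": "mytenant.breweryAspect"},
    ],
}
print(asset_type["aspects"][0]["aspectTypeId"])  # mytenant.breweryAspect
```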
- Configure the input mapping for the "MindSphereCreateAssetType_AWS" block according to the figure below:
- Click "Next", "Finish" and then "Save".
Upper String¶
This block creates a string starting with "." followed by the assetTypeId and forwards it as assetTypeIdWithPrefix.
- Configure the input mapping for the upper "String" block according to the figure and table below:

Field | Value |
---|---|
inString1 | "." |

- Create a new field of type "String" named assetTypeIdWithPrefix in the "Pipeline Output".
- Configure the output mapping according to the figure below:
- Click "Finish" and then "Save".
Lower String¶
This block creates a string starting with tenantPrefix followed by assetTypeIdWithPrefix and forwards it as assetTypeIdWithPrefix.
- Configure the input mapping for the lower "String" block according to the figure below:
- Configure the same output mapping as for the upper "String" block.
- Click "Finish" and then "Save".
CreateAsset_AWS¶
This block creates an asset using the assetTypeIdWithPrefix on the user defined tenant. The assetName is given by the input signature and the parentId is set to a fixed value.
- Configure the input mapping for the "Siemens MindSphere" block according to the figure and table below:

Field | Value |
---|---|
parentId | Enter the ID of the desired parent asset |

Note

Open the desired parent asset in the Asset Manager and get the parentId from the URL as shown below.

- Click "Next", "Finish" and then "Save".
Testing the Pipeline for Creating Assets (optional)¶
Test the integration by manually providing the input values as shown below:
In addition to verifying the results in "Test Results" window, you can open the Asset Manager from the MindSphere Launchpad in order to inspect the created asset.
Except where otherwise noted, content on this site is licensed under the Development License Agreement.