sdi.models package¶
Submodules¶
sdi.models.aliases module¶
SDI - Semantic Data Interconnect APIs
The Semantic Data Interconnect (SDI) is a collection of APIs that allows the user to unlock the potential of disparate big data by connecting external data. The SDI can infer the schemas of data based on schema-on-read, allows creating a semantic model, and performs big data semantic queries. It seamlessly connects to MindSphere's Integrated Data Lake (IDL), but it can work independently as well. There are two mechanisms that can be used to upload files so that SDI can generate schemas and make data ready for query. The SDI operations are divided into the following groups:

Data Registration for SDI
This set of APIs is used to organize the incoming data. When configuring a Data Registry, you have the option to update your data based on a replace or append strategy. If the schema may change and the incoming data files are completely different every time, then replace is a good strategy. The replace strategy replaces the existing schema and data during each data ingest operation, whereas the append strategy updates the existing schema and data during each data ingest operation.

Custom Data Type for SDI
By default, the SDI identifies basic data types for each property, such as String, Integer, Float and Date. The user can use this set of APIs to create their own custom data types. The SDI also provides an API in this category that suggests a data type based on user-provided sample test values.

Data Lake for SDI
The SDI provides endpoints to manage the customer's data lake registration based on tenant ID, cloud provider and data lake type. This set of REST endpoints allows creating, updating and retrieving the base path for the data lake. An IDL customer needs to create an SDI folder directly under the root folder; any file uploaded to this folder is automatically picked up by SDI for processing via IDL notification.

Data Ingest for SDI
This set of APIs allows the user to upload files, start an ingest job for uploaded files, find the status of an ingest job, or retrieve all job statuses.

Schema Registry for SDI
The SDI provides a way to find the generated schemas in this category. Users can find an SDI-generated schema for uploaded files based on source name, data tag or schema name.

Data Query for SDI
Allows querying based on the extracted schemas. Important supported APIs:
- Query interface for querying semantically correlated and transformed data.
- Stores and executes data queries.
- Uses a semantic model to translate model-based queries to physical queries.

Semantic Model for SDI
Allows the user to create semantic model ontologies based on one or more extracted schemas. The important functionalities achieved with these APIs:
- Contextual correlation of data from different systems.
- Infers and recommends mappings between different schemas.
- Import and store of semantic models.
class Aliases(attribute_name=None, alias_value=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

alias_value¶
Gets the alias_value of this Aliases.
Returns: The alias_value of this Aliases. Return type: str

attribute_map = {'alias_value': 'aliasValue', 'attribute_name': 'attributeName'}¶

attribute_name¶
Gets the attribute_name of this Aliases.
Returns: The attribute_name of this Aliases. Return type: str

attribute_types = {'alias_value': 'str', 'attribute_name': 'str'}¶
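As a sketch of how these generated model classes are used, the snippet below constructs an Aliases instance and shows how attribute_map links Python attribute names to their JSON keys. The import path follows the module layout documented on this page; the attribute values themselves are hypothetical examples.

```python
from sdi.models.aliases import Aliases

# Map a schema attribute to a shorter alias for use in a query
# (both values here are hypothetical).
alias = Aliases(attribute_name="vehicle.vin", alias_value="vin")

# attribute_map gives the JSON key for each Python attribute name,
# so a serializer would emit {"attributeName": ..., "aliasValue": ...}.
print(Aliases.attribute_map["attribute_name"])  # -> "attributeName"
print(alias.alias_value)                        # -> "vin"
```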
sdi.models.api_errors_view module¶
class ApiErrorsView(errors=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'errors': 'errors'}¶

attribute_types = {'errors': 'list[ApiFieldError]'}¶

errors¶
Gets the errors of this ApiErrorsView.
Returns: The errors of this ApiErrorsView. Return type: list[ApiFieldError]
sdi.models.api_field_error module¶
class ApiFieldError(code=None, logref=None, message=None, message_parameters=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'code': 'code', 'logref': 'logref', 'message': 'message', 'message_parameters': 'messageParameters'}¶

attribute_types = {'code': 'str', 'logref': 'str', 'message': 'str', 'message_parameters': 'list[MessageParameter]'}¶

code¶
Gets the code of this ApiFieldError.
Returns: The code of this ApiFieldError. Return type: str

logref¶
Gets the logref of this ApiFieldError.
Returns: The logref of this ApiFieldError. Return type: str

message¶
Gets the message of this ApiFieldError.
Returns: The message of this ApiFieldError. Return type: str

message_parameters¶
Gets the message_parameters of this ApiFieldError.
Returns: The message_parameters of this ApiFieldError. Return type: list[MessageParameter]
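A minimal sketch of how the two error models fit together. In practice these objects arrive in API error responses rather than being built by hand; every field value below is a hypothetical placeholder.

```python
from sdi.models.api_errors_view import ApiErrorsView
from sdi.models.api_field_error import ApiFieldError

# Hypothetical field error, shaped the way a client might receive it.
field_error = ApiFieldError(
    code="invalid.value",                          # hypothetical error code
    logref="d290f1ee-6c54-4b01-90e6-d701748f0851", # hypothetical log correlation id
    message="The supplied value is not valid.",
    message_parameters=None,                       # list[MessageParameter] when present
)

# ApiErrorsView wraps the individual field errors.
errors_view = ApiErrorsView(errors=[field_error])
print(errors_view.errors[0].message)
```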
sdi.models.create_data_lake_request module¶
class CreateDataLakeRequest(name=None, type=None, base_path=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'base_path': 'basePath', 'name': 'name', 'type': 'type'}¶

attribute_types = {'base_path': 'str', 'name': 'str', 'type': 'str'}¶

base_path¶
Gets the base_path of this CreateDataLakeRequest. This is currently supported only for IDL customers. Please refer to the document section "For Integrated Data Lake (IDL) customers" for the correct basePath structure.
Returns: The base_path of this CreateDataLakeRequest. Return type: str

name¶
Gets the name of this CreateDataLakeRequest.
Returns: The name of this CreateDataLakeRequest. Return type: str

type¶
Gets the type of this CreateDataLakeRequest.
Returns: The type of this CreateDataLakeRequest. Return type: str
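A hedged sketch of building this request body. The name, type, and base_path values are placeholders: the set of valid type values and the exact IDL basePath structure come from the referenced "For Integrated Data Lake (IDL) customers" documentation, not from this page.

```python
from sdi.models.create_data_lake_request import CreateDataLakeRequest

request = CreateDataLakeRequest(
    name="my-data-lake",           # hypothetical data lake name
    type="MindSphere",             # placeholder; consult the API docs for valid types
    base_path="sdi/mytenant",      # placeholder; see the IDL basePath documentation
)
```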
sdi.models.create_data_lake_response module¶
class CreateDataLakeResponse(id=None, name=None, type=None, base_path=None, created_date=None, updated_date=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'base_path': 'basePath', 'created_date': 'createdDate', 'id': 'id', 'name': 'name', 'type': 'type', 'updated_date': 'updatedDate'}¶

attribute_types = {'base_path': 'str', 'created_date': 'str', 'id': 'str', 'name': 'str', 'type': 'str', 'updated_date': 'str'}¶

base_path¶
Gets the base_path of this CreateDataLakeResponse.
Returns: The base_path of this CreateDataLakeResponse. Return type: str

created_date¶
Gets the created_date of this CreateDataLakeResponse.
Returns: The created_date of this CreateDataLakeResponse. Return type: str

id¶
Gets the id of this CreateDataLakeResponse.
Returns: The id of this CreateDataLakeResponse. Return type: str

name¶
Gets the name of this CreateDataLakeResponse.
Returns: The name of this CreateDataLakeResponse. Return type: str

type¶
Gets the type of this CreateDataLakeResponse.
Returns: The type of this CreateDataLakeResponse. Return type: str

updated_date¶
Gets the updated_date of this CreateDataLakeResponse.
Returns: The updated_date of this CreateDataLakeResponse. Return type: str
sdi.models.create_data_registry_request module¶
class CreateDataRegistryRequest(data_tag=None, default_root_tag=None, file_pattern=None, file_upload_strategy=None, meta_data_tags=None, source_name=None, xml_process_rules=None, partition_keys=None, schema_frozen=False)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'data_tag': 'dataTag', 'default_root_tag': 'defaultRootTag', 'file_pattern': 'filePattern', 'file_upload_strategy': 'fileUploadStrategy', 'meta_data_tags': 'metaDataTags', 'partition_keys': 'partitionKeys', 'schema_frozen': 'schemaFrozen', 'source_name': 'sourceName', 'xml_process_rules': 'xmlProcessRules'}¶

attribute_types = {'data_tag': 'str', 'default_root_tag': 'str', 'file_pattern': 'str', 'file_upload_strategy': 'str', 'meta_data_tags': 'list[str]', 'partition_keys': 'list[str]', 'schema_frozen': 'bool', 'source_name': 'str', 'xml_process_rules': 'list[str]'}¶

data_tag¶
Gets the data_tag of this CreateDataRegistryRequest.
Returns: The data_tag of this CreateDataRegistryRequest. Return type: str

default_root_tag¶
Gets the default_root_tag of this CreateDataRegistryRequest.
Returns: The default_root_tag of this CreateDataRegistryRequest. Return type: str

file_pattern¶
Gets the file_pattern of this CreateDataRegistryRequest.
Returns: The file_pattern of this CreateDataRegistryRequest. Return type: str

file_upload_strategy¶
Gets the file_upload_strategy of this CreateDataRegistryRequest.
Returns: The file_upload_strategy of this CreateDataRegistryRequest. Return type: str

meta_data_tags¶
Gets the meta_data_tags of this CreateDataRegistryRequest.
Returns: The meta_data_tags of this CreateDataRegistryRequest. Return type: list[str]

partition_keys¶
Gets the partition_keys of this CreateDataRegistryRequest. A single partitionKey can be specified at the time of registry creation. It can be set to 'sdi-default-partition-key' to enable default partitioning, or to a custom attribute present in the data to enable partitioning based on that attribute.
Returns: The partition_keys of this CreateDataRegistryRequest. Return type: list[str]

schema_frozen¶
Gets the schema_frozen of this CreateDataRegistryRequest. This property must be set to false during the initial creation of a registry. It can be changed to true after the initial schema creation to reuse the existing schema for newly ingested data.
Returns: The schema_frozen of this CreateDataRegistryRequest. Return type: bool

source_name¶
Gets the source_name of this CreateDataRegistryRequest.
Returns: The source_name of this CreateDataRegistryRequest. Return type: str

xml_process_rules¶
Gets the xml_process_rules of this CreateDataRegistryRequest.
Returns: The xml_process_rules of this CreateDataRegistryRequest. Return type: list[str]
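Tying these fields back to the package description, the sketch below configures a registry with the append strategy, default partitioning, and an unfrozen schema. The source name, data tag, and file pattern are hypothetical, and the exact accepted fileUploadStrategy literals should be confirmed against the API reference.

```python
from sdi.models.create_data_registry_request import CreateDataRegistryRequest

registry = CreateDataRegistryRequest(
    source_name="fleet",            # hypothetical source name
    data_tag="telemetry",           # hypothetical data tag
    file_pattern=".*\\.csv",        # hypothetical: only pick up CSV files
    file_upload_strategy="append",  # replace vs. append, per the package description
    partition_keys=["sdi-default-partition-key"],  # enables default partitioning
    schema_frozen=False,            # must be False on initial creation (see schema_frozen)
)
```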
sdi.models.data_lake_list module¶
class DataLakeList(data_lakes=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'data_lakes': 'dataLakes'}¶

attribute_types = {'data_lakes': 'list[DataLakeResponse]'}¶

data_lakes¶
Gets the data_lakes of this DataLakeList.
Returns: The data_lakes of this DataLakeList. Return type: list[DataLakeResponse]
sdi.models.data_lake_response module¶
class DataLakeResponse(base_path=None, created_date=None, id=None, updated_date=None, name=None, type=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'base_path': 'basePath', 'created_date': 'createdDate', 'id': 'id', 'name': 'name', 'type': 'type', 'updated_date': 'updatedDate'}¶

attribute_types = {'base_path': 'str', 'created_date': 'str', 'id': 'str', 'name': 'str', 'type': 'str', 'updated_date': 'str'}¶

base_path¶
Gets the base_path of this DataLakeResponse.
Returns: The base_path of this DataLakeResponse. Return type: str

created_date¶
Gets the created_date of this DataLakeResponse.
Returns: The created_date of this DataLakeResponse. Return type: str

id¶
Gets the id of this DataLakeResponse.
Returns: The id of this DataLakeResponse. Return type: str

name¶
Gets the name of this DataLakeResponse.
Returns: The name of this DataLakeResponse. Return type: str

type¶
Gets the type of this DataLakeResponse.
Returns: The type of this DataLakeResponse. Return type: str

updated_date¶
Gets the updated_date of this DataLakeResponse.
Returns: The updated_date of this DataLakeResponse. Return type: str
sdi.models.data_query_execute_query_request module¶
class DataQueryExecuteQueryRequest(description=None, parameters=None, aliases=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

aliases¶
Gets the aliases of this DataQueryExecuteQueryRequest.
Returns: The aliases of this DataQueryExecuteQueryRequest. Return type: list[Aliases]

attribute_map = {'aliases': 'aliases', 'description': 'description', 'parameters': 'parameters'}¶

attribute_types = {'aliases': 'list[Aliases]', 'description': 'str', 'parameters': 'list[Parameters]'}¶

description¶
Gets the description of this DataQueryExecuteQueryRequest.
Returns: The description of this DataQueryExecuteQueryRequest. Return type: str

parameters¶
Gets the parameters of this DataQueryExecuteQueryRequest.
Returns: The parameters of this DataQueryExecuteQueryRequest. Return type: list[Parameters]
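A sketch combining this request with the Aliases model documented above. The description is hypothetical, and parameters (a list[Parameters]) is omitted here for brevity.

```python
from sdi.models.aliases import Aliases
from sdi.models.data_query_execute_query_request import DataQueryExecuteQueryRequest

request = DataQueryExecuteQueryRequest(
    description="Run the stored vehicle/make query",  # hypothetical description
    aliases=[Aliases(attribute_name="vehicle.vin", alias_value="vin")],
    parameters=None,  # list[Parameters]; omitted in this sketch
)
```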
sdi.models.data_query_execute_query_response module¶
class DataQueryExecuteQueryResponse(id=None, status=None, message=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'id': 'id', 'message': 'message', 'status': 'status'}¶

attribute_types = {'id': 'str', 'message': 'str', 'status': 'str'}¶

id¶
Gets the id of this DataQueryExecuteQueryResponse.
Returns: The id of this DataQueryExecuteQueryResponse. Return type: str

message¶
Gets the message of this DataQueryExecuteQueryResponse.
Returns: The message of this DataQueryExecuteQueryResponse. Return type: str

status¶
Gets the status of this DataQueryExecuteQueryResponse.
Returns: The status of this DataQueryExecuteQueryResponse. Return type: str
sdi.models.data_query_execution_response module¶
class DataQueryExecutionResponse(id=None, description=None, parameters=None, aliases=None, query_id=None, status=None, created_date=None, updated_date=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

aliases¶
Gets the aliases of this DataQueryExecutionResponse.
Returns: The aliases of this DataQueryExecutionResponse. Return type: list[Aliases]

attribute_map = {'aliases': 'aliases', 'created_date': 'createdDate', 'description': 'description', 'id': 'id', 'parameters': 'parameters', 'query_id': 'queryId', 'status': 'status', 'updated_date': 'updatedDate'}¶

attribute_types = {'aliases': 'list[Aliases]', 'created_date': 'str', 'description': 'str', 'id': 'str', 'parameters': 'list[Parameters]', 'query_id': 'str', 'status': 'str', 'updated_date': 'str'}¶

created_date¶
Gets the created_date of this DataQueryExecutionResponse.
Returns: The created_date of this DataQueryExecutionResponse. Return type: str

description¶
Gets the description of this DataQueryExecutionResponse.
Returns: The description of this DataQueryExecutionResponse. Return type: str

id¶
Gets the id of this DataQueryExecutionResponse.
Returns: The id of this DataQueryExecutionResponse. Return type: str

parameters¶
Gets the parameters of this DataQueryExecutionResponse.
Returns: The parameters of this DataQueryExecutionResponse. Return type: list[Parameters]

query_id¶
Gets the query_id of this DataQueryExecutionResponse.
Returns: The query_id of this DataQueryExecutionResponse. Return type: str

status¶
Gets the status of this DataQueryExecutionResponse. Status of the execution job:
- CURRENT: Job has executed successfully and results are current.
- IN_PROGRESS: Job execution is in progress.
- OUTDATED: Job execution completed but results are outdated.
- FAILED: Job execution has failed.
- OBSOLETE: Job execution completed but results are obsolete.
Returns: The status of this DataQueryExecutionResponse. Return type: str

updated_date¶
Gets the updated_date of this DataQueryExecutionResponse.
Returns: The updated_date of this DataQueryExecutionResponse. Return type: str
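Given the status values documented above, a client might classify an execution job as follows. The helpers below are illustrative only, not part of the SDK.

```python
# Statuses in which the execution job is no longer running,
# per the documented status values.
TERMINAL_STATUSES = {"CURRENT", "OUTDATED", "FAILED", "OBSOLETE"}

def is_finished(response) -> bool:
    """True once a DataQueryExecutionResponse has left IN_PROGRESS."""
    return response.status in TERMINAL_STATUSES

def is_usable(response) -> bool:
    """True only when the job succeeded and its results are current."""
    return response.status == "CURRENT"
```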
sdi.models.data_query_sql_request module¶
class DataQuerySQLRequest(description=None, is_business_query=False, ontology_id=None, is_dynamic=False, name=None, sql_statement=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'description': 'description', 'is_business_query': 'isBusinessQuery', 'is_dynamic': 'isDynamic', 'name': 'name', 'ontology_id': 'ontologyId', 'sql_statement': 'sqlStatement'}¶

attribute_types = {'description': 'str', 'is_business_query': 'bool', 'is_dynamic': 'bool', 'name': 'str', 'ontology_id': 'str', 'sql_statement': 'str'}¶

description¶
Gets the description of this DataQuerySQLRequest.
Returns: The description of this DataQuerySQLRequest. Return type: str

is_business_query¶
Gets the is_business_query of this DataQuerySQLRequest.
Returns: The is_business_query of this DataQuerySQLRequest. Return type: bool

is_dynamic¶
Gets the is_dynamic of this DataQuerySQLRequest.
Returns: The is_dynamic of this DataQuerySQLRequest. Return type: bool

name¶
Gets the name of this DataQuerySQLRequest.
Returns: The name of this DataQuerySQLRequest. Return type: str

ontology_id¶
Gets the ontology_id of this DataQuerySQLRequest. If isBusinessQuery is true, then ontologyId must be passed.
Returns: The ontology_id of this DataQuerySQLRequest. Return type: str

sql_statement¶
Gets the sql_statement of this DataQuerySQLRequest. Pass the Base64-encoded value of a Spark SQL statement, such as SELECT vehicle.vin, make.def FROM vehicle, make WHERE vehicle.make = make.id. Please refer to the SDI How-to-create-query documentation for preparing the sqlStatement.
Returns: The sql_statement of this DataQuerySQLRequest. Return type: str
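Since sqlStatement must carry the Base64-encoded Spark SQL text, here is a sketch of preparing it in Python, using the example statement from the field description. The query name, description, and ontology id are hypothetical placeholders.

```python
import base64

from sdi.models.data_query_sql_request import DataQuerySQLRequest

spark_sql = (
    "SELECT vehicle.vin, make.def FROM vehicle, make "
    "WHERE vehicle.make = make.id"
)
# sqlStatement expects the Base64-encoded form of the statement.
encoded = base64.b64encode(spark_sql.encode("utf-8")).decode("ascii")

request = DataQuerySQLRequest(
    name="vehicle-make-join",   # hypothetical query name
    description="Join vehicles with their makes",
    sql_statement=encoded,
    is_business_query=True,
    ontology_id="my-ontology",  # required because isBusinessQuery is true
    is_dynamic=False,
)
```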
sdi.models.data_query_sql_response module¶
class DataQuerySQLResponse(created_date=None, description=None, executable=None, id=None, is_business_query=None, is_dynamic=None, name=None, ontology_id=None, pending_actions=None, sql_statement=None, updated_date=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.

attribute_map = {'created_date': 'createdDate', 'description': 'description', 'executable': 'executable', 'id': 'id', 'is_business_query': 'isBusinessQuery', 'is_dynamic': 'isDynamic', 'name': 'name', 'ontology_id': 'ontologyId', 'pending_actions': 'pendingActions', 'sql_statement': 'sqlStatement', 'updated_date': 'updatedDate'}¶

attribute_types = {'created_date': 'str', 'description': 'str', 'executable': 'bool', 'id': 'str', 'is_business_query': 'bool', 'is_dynamic': 'bool', 'name': 'str', 'ontology_id': 'str', 'pending_actions': 'list[MappingErrorSQLDetails]', 'sql_statement': 'str', 'updated_date': 'str'}¶

created_date¶
Gets the created_date of this DataQuerySQLResponse.
Returns: The created_date of this DataQuerySQLResponse. Return type: str

description¶
Gets the description of this DataQuerySQLResponse.
Returns: The description of this DataQuerySQLResponse. Return type: str

executable¶
Gets the executable of this DataQuerySQLResponse.
Returns: The executable of this DataQuerySQLResponse. Return type: bool

id¶
Gets the id of this DataQuerySQLResponse.
Returns: The id of this DataQuerySQLResponse. Return type: str

is_business_query¶
Gets the is_business_query of this DataQuerySQLResponse.
Returns: The is_business_query of this DataQuerySQLResponse. Return type: bool

is_dynamic¶
Gets the is_dynamic of this DataQuerySQLResponse.
Returns: The is_dynamic of this DataQuerySQLResponse. Return type: bool

name¶
Gets the name of this DataQuerySQLResponse.
Returns: The name of this DataQuerySQLResponse. Return type: str

ontology_id¶
Gets the ontology_id of this DataQuerySQLResponse.
Returns: The ontology_id of this DataQuerySQLResponse. Return type: str

pending_actions¶
Gets the pending_actions of this DataQuerySQLResponse.
Returns: The pending_actions of this DataQuerySQLResponse. Return type: list[MappingErrorSQLDetails]

sql_statement¶
Gets the sql_statement of this DataQuerySQLResponse.
Returns: The sql_statement of this DataQuerySQLResponse. Return type: str

updated_date¶
Gets the updated_date of this DataQuerySQLResponse.
Returns: The updated_date of this DataQuerySQLResponse. Return type: str
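One plausible way for a client to read this response is to treat a stored query as runnable only when executable is true and no MappingErrorSQLDetails remain in pending_actions. Note this interpretation is inferred from the field names and types; it is not stated explicitly on this page.

```python
def query_is_runnable(response) -> bool:
    # Inferred reading: `executable` flags readiness, and `pending_actions`
    # lists outstanding mapping errors (list[MappingErrorSQLDetails]).
    return bool(response.executable) and not (response.pending_actions or [])
```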
sdi.models.data_query_sql_update_request module¶
class DataQuerySQLUpdateRequest(description=None, is_business_query=False, ontology_id=None, is_dynamic=None, name=None, sql_statement=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'description': 'description', 'is_business_query': 'isBusinessQuery', 'is_dynamic': 'isDynamic', 'name': 'name', 'ontology_id': 'ontologyId', 'sql_statement': 'sqlStatement'}¶
attribute_types = {'description': 'str', 'is_business_query': 'bool', 'is_dynamic': 'bool', 'name': 'str', 'ontology_id': 'str', 'sql_statement': 'str'}¶
description¶
Gets the description of this DataQuerySQLUpdateRequest.
Returns: The description of this DataQuerySQLUpdateRequest. Return type: str
is_business_query¶
Gets the is_business_query of this DataQuerySQLUpdateRequest.
Returns: The is_business_query of this DataQuerySQLUpdateRequest. Return type: bool
is_dynamic¶
Gets the is_dynamic of this DataQuerySQLUpdateRequest.
Returns: The is_dynamic of this DataQuerySQLUpdateRequest. Return type: bool
name¶
Gets the name of this DataQuerySQLUpdateRequest.
Returns: The name of this DataQuerySQLUpdateRequest. Return type: str
ontology_id¶
Gets the ontology_id of this DataQuerySQLUpdateRequest. If isBusinessQuery is true, ontologyId must be passed.
Returns: The ontology_id of this DataQuerySQLUpdateRequest. Return type: str
sql_statement¶
Gets the sql_statement of this DataQuerySQLUpdateRequest. Pass the Base64-encoded value of a Spark SQL statement, such as SELECT vehicle.vin, make.def FROM vehicle, make WHERE vehicle.make = make.id. Refer to the SDI How-to-create-query documentation for preparing sqlStatement.
Returns: The sql_statement of this DataQuerySQLUpdateRequest. Return type: str
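A minimal sketch of building this request with the constructor keywords from the signature above; the name, description, and ontology id values are hypothetical:

    import base64
    from sdi.models.data_query_sql_update_request import DataQuerySQLUpdateRequest

    sql = "SELECT vehicle.vin, make.def FROM vehicle, make WHERE vehicle.make = make.id"
    request = DataQuerySQLUpdateRequest(
        name="vehicle-make-join",      # hypothetical query name
        description="Join vehicles to their makes",
        is_business_query=True,
        ontology_id="my-ontology-id",  # hypothetical; required when is_business_query is True
        sql_statement=base64.b64encode(sql.encode("utf-8")).decode("ascii"),
    )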
sdi.models.data_registry module¶
class DataRegistry(created_date=None, data_tag=None, default_root_tag=None, file_pattern=None, file_upload_strategy=None, updated_date=None, meta_data_tags=None, xml_process_rules=None, partition_keys=None, mutable=None, registry_id=None, source_id=None, source_name=None, schema_frozen=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'created_date': 'createdDate', 'data_tag': 'dataTag', 'default_root_tag': 'defaultRootTag', 'file_pattern': 'filePattern', 'file_upload_strategy': 'fileUploadStrategy', 'meta_data_tags': 'metaDataTags', 'mutable': 'mutable', 'partition_keys': 'partitionKeys', 'registry_id': 'registryId', 'schema_frozen': 'schemaFrozen', 'source_id': 'sourceId', 'source_name': 'sourceName', 'updated_date': 'updatedDate', 'xml_process_rules': 'xmlProcessRules'}¶
attribute_types = {'created_date': 'str', 'data_tag': 'str', 'default_root_tag': 'str', 'file_pattern': 'str', 'file_upload_strategy': 'str', 'meta_data_tags': 'list[str]', 'mutable': 'bool', 'partition_keys': 'list[str]', 'registry_id': 'str', 'schema_frozen': 'bool', 'source_id': 'str', 'source_name': 'str', 'updated_date': 'str', 'xml_process_rules': 'list[str]'}¶
created_date¶
Gets the created_date of this DataRegistry.
Returns: The created_date of this DataRegistry. Return type: str
data_tag¶
Gets the data_tag of this DataRegistry.
Returns: The data_tag of this DataRegistry. Return type: str
default_root_tag¶
Gets the default_root_tag of this DataRegistry.
Returns: The default_root_tag of this DataRegistry. Return type: str
file_pattern¶
Gets the file_pattern of this DataRegistry.
Returns: The file_pattern of this DataRegistry. Return type: str
file_upload_strategy¶
Gets the file_upload_strategy of this DataRegistry.
Returns: The file_upload_strategy of this DataRegistry. Return type: str
meta_data_tags¶
Gets the meta_data_tags of this DataRegistry.
Returns: The meta_data_tags of this DataRegistry. Return type: list[str]
mutable¶
Gets the mutable of this DataRegistry.
Returns: The mutable of this DataRegistry. Return type: bool
partition_keys¶
Gets the partition_keys of this DataRegistry.
Returns: The partition_keys of this DataRegistry. Return type: list[str]
registry_id¶
Gets the registry_id of this DataRegistry.
Returns: The registry_id of this DataRegistry. Return type: str
schema_frozen¶
Gets the schema_frozen of this DataRegistry.
Returns: The schema_frozen of this DataRegistry. Return type: bool
source_id¶
Gets the source_id of this DataRegistry.
Returns: The source_id of this DataRegistry. Return type: str
source_name¶
Gets the source_name of this DataRegistry.
Returns: The source_name of this DataRegistry. Return type: str
updated_date¶
Gets the updated_date of this DataRegistry.
Returns: The updated_date of this DataRegistry. Return type: str
xml_process_rules¶
Gets the xml_process_rules of this DataRegistry.
Returns: The xml_process_rules of this DataRegistry. Return type: list[str]
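A minimal construction sketch using the keywords from the signature above; the source, tag, and pattern values are illustrative, and the exact strings accepted for fileUploadStrategy (append vs. replace) should be checked against the service documentation:

    from sdi.models.data_registry import DataRegistry

    registry = DataRegistry(
        source_name="erp",               # hypothetical source system
        data_tag="vehicles",             # hypothetical data tag
        file_pattern=".*\\.csv",         # hypothetical file pattern
        file_upload_strategy="append",   # assumed value; 'replace' overwrites schema and data
        meta_data_tags=["fleet", "eu"],  # illustrative tags
    )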
sdi.models.data_type_definition module¶
class DataTypeDefinition(name=None, patterns=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'name': 'name', 'patterns': 'patterns'}¶
attribute_types = {'name': 'str', 'patterns': 'list[str]'}¶
name¶
Gets the name of this DataTypeDefinition.
Returns: The name of this DataTypeDefinition. Return type: str
patterns¶
Gets the patterns of this DataTypeDefinition.
Returns: The patterns of this DataTypeDefinition. Return type: list[str]
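A sketch of a custom data type built from regex patterns, as the Custom Data Type APIs expect; the type name and pattern are illustrative:

    from sdi.models.data_type_definition import DataTypeDefinition

    # A 17-character VIN, excluding the ambiguous letters I, O and Q.
    vin_type = DataTypeDefinition(
        name="VIN",
        patterns=["^[A-HJ-NPR-Z0-9]{17}$"],
    )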
sdi.models.data_type_pattern module¶
class DataTypePattern(patterns=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'patterns': 'patterns'}¶
attribute_types = {'patterns': 'list[str]'}¶
patterns¶
Gets the patterns of this DataTypePattern.
Returns: The patterns of this DataTypePattern. Return type: list[str]
sdi.models.error_message module¶
class ErrorMessage(errors=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'errors': 'errors'}¶
attribute_types = {'errors': 'list[ApiFieldError]'}¶
errors¶
Gets the errors of this ErrorMessage.
Returns: The errors of this ErrorMessage. Return type: list[ApiFieldError]
sdi.models.file module¶
class FileInput(file=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'file': 'file'}¶
attribute_types = {'file': 'file'}¶
file¶
Gets the file of this FileInput.
Returns: The file of this FileInput. Return type: file
sdi.models.get_all_sql_queries_data module¶
class GetAllSQLQueriesData(created_by=None, created_date=None, description=None, executable=None, id=None, is_business_query=None, is_dynamic=None, name=None, ontology_id=None, pending_actions=None, updated_by=None, updated_date=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'created_by': 'createdBy', 'created_date': 'createdDate', 'description': 'description', 'executable': 'executable', 'id': 'id', 'is_business_query': 'isBusinessQuery', 'is_dynamic': 'isDynamic', 'name': 'name', 'ontology_id': 'ontologyId', 'pending_actions': 'pendingActions', 'updated_by': 'updatedBy', 'updated_date': 'updatedDate'}¶
attribute_types = {'created_by': 'str', 'created_date': 'str', 'description': 'str', 'executable': 'bool', 'id': 'str', 'is_business_query': 'bool', 'is_dynamic': 'bool', 'name': 'str', 'ontology_id': 'str', 'pending_actions': 'list[MappingErrorSQLDetails]', 'updated_by': 'str', 'updated_date': 'str'}¶
created_by¶
Gets the created_by of this GetAllSQLQueriesData.
Returns: The created_by of this GetAllSQLQueriesData. Return type: str
created_date¶
Gets the created_date of this GetAllSQLQueriesData.
Returns: The created_date of this GetAllSQLQueriesData. Return type: str
description¶
Gets the description of this GetAllSQLQueriesData.
Returns: The description of this GetAllSQLQueriesData. Return type: str
executable¶
Gets the executable of this GetAllSQLQueriesData.
Returns: The executable of this GetAllSQLQueriesData. Return type: bool
id¶
Gets the id of this GetAllSQLQueriesData.
Returns: The id of this GetAllSQLQueriesData. Return type: str
is_business_query¶
Gets the is_business_query of this GetAllSQLQueriesData.
Returns: The is_business_query of this GetAllSQLQueriesData. Return type: bool
is_dynamic¶
Gets the is_dynamic of this GetAllSQLQueriesData.
Returns: The is_dynamic of this GetAllSQLQueriesData. Return type: bool
name¶
Gets the name of this GetAllSQLQueriesData.
Returns: The name of this GetAllSQLQueriesData. Return type: str
ontology_id¶
Gets the ontology_id of this GetAllSQLQueriesData.
Returns: The ontology_id of this GetAllSQLQueriesData. Return type: str
pending_actions¶
Gets the pending_actions of this GetAllSQLQueriesData.
Returns: The pending_actions of this GetAllSQLQueriesData. Return type: list[MappingErrorSQLDetails]
updated_by¶
Gets the updated_by of this GetAllSQLQueriesData.
Returns: The updated_by of this GetAllSQLQueriesData. Return type: str
updated_date¶
Gets the updated_date of this GetAllSQLQueriesData.
Returns: The updated_date of this GetAllSQLQueriesData. Return type: str
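Since executable and pending_actions together indicate whether a stored query can run, a small filtering sketch (the helper name is hypothetical):

    from typing import List
    from sdi.models.get_all_sql_queries_data import GetAllSQLQueriesData

    def blocked_queries(queries: List[GetAllSQLQueriesData]) -> List[GetAllSQLQueriesData]:
        # Keep queries that are not executable and still carry pending mapping actions.
        return [q for q in queries if not q.executable and q.pending_actions]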
sdi.models.infer_schema_search_request module¶
class InferSchemaSearchRequest(schemas=None, exclude_properties=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'exclude_properties': 'excludeProperties', 'schemas': 'schemas'}¶
attribute_types = {'exclude_properties': 'list[str]', 'schemas': 'list[InferSearchObject]'}¶
exclude_properties¶
Gets the exclude_properties of this InferSchemaSearchRequest.
Returns: The exclude_properties of this InferSchemaSearchRequest. Return type: list[str]
schemas¶
Gets the schemas of this InferSchemaSearchRequest.
Returns: The schemas of this InferSchemaSearchRequest. Return type: list[InferSearchObject]
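A sketch of a schema search request pairing this model with InferSearchObject (documented below); the source, tag, and excluded property are illustrative:

    from sdi.models.infer_schema_search_request import InferSchemaSearchRequest
    from sdi.models.infer_search_object import InferSearchObject

    request = InferSchemaSearchRequest(
        schemas=[InferSearchObject(source_name="erp", data_tag="vehicles")],
        exclude_properties=["updatedDate"],  # hypothetical property to leave out of the search
    )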
sdi.models.infer_search_object module¶
class InferSearchObject(data_tag=None, schema_name=None, source_name=None, asset_id=None, aspect_name=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
aspect_name¶
Gets the aspect_name of this InferSearchObject.
Returns: The aspect_name of this InferSearchObject. Return type: str
asset_id¶
Gets the asset_id of this InferSearchObject.
Returns: The asset_id of this InferSearchObject. Return type: str
attribute_map = {'aspect_name': 'aspectName', 'asset_id': 'assetId', 'data_tag': 'dataTag', 'schema_name': 'schemaName', 'source_name': 'sourceName'}¶
attribute_types = {'aspect_name': 'str', 'asset_id': 'str', 'data_tag': 'str', 'schema_name': 'str', 'source_name': 'str'}¶
data_tag¶
Gets the data_tag of this InferSearchObject.
Returns: The data_tag of this InferSearchObject. Return type: str
schema_name¶
Gets the schema_name of this InferSearchObject.
Returns: The schema_name of this InferSearchObject. Return type: str
source_name¶
Gets the source_name of this InferSearchObject.
Returns: The source_name of this InferSearchObject. Return type: str
sdi.models.input_class module¶
class InputClass(description=None, name=None, primary_schema=None, key_mapping_type=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'description': 'description', 'key_mapping_type': 'keyMappingType', 'name': 'name', 'primary_schema': 'primarySchema'}¶
attribute_types = {'description': 'str', 'key_mapping_type': 'str', 'name': 'str', 'primary_schema': 'str'}¶
description¶
Gets the description of this InputClass.
Returns: The description of this InputClass. Return type: str
key_mapping_type¶
Gets the key_mapping_type of this InputClass. The class-level keyMappingType; if a parent-level 'keyMappingType' is defined, the class-level value overrides it.
Returns: The key_mapping_type of this InputClass. Return type: str
name¶
Gets the name of this InputClass.
Returns: The name of this InputClass. Return type: str
primary_schema¶
Gets the primary_schema of this InputClass. If a class is mapped to more than one schema, the user can designate one of them as the primary schema.
Returns: The primary_schema of this InputClass. Return type: str
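A construction sketch for a semantic-model class; the schema name is illustrative and the keyMappingType value is a hypothetical placeholder, not a documented enum:

    from sdi.models.input_class import InputClass

    vehicle_class = InputClass(
        name="Vehicle",
        description="Vehicles correlated across ERP and telemetry sources",
        primary_schema="erp_vehicles",  # one of the mapped schemas, per the docs above
        key_mapping_type="INNER JOIN",  # hypothetical value; overrides a parent-level setting
    )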
sdi.models.input_class_property module¶
class InputClassProperty(datatype=None, description=None, name=None, parent_class=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'datatype': 'datatype', 'description': 'description', 'name': 'name', 'parent_class': 'parentClass'}¶
attribute_types = {'datatype': 'str', 'description': 'str', 'name': 'str', 'parent_class': 'InputParent'}¶
datatype¶
Gets the datatype of this InputClassProperty.
Returns: The datatype of this InputClassProperty. Return type: str
description¶
Gets the description of this InputClassProperty.
Returns: The description of this InputClassProperty. Return type: str
name¶
Gets the name of this InputClassProperty.
Returns: The name of this InputClassProperty. Return type: str
parent_class¶
Gets the parent_class of this InputClassProperty.
Returns: The parent_class of this InputClassProperty. Return type: InputParent
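A sketch attaching a property to its parent class via InputParent (documented below); all names are illustrative:

    from sdi.models.input_class_property import InputClassProperty
    from sdi.models.input_parent import InputParent

    vin_property = InputClassProperty(
        name="vin",
        datatype="String",
        description="Vehicle identification number",
        parent_class=InputParent(name="Vehicle"),
    )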
sdi.models.input_mapping module¶
class InputMapping(class_property=None, description=None, functional_mapping=None, key_mapping=None, mapping_function=None, name=None, schema_properties=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'class_property': 'classProperty', 'description': 'description', 'functional_mapping': 'functionalMapping', 'key_mapping': 'keyMapping', 'mapping_function': 'mappingFunction', 'name': 'name', 'schema_properties': 'schemaProperties'}¶
attribute_types = {'class_property': 'InputMappingClassProperty', 'description': 'str', 'functional_mapping': 'bool', 'key_mapping': 'bool', 'mapping_function': 'MappingFunction', 'name': 'str', 'schema_properties': 'list[InputMappingSchemaProperty]'}¶
class_property¶
Gets the class_property of this InputMapping.
Returns: The class_property of this InputMapping. Return type: InputMappingClassProperty
description¶
Gets the description of this InputMapping.
Returns: The description of this InputMapping. Return type: str
functional_mapping¶
Gets the functional_mapping of this InputMapping.
Returns: The functional_mapping of this InputMapping. Return type: bool
key_mapping¶
Gets the key_mapping of this InputMapping.
Returns: The key_mapping of this InputMapping. Return type: bool
mapping_function¶
Gets the mapping_function of this InputMapping.
Returns: The mapping_function of this InputMapping. Return type: MappingFunction
name¶
Gets the name of this InputMapping.
Returns: The name of this InputMapping. Return type: str
schema_properties¶
Gets the schema_properties of this InputMapping.
Returns: The schema_properties of this InputMapping. Return type: list[InputMappingSchemaProperty]
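A sketch wiring one class property to one schema property; all names are illustrative, and mapping_function is omitted because functional_mapping is False here:

    from sdi.models.input_mapping import InputMapping
    from sdi.models.input_mapping_class_property import InputMappingClassProperty
    from sdi.models.input_mapping_schema_property import InputMappingSchemaProperty
    from sdi.models.input_parent import InputParent

    mapping = InputMapping(
        name="vin-mapping",
        description="Map the ERP vin column onto Vehicle.vin",
        key_mapping=True,
        functional_mapping=False,
        class_property=InputMappingClassProperty(
            name="vin", parent_class=InputParent(name="Vehicle")
        ),
        schema_properties=[
            InputMappingSchemaProperty(
                name="vin", order="1", parent_schema=InputParent(name="erp_vehicles")
            )
        ],
    )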
sdi.models.input_mapping_class_property module¶
class InputMappingClassProperty(name=None, parent_class=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'name': 'name', 'parent_class': 'parentClass'}¶
attribute_types = {'name': 'str', 'parent_class': 'InputParent'}¶
name¶
Gets the name of this InputMappingClassProperty.
Returns: The name of this InputMappingClassProperty. Return type: str
parent_class¶
Gets the parent_class of this InputMappingClassProperty.
Returns: The parent_class of this InputMappingClassProperty. Return type: InputParent
sdi.models.input_mapping_schema_property module¶
class InputMappingSchemaProperty(name=None, order=None, parent_schema=None)[source]¶
Bases: object
Attributes:
attribute_types (dict): The key is attribute name and the value is attribute type.
attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'name': 'name', 'order': 'order', 'parent_schema': 'parentSchema'}¶
attribute_types = {'name': 'str', 'order': 'str', 'parent_schema': 'InputParent'}¶
name¶
Gets the name of this InputMappingSchemaProperty.
Returns: The name of this InputMappingSchemaProperty. Return type: str
order¶
Gets the order of this InputMappingSchemaProperty.
Returns: The order of this InputMappingSchemaProperty. Return type: str
parent_schema¶
Gets the parent_schema of this InputMappingSchemaProperty.
Returns: The parent_schema of this InputMappingSchemaProperty. Return type: InputParent
sdi.models.input_parent module¶
-
class InputParent(name=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'name': 'name'}¶
-
attribute_types = {'name': 'str'}¶
-
name¶ Gets the name of this InputParent.
Returns: The name of this InputParent. Return type: str
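Every model in sdi.models follows the same convention: attribute_map maps each Python attribute name to its JSON key, and attribute_types records the expected type of each attribute. The sketch below, using InputParent as the smallest example, shows how attribute_map can drive serialization; the import path mirrors the module name above, and the to_json helper is illustrative, not part of the SDK.

from sdi.models.input_parent import InputParent

def to_json(model):
    # attribute_map maps each Python attribute to its JSON key, so this
    # comprehension yields the model's JSON-ready representation.
    return {
        json_key: getattr(model, attr)
        for attr, json_key in model.attribute_map.items()
        if getattr(model, attr) is not None
    }

parent = InputParent(name="vehicleSchema")
print(to_json(parent))  # {'name': 'vehicleSchema'}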
sdi.models.input_property_relation module¶
-
class InputPropertyRelation(description=None, end_class_property=None, name=None, relation_type=None, start_class_property=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'description': 'description', 'end_class_property': 'endClassProperty', 'name': 'name', 'relation_type': 'relationType', 'start_class_property': 'startClassProperty'}¶
-
attribute_types = {'description': 'str', 'end_class_property': 'InputMappingClassProperty', 'name': 'str', 'relation_type': 'str', 'start_class_property': 'InputMappingClassProperty'}¶
-
description¶ Gets the description of this InputPropertyRelation.
Returns: The description of this InputPropertyRelation. Return type: str
-
end_class_property¶ Gets the end_class_property of this InputPropertyRelation.
Returns: The end_class_property of this InputPropertyRelation. Return type: InputMappingClassProperty
-
name¶ Gets the name of this InputPropertyRelation.
Returns: The name of this InputPropertyRelation. Return type: str
-
relation_type¶ Gets the relation_type of this InputPropertyRelation.
Returns: The relation_type of this InputPropertyRelation. Return type: str
-
start_class_property¶ Gets the start_class_property of this InputPropertyRelation.
Returns: The start_class_property of this InputPropertyRelation. Return type: InputMappingClassProperty
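A property relation ties a start and an end class property together under a named relation type. The sketch below is a hedged illustration: the import path for InputMappingClassProperty is assumed by analogy with the other sdi.models modules, and the name, description, and relation_type values are placeholders.

from sdi.models.input_property_relation import InputPropertyRelation
from sdi.models.input_mapping_class_property import InputMappingClassProperty  # assumed module path

relation = InputPropertyRelation(
    name="vinMatches",                                 # placeholder relation name
    description="Correlates vehicle records across two schemas",
    relation_type="1:1",                               # placeholder; consult the API for valid types
    start_class_property=InputMappingClassProperty(),  # endpoints filled in per your model
    end_class_property=InputMappingClassProperty(),
)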
sdi.models.input_schema module¶
-
class InputSchema(description=None, name=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'description': 'description', 'name': 'name'}¶
-
attribute_types = {'description': 'str', 'name': 'str'}¶
-
description¶ Gets the description of this InputSchema.
Returns: The description of this InputSchema. Return type: str
-
name¶ Gets the name of this InputSchema.
Returns: The name of this InputSchema. Return type: str
sdi.models.input_schema_property module¶
-
class InputSchemaProperty(datatype=None, description=None, name=None, parent_schema=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'datatype': 'datatype', 'description': 'description', 'name': 'name', 'parent_schema': 'parentSchema'}¶
-
attribute_types = {'datatype': 'str', 'description': 'str', 'name': 'str', 'parent_schema': 'InputParent'}¶
-
datatype¶ Gets the datatype of this InputSchemaProperty.
Returns: The datatype of this InputSchemaProperty. Return type: str
-
description¶ Gets the description of this InputSchemaProperty.
Returns: The description of this InputSchemaProperty. Return type: str
-
name¶ Gets the name of this InputSchemaProperty.
Returns: The name of this InputSchemaProperty. Return type: str
-
parent_schema¶ Gets the parent_schema of this InputSchemaProperty.
Returns: The parent_schema of this InputSchemaProperty. Return type: InputParent
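An InputSchemaProperty describes a single property of a schema and points back to its schema via an InputParent. A minimal construction sketch, assuming the import paths mirror the module names above; all values are placeholders:

from sdi.models.input_parent import InputParent
from sdi.models.input_schema_property import InputSchemaProperty

prop = InputSchemaProperty(
    name="vin",
    datatype="String",  # one of the basic SDI data types (String, Integer, Float, Date, ...)
    description="Vehicle identification number",
    parent_schema=InputParent(name="vehicleSchema"),
)
print(prop.name, prop.datatype)  # vin String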
sdi.models.iot_data_registry module¶
-
class IotDataRegistry(asset_id=None, aspect_name=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
aspect_name¶ Gets the aspect_name of this IotDataRegistry.
Returns: The aspect_name of this IotDataRegistry. Return type: str
-
asset_id¶ Gets the asset_id of this IotDataRegistry.
Returns: The asset_id of this IotDataRegistry. Return type: str
-
attribute_map = {'aspect_name': 'aspectName', 'asset_id': 'assetId'}¶
-
attribute_types = {'aspect_name': 'str', 'asset_id': 'str'}¶
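An IotDataRegistry request body is just the asset/aspect pair that identifies the IoT data to register. A minimal sketch with placeholder IDs, assuming the import path mirrors the module name above:

from sdi.models.iot_data_registry import IotDataRegistry

registry_request = IotDataRegistry(
    asset_id="0123456789abcdef0123456789abcdef",  # placeholder asset ID
    aspect_name="EnvironmentData",                # placeholder aspect name
)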
sdi.models.iot_data_registry_response module¶
-
class IotDataRegistryResponse(aspect_name=None, asset_id=None, category=None, created_date=None, data_tag=None, file_upload_strategy=None, updated_date=None, registry_id=None, source_name=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
aspect_name¶ Gets the aspect_name of this IotDataRegistryResponse.
Returns: The aspect_name of this IotDataRegistryResponse. Return type: str
-
asset_id¶ Gets the asset_id of this IotDataRegistryResponse.
Returns: The asset_id of this IotDataRegistryResponse. Return type: str
-
attribute_map = {'aspect_name': 'aspectName', 'asset_id': 'assetId', 'category': 'category', 'created_date': 'createdDate', 'data_tag': 'dataTag', 'file_upload_strategy': 'fileUploadStrategy', 'registry_id': 'registryId', 'source_name': 'sourceName', 'updated_date': 'updatedDate'}¶
-
attribute_types = {'aspect_name': 'str', 'asset_id': 'str', 'category': 'str', 'created_date': 'str', 'data_tag': 'str', 'file_upload_strategy': 'str', 'registry_id': 'str', 'source_name': 'str', 'updated_date': 'str'}¶
-
category¶ Gets the category of this IotDataRegistryResponse. The category of an IoT Data Registry is always IOT.
Returns: The category of this IotDataRegistryResponse. Return type: str
-
created_date¶ Gets the created_date of this IotDataRegistryResponse.
Returns: The created_date of this IotDataRegistryResponse. Return type: str
-
data_tag¶ Gets the data_tag of this IotDataRegistryResponse. The dataTag is a combination of the assetId and the aspectName, separated by an underscore (_).
Returns: The data_tag of this IotDataRegistryResponse. Return type: str
-
file_upload_strategy¶ Gets the file_upload_strategy of this IotDataRegistryResponse.
Returns: The file_upload_strategy of this IotDataRegistryResponse. Return type: str
-
registry_id¶ Gets the registry_id of this IotDataRegistryResponse.
Returns: The registry_id of this IotDataRegistryResponse. Return type: str
-
source_name¶ Gets the source_name of this IotDataRegistryResponse. The sourceName is always MindSphere.
Returns: The source_name of this IotDataRegistryResponse. Return type: str
-
updated_date¶ Gets the updated_date of this IotDataRegistryResponse.
Returns: The updated_date of this IotDataRegistryResponse. Return type: str
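The field descriptions above pin down three invariants of an IoT registry response: dataTag is assetId and aspectName joined by an underscore, sourceName is always MindSphere, and category is always IOT. A sketch with placeholder values that checks the dataTag invariant:

from sdi.models.iot_data_registry_response import IotDataRegistryResponse

resp = IotDataRegistryResponse(
    asset_id="0123456789abcdef0123456789abcdef",
    aspect_name="EnvironmentData",
    data_tag="0123456789abcdef0123456789abcdef_EnvironmentData",
    category="IOT",
    source_name="MindSphere",
)
# dataTag is documented as assetId and aspectName separated by an underscore:
assert resp.data_tag == f"{resp.asset_id}_{resp.aspect_name}"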
sdi.models.job_status module¶
-
class JobStatus(id=None, status=None, message=None, created_date=None, updated_date=None, ontology_response=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'created_date': 'createdDate', 'id': 'id', 'message': 'message', 'ontology_response': 'ontologyResponse', 'status': 'status', 'updated_date': 'updatedDate'}¶
-
attribute_types = {'created_date': 'str', 'id': 'str', 'message': 'str', 'ontology_response': 'JobStatusOntologyResponse', 'status': 'str', 'updated_date': 'str'}¶
-
created_date¶ Gets the created_date of this JobStatus. Start time of the ontology job, in UTC date format.
Returns: The created_date of this JobStatus. Return type: str
-
id¶ Gets the id of this JobStatus. Unique ontology job ID.
Returns: The id of this JobStatus. Return type: str
-
message¶ Gets the message of this JobStatus. Contains a message describing the created job. Possible messages: - The Request for Create Ontology. - The Request for Create Ontology using owl file upload. - The Request for Update Ontology.
Returns: The message of this JobStatus. Return type: str
-
ontology_response¶ Gets the ontology_response of this JobStatus.
Returns: The ontology_response of this JobStatus. Return type: JobStatusOntologyResponse
-
status¶ Gets the status of this JobStatus. Status of the ontology create/update job. - SUBMITTED: the job has been created, but ontology creation or update has not yet started. - IN_PROGRESS: ontology creation or update has started. - FAILED: ontology creation or update has failed; no data is available to be retrieved. - SUCCESS: ontology creation or update has finished successfully.
Returns: The status of this JobStatus. Return type: str
-
updated_date¶ Gets the updated_date of this JobStatus. Time the job was last modified, in UTC date format. The backend updates this time whenever the job status changes.
Returns: The updated_date of this JobStatus. Return type: str
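Since SUBMITTED and IN_PROGRESS are the only non-terminal states, a client can poll a job until it reaches SUCCESS or FAILED. A minimal polling sketch; fetch_job_status is a hypothetical callable standing in for whichever SDK call returns a JobStatus for a job ID:

import time

TERMINAL_STATES = {"SUCCESS", "FAILED"}

def wait_for_ontology_job(fetch_job_status, job_id, interval_s=5.0):
    # Poll until the job leaves SUBMITTED/IN_PROGRESS.
    while True:
        job = fetch_job_status(job_id)  # assumed to return a JobStatus
        if job.status in TERMINAL_STATES:
            return job
        time.sleep(interval_s)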
sdi.models.job_status_ontology_response module¶
-
class JobStatusOntologyResponse(ontology_id=None, ontology_errors=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'ontology_errors': 'ontologyErrors', 'ontology_id': 'ontologyId'}¶
-
attribute_types = {'ontology_errors': 'list[ApiFieldError]', 'ontology_id': 'str'}¶
-
ontology_errors¶ Gets the ontology_errors of this JobStatusOntologyResponse. ontologyErrors is present if ontology creation failed.
Returns: The ontology_errors of this JobStatusOntologyResponse. Return type: list[ApiFieldError]
-
ontology_id¶ Gets the ontology_id of this JobStatusOntologyResponse. The ontology ID is present if ontology creation succeeded.
Returns: The ontology_id of this JobStatusOntologyResponse. Return type: str
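The two fields are complementary: ontology_id is populated on success, ontology_errors on failure. A small branching sketch over a JobStatusOntologyResponse:

def report_ontology_result(response):
    # ontology_id is present on success; ontology_errors on failure.
    if response.ontology_id:
        print(f"Ontology created: {response.ontology_id}")
    else:
        for err in response.ontology_errors or []:  # list[ApiFieldError]
            print(f"Ontology error: {err}")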
sdi.models.list_of_data_type_definition module¶
-
class ListOfDataTypeDefinition(data_types=None, page=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'data_types': 'dataTypes', 'page': 'page'}¶
-
attribute_types = {'data_types': 'list[DataTypeDefinition]', 'page': 'TokenPage'}¶
-
data_types¶ Gets the data_types of this ListOfDataTypeDefinition.
Returns: The data_types of this ListOfDataTypeDefinition. Return type: list[DataTypeDefinition]
sdi.models.list_of_io_t_registry_response module¶
-
class ListOfIoTRegistryResponse(iot_data_registries=None, page=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'iot_data_registries': 'iotDataRegistries', 'page': 'page'}¶
-
attribute_types = {'iot_data_registries': 'list[IotDataRegistryResponse]', 'page': 'TokenPage'}¶
-
iot_data_registries¶ Gets the iot_data_registries of this ListOfIoTRegistryResponse.
Returns: The iot_data_registries of this ListOfIoTRegistryResponse. Return type: list[IotDataRegistryResponse]
sdi.models.list_of_job_ids module¶
-
class ListOfJobIds(ingest_job_status=None, page=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'ingest_job_status': 'ingestJobStatus', 'page': 'page'}¶
-
attribute_types = {'ingest_job_status': 'list[SdiJobStatusResponse]', 'page': 'TokenPage'}¶
-
ingest_job_status¶ Gets the ingest_job_status of this ListOfJobIds.
Returns: The ingest_job_status of this ListOfJobIds. Return type: list[SdiJobStatusResponse]
sdi.models.list_of_patterns module¶
-
class ListOfPatterns(suggest_patterns=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'suggest_patterns': 'suggestPatterns'}¶
-
attribute_types = {'suggest_patterns': 'list[Pattern]'}¶
sdi.models.list_of_registry_response module¶
-
class ListOfRegistryResponse(data_registries=None, page=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'data_registries': 'dataRegistries', 'page': 'page'}¶
-
attribute_types = {'data_registries': 'list[DataRegistry]', 'page': 'TokenPage'}¶
-
data_registries¶ Gets the data_registries of this ListOfRegistryResponse.
Returns: The data_registries of this ListOfRegistryResponse. Return type: list[DataRegistry]
sdi.models.list_of_schema_properties module¶
sdi.models.list_of_schema_registry module¶
-
class ListOfSchemaRegistry(schemas=None, page=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map = {'page': 'page', 'schemas': 'schemas'}¶
-
attribute_types = {'page': 'TokenPage', 'schemas': 'list[SDISchemaRegistry]'}¶
-
page¶ Gets the page of this ListOfSchemaRegistry.
Returns: The page of this ListOfSchemaRegistry. Return type: TokenPage
-
schemas¶ Gets the schemas of this ListOfSchemaRegistry.
Returns: The schemas of this ListOfSchemaRegistry. Return type: list[SDISchemaRegistry]
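List responses such as this one pair their payload with a TokenPage for token-based pagination. A hedged sketch of draining all pages: fetch_schemas is a hypothetical callable returning a ListOfSchemaRegistry, and the next_token attribute on TokenPage is an assumption, not confirmed by this page.

def iter_all_schemas(fetch_schemas):
    token = None
    while True:
        result = fetch_schemas(page_token=token)  # assumed to return a ListOfSchemaRegistry
        for schema in result.schemas or []:
            yield schema
        # TokenPage is assumed to carry a next_token; stop when it is absent.
        token = getattr(result.page, "next_token", None) if result.page else None
        if not token:
            break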
sdi.models.mapping_error_sql_details module¶
-
class
MappingErrorSQLDetails
(field=None, message=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'field': 'field', 'message': 'message'}¶
-
attribute_types
= {'field': 'str', 'message': 'str'}¶
-
field
¶ Gets the field of this MappingErrorSQLDetails.
Returns: The field of this MappingErrorSQLDetails. Return type: str
-
message
¶ Gets the message of this MappingErrorSQLDetails.
Returns: The message of this MappingErrorSQLDetails. Return type: str
sdi.models.mapping_function module¶
-
class
MappingFunction
(operator=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'operator': 'operator'}¶
-
attribute_types
= {'operator': 'str'}¶
-
operator
¶ Gets the operator of this MappingFunction.
Returns: The operator of this MappingFunction. Return type: str
sdi.models.mdsp_api_error module¶
-
class
MdspApiError
(code=None, message=None, message_parameters=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'code': 'code', 'message': 'message', 'message_parameters': 'messageParameters'}¶
-
attribute_types
= {'code': 'str', 'message': 'str', 'message_parameters': 'list[MdspApiErrorMessageParameters]'}¶
-
code
¶ Gets the code of this MdspApiError. Unique error code. Every code is bound to one (parametrized) message.
Returns: The code of this MdspApiError. Return type: str
-
message
¶ Gets the message of this MdspApiError. Human readable error message in English.
Returns: The message of this MdspApiError. Return type: str
-
message_parameters
¶ Gets the message_parameters of this MdspApiError. If an error message is parametrized, the parameter names and values are returned, for example for localization purposes. The parametrized error messages are defined in the operation error response descriptions of this API specification. Parameters are denoted by named placeholders ‘{<parameter name>}’ in the message specifications. At runtime, returned message placeholders are substituted with the actual parameter values.
Returns: The message_parameters of this MdspApiError. Return type: list[MdspApiErrorMessageParameters]
sdi.models.mdsp_api_error_message_parameters module¶
-
class
MdspApiErrorMessageParameters
(name=None, value=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'name': 'name', 'value': 'value'}¶
-
attribute_types
= {'name': 'str', 'value': 'str'}¶
-
name
¶ Gets the name of this MdspApiErrorMessageParameters. Name of message parameter as specified in parametrized error message.
Returns: The name of this MdspApiErrorMessageParameters. Return type: str
-
value
¶ Gets the value of this MdspApiErrorMessageParameters. Value of message parameter as substituted in returned error message.
Returns: The value of this MdspApiErrorMessageParameters. Return type: str
sdi.models.mdsp_error module¶
-
class
MdspError
(code=None, message=None, message_parameters=None, logref=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'code': 'code', 'logref': 'logref', 'message': 'message', 'message_parameters': 'messageParameters'}¶
-
attribute_types
= {'code': 'str', 'logref': 'str', 'message': 'str', 'message_parameters': 'list[MdspApiErrorMessageParameters]'}¶
-
code
¶ Gets the code of this MdspError. Unique error code. Every code is bound to one (parametrized) message.
Returns: The code of this MdspError. Return type: str
-
logref
¶ Gets the logref of this MdspError. Logging correlation ID for debugging purposes.
Returns: The logref of this MdspError. Return type: str
-
message
¶ Gets the message of this MdspError. Human readable error message in English.
Returns: The message of this MdspError. Return type: str
-
message_parameters
¶ Gets the message_parameters of this MdspError. If an error message is parametrized, the parameter names and values are returned, for example for localization purposes. The parametrized error messages are defined in the operation error response descriptions of this API specification. Parameters are denoted by named placeholders ‘{<parameter name>}’ in the message specifications. At runtime, returned message placeholders are substituted with the actual parameter values.
Returns: The message_parameters of this MdspError. Return type: list[MdspApiErrorMessageParameters]
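The placeholder convention described for message_parameters can also be applied client-side. A minimal sketch (assuming the generated constructors populate the documented properties; the code and logref values are purely illustrative)::

    from sdi.models.mdsp_api_error_message_parameters import (
        MdspApiErrorMessageParameters,
    )
    from sdi.models.mdsp_error import MdspError

    def render_message(error):
        """Substitute '{<parameter name>}' placeholders with actual values."""
        message = error.message or ""
        for param in error.message_parameters or []:
            message = message.replace("{%s}" % param.name, str(param.value))
        return message

    err = MdspError(
        code="sdi.example.notFound",  # hypothetical error code
        message="Ontology {ontologyId} was not found.",
        message_parameters=[
            MdspApiErrorMessageParameters(name="ontologyId", value="abc-123"),
        ],
        logref="trace-42",
    )
    print(render_message(err))  # -> Ontology abc-123 was not found.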
sdi.models.mdsp_errors module¶
-
class
MdspErrors
(errors=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'errors': 'errors'}¶
-
attribute_types
= {'errors': 'list[MdspError]'}¶
sdi.models.message_parameter module¶
-
class
MessageParameter
(name=None, value=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'name': 'name', 'value': 'value'}¶
-
attribute_types
= {'name': 'str', 'value': 'object'}¶
-
name
¶ Gets the name of this MessageParameter.
Returns: The name of this MessageParameter. Return type: str
-
value
¶ Gets the value of this MessageParameter.
Returns: The value of this MessageParameter. Return type: object
sdi.models.native_query_get_response module¶
-
class
NativeQueryGetResponse
(created_date=None, description=None, executable=None, id=None, is_business_query=None, is_dynamic=None, name=None, ontology_id=None, pending_actions=None, sql_statement=None, updated_date=None, last_ten_execution_job_ids=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'created_date': 'createdDate', 'description': 'description', 'executable': 'executable', 'id': 'id', 'is_business_query': 'isBusinessQuery', 'is_dynamic': 'isDynamic', 'last_ten_execution_job_ids': 'lastTenExecutionJobIds', 'name': 'name', 'ontology_id': 'ontologyId', 'pending_actions': 'pendingActions', 'sql_statement': 'sqlStatement', 'updated_date': 'updatedDate'}¶
-
attribute_types
= {'created_date': 'str', 'description': 'str', 'executable': 'bool', 'id': 'str', 'is_business_query': 'bool', 'is_dynamic': 'bool', 'last_ten_execution_job_ids': 'list[str]', 'name': 'str', 'ontology_id': 'str', 'pending_actions': 'list[MappingErrorSQLDetails]', 'sql_statement': 'str', 'updated_date': 'str'}¶
-
created_date
¶ Gets the created_date of this NativeQueryGetResponse.
Returns: The created_date of this NativeQueryGetResponse. Return type: str
-
description
¶ Gets the description of this NativeQueryGetResponse.
Returns: The description of this NativeQueryGetResponse. Return type: str
-
executable
¶ Gets the executable of this NativeQueryGetResponse.
Returns: The executable of this NativeQueryGetResponse. Return type: bool
-
id
¶ Gets the id of this NativeQueryGetResponse.
Returns: The id of this NativeQueryGetResponse. Return type: str
-
is_business_query
¶ Gets the is_business_query of this NativeQueryGetResponse.
Returns: The is_business_query of this NativeQueryGetResponse. Return type: bool
-
is_dynamic
¶ Gets the is_dynamic of this NativeQueryGetResponse.
Returns: The is_dynamic of this NativeQueryGetResponse. Return type: bool
-
last_ten_execution_job_ids
¶ Gets the last_ten_execution_job_ids of this NativeQueryGetResponse.
Returns: The last_ten_execution_job_ids of this NativeQueryGetResponse. Return type: list[str]
-
name
¶ Gets the name of this NativeQueryGetResponse.
Returns: The name of this NativeQueryGetResponse. Return type: str
-
ontology_id
¶ Gets the ontology_id of this NativeQueryGetResponse.
Returns: The ontology_id of this NativeQueryGetResponse. Return type: str
-
pending_actions
¶ Gets the pending_actions of this NativeQueryGetResponse.
Returns: The pending_actions of this NativeQueryGetResponse. Return type: list[MappingErrorSQLDetails]
-
sql_statement
¶ Gets the sql_statement of this NativeQueryGetResponse.
Returns: The sql_statement of this NativeQueryGetResponse. Return type: str
-
updated_date
¶ Gets the updated_date of this NativeQueryGetResponse.
Returns: The updated_date of this NativeQueryGetResponse. Return type: str
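Because pending_actions carries MappingErrorSQLDetails entries and executable is a bool, a caller can check whether a stored query is runnable before executing it. A minimal sketch (the field values are illustrative; a real NativeQueryGetResponse would come from the Data Query APIs)::

    from sdi.models.mapping_error_sql_details import MappingErrorSQLDetails
    from sdi.models.native_query_get_response import NativeQueryGetResponse

    query = NativeQueryGetResponse(
        id="q-001",               # hypothetical query id
        name="vehicle_join",
        executable=False,
        pending_actions=[
            MappingErrorSQLDetails(field="vin", message="Unmapped column"),
        ],
    )

    if query.executable:
        print("Query %s is ready to execute." % query.name)
    else:
        # Each pending action names the offending field and the reason.
        for action in query.pending_actions or []:
            print("Fix %s: %s" % (action.field, action.message))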
sdi.models.null_request module¶
-
class
NullRequest
(infer_schema_search_request=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'infer_schema_search_request': 'inferSchemaSearchRequest'}¶
-
attribute_types
= {'infer_schema_search_request': 'InferSchemaSearchRequest'}¶
-
infer_schema_search_request
¶ Gets the infer_schema_search_request of this NullRequest.
Returns: The infer_schema_search_request of this NullRequest. Return type: InferSchemaSearchRequest
sdi.models.ontology_create_request module¶
-
class
OntologyCreateRequest
(ontology_description=None, id=None, ontology_name=None, key_mapping_type=None, file=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'file': 'file', 'id': 'id', 'key_mapping_type': 'keyMappingType', 'ontology_description': 'ontologyDescription', 'ontology_name': 'ontologyName'}¶
-
attribute_types
= {'file': 'file', 'id': 'str', 'key_mapping_type': 'str', 'ontology_description': 'str', 'ontology_name': 'str'}¶
-
file
¶ Gets the file of this OntologyCreateRequest.
Returns: The file of this OntologyCreateRequest. Return type: str
-
id
¶ Gets the id of this OntologyCreateRequest. Ontology id.
Returns: The id of this OntologyCreateRequest. Return type: str
-
key_mapping_type
¶ Gets the key_mapping_type of this OntologyCreateRequest. Ontology keyMappingType.
Returns: The key_mapping_type of this OntologyCreateRequest. Return type: str
-
ontology_description
¶ Gets the ontology_description of this OntologyCreateRequest. Ontology description.
Returns: The ontology_description of this OntologyCreateRequest. Return type: str
-
ontology_name
¶ Gets the ontology_name of this OntologyCreateRequest. Ontology name.
Returns: The ontology_name of this OntologyCreateRequest. Return type: str
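A minimal sketch of assembling an OntologyCreateRequest from the constructor arguments shown above (all field values are illustrative, and submitting the request via the Semantic Model APIs is not shown)::

    from sdi.models.ontology_create_request import OntologyCreateRequest

    request = OntologyCreateRequest(
        ontology_name="vehicle-ontology",  # hypothetical name
        ontology_description="Correlates vehicle and telemetry schemas",
        key_mapping_type="INNER JOIN",     # hypothetical keyMappingType value
    )
    print(request.ontology_name, request.key_mapping_type)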
sdi.models.ontology_job module¶
-
class
OntologyJob
(id=None, status=None, message=None, created_date=None, updated_date=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'created_date': 'createdDate', 'id': 'id', 'message': 'message', 'status': 'status', 'updated_date': 'updatedDate'}¶
-
attribute_types
= {'created_date': 'str', 'id': 'str', 'message': 'str', 'status': 'str', 'updated_date': 'str'}¶
-
created_date
¶ Gets the created_date of this OntologyJob. Creation time of the Ontology job, in UTC date format.
Returns: The created_date of this OntologyJob. Return type: str
-
id
¶ Gets the id of this OntologyJob. Unique Ontology job ID.
Returns: The id of this OntologyJob. Return type: str
-
message
¶ Gets the message of this OntologyJob. Contains a message describing the created job. Possible messages: - The Request for Create Ontology. - The Request for Create Ontology using OWL file upload. - The Request for Update Ontology.
Returns: The message of this OntologyJob. Return type: str
-
status
¶ Gets the status of this OntologyJob. Status of the ontology creation/update job. - SUBMITTED: The job has been created, but creation or update of the ontology has not yet started. - IN_PROGRESS: Ontology creation or update has started. - FAILED: Ontology creation or update has failed; no data is available to be retrieved. - SUCCESS: Ontology creation or update has finished successfully.
Returns: The status of this OntologyJob. Return type: str
-
updated_date
¶ Gets the updated_date of this OntologyJob. Job last modified time in UTC date format. The backend updates this time whenever the job status changes.
Returns: The updated_date of this OntologyJob. Return type: str
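Since SUBMITTED and IN_PROGRESS are the only non-terminal statuses, a caller can poll an ontology job until it settles. A minimal sketch (fetch_job is a hypothetical stand-in for whatever client call returns an OntologyJob by id; it is not part of this module)::

    import time

    TERMINAL_STATUSES = {"SUCCESS", "FAILED"}

    def wait_for_ontology_job(fetch_job, job_id, interval_seconds=5.0):
        """Poll until the job leaves SUBMITTED/IN_PROGRESS, then return it."""
        while True:
            job = fetch_job(job_id)  # expected to return an OntologyJob
            if job.status in TERMINAL_STATUSES:
                return job
            time.sleep(interval_seconds)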
sdi.models.ontology_metadata module¶
-
class
OntologyMetadata
(created_date=None, ontology_description=None, id=None, ontology_name=None, key_mapping_type=None, updated_date=None, file=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'created_date': 'createdDate', 'file': 'file', 'id': 'id', 'key_mapping_type': 'keyMappingType', 'ontology_description': 'ontologyDescription', 'ontology_name': 'ontologyName', 'updated_date': 'updatedDate'}¶
-
attribute_types
= {'created_date': 'str', 'file': 'file', 'id': 'str', 'key_mapping_type': 'str', 'ontology_description': 'str', 'ontology_name': 'str', 'updated_date': 'str'}¶
-
created_date
¶ Gets the created_date of this OntologyMetadata.
Returns: The created_date of this OntologyMetadata. Return type: str
-
file
¶ Gets the file of this OntologyMetadata.
Returns: The file of this OntologyMetadata. Return type: str
-
id
¶ Gets the id of this OntologyMetadata. Ontology id.
Returns: The id of this OntologyMetadata. Return type: str
-
key_mapping_type
¶ Gets the key_mapping_type of this OntologyMetadata. Ontology keyMappingType.
Returns: The key_mapping_type of this OntologyMetadata. Return type: str
-
ontology_description
¶ Gets the ontology_description of this OntologyMetadata. Ontology description.
Returns: The ontology_description of this OntologyMetadata. Return type: str
-
ontology_name
¶ Gets the ontology_name of this OntologyMetadata. Ontology name.
Returns: The ontology_name of this OntologyMetadata. Return type: str
-
updated_date
¶ Gets the updated_date of this OntologyMetadata.
Returns: The updated_date of this OntologyMetadata. Return type: str
sdi.models.ontology_response_data module¶
-
class
OntologyResponseData
(created_date=None, ontology_description=None, id=None, ontology_name=None, key_mapping_type=None, updated_date=None, class_properties=None, classes=None, mappings=None, property_relations=None, schema_properties=None, schemas=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'class_properties': 'classProperties', 'classes': 'classes', 'created_date': 'createdDate', 'id': 'id', 'key_mapping_type': 'keyMappingType', 'mappings': 'mappings', 'ontology_description': 'ontologyDescription', 'ontology_name': 'ontologyName', 'property_relations': 'propertyRelations', 'schema_properties': 'schemaProperties', 'schemas': 'schemas', 'updated_date': 'updatedDate'}¶
-
attribute_types
= {'class_properties': 'list[InputClassProperty]', 'classes': 'list[InputClass]', 'created_date': 'str', 'id': 'str', 'key_mapping_type': 'str', 'mappings': 'list[InputMapping]', 'ontology_description': 'str', 'ontology_name': 'str', 'property_relations': 'list[InputPropertyRelation]', 'schema_properties': 'list[InputSchemaProperty]', 'schemas': 'list[InputSchema]', 'updated_date': 'str'}¶
-
class_properties
¶ Gets the class_properties of this OntologyResponseData.
Returns: The class_properties of this OntologyResponseData. Return type: list[InputClassProperty]
-
classes
¶ Gets the classes of this OntologyResponseData.
Returns: The classes of this OntologyResponseData. Return type: list[InputClass]
-
created_date
¶ Gets the created_date of this OntologyResponseData.
Returns: The created_date of this OntologyResponseData. Return type: str
-
id
¶ Gets the id of this OntologyResponseData. Ontology id.
Returns: The id of this OntologyResponseData. Return type: str
-
key_mapping_type
¶ Gets the key_mapping_type of this OntologyResponseData. Ontology keyMappingType.
Returns: The key_mapping_type of this OntologyResponseData. Return type: str
-
mappings
¶ Gets the mappings of this OntologyResponseData.
Returns: The mappings of this OntologyResponseData. Return type: list[InputMapping]
-
ontology_description
¶ Gets the ontology_description of this OntologyResponseData. Ontology description.
Returns: The ontology_description of this OntologyResponseData. Return type: str
-
ontology_name
¶ Gets the ontology_name of this OntologyResponseData. Ontology name.
Returns: The ontology_name of this OntologyResponseData. Return type: str
-
property_relations
¶ Gets the property_relations of this OntologyResponseData.
Returns: The property_relations of this OntologyResponseData. Return type: list[InputPropertyRelation]
-
schema_properties
¶ Gets the schema_properties of this OntologyResponseData.
Returns: The schema_properties of this OntologyResponseData. Return type: list[InputSchemaProperty]
-
schemas
¶ Gets the schemas of this OntologyResponseData.
Returns: The schemas of this OntologyResponseData. Return type: list[InputSchema]
-
updated_date
¶ Gets the updated_date of this OntologyResponseData.
Returns: The updated_date of this OntologyResponseData. Return type: str
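A minimal sketch of walking the documented properties of an OntologyResponseData to summarize what an ontology contains (response is assumed to be an OntologyResponseData obtained from the Semantic Model APIs)::

    def summarize_ontology(response):
        """Print a count of each component of the ontology response."""
        parts = {
            "classes": response.classes,
            "class properties": response.class_properties,
            "schemas": response.schemas,
            "schema properties": response.schema_properties,
            "mappings": response.mappings,
            "property relations": response.property_relations,
        }
        print("Ontology %s (%s)" % (response.ontology_name, response.id))
        for label, items in parts.items():
            # Any of the lists may be None if the component is absent.
            print("  %d %s" % (len(items or []), label))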
sdi.models.page_token module¶
-
class
PageToken
(page_token=None, alias_value=None)[source]¶ Bases:
object
- Attributes:
  - attribute_types (dict): The key is the attribute name and the value is the attribute type.
  - attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'page_token': 'page_value'}¶
-
attribute_types
= {'page_token': 'str'}¶
-
page_token
¶ Gets the page_token of this PageToken.
Returns: The page_token of this PageToken. Return type: str
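PageToken supports token-based pagination: the token from one response is passed back to fetch the next page. A minimal sketch of the loop shape (list_page is a hypothetical stand-in for an SDK call that accepts a page token and returns the items plus the next token, or None when exhausted)::

    def iterate_all(list_page):
        """Yield items from every page until no further token is returned."""
        token = None
        while True:
            items, token = list_page(page_token=token)
            for item in items:
                yield item
            if not token:
                return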
sdi.models.parameters module¶
class Parameters(param_name=None, param_value=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'param_name': 'paramName', 'param_value': 'paramValue'}¶
attribute_types = {'param_name': 'str', 'param_value': 'str'}¶
param_name¶
Gets the param_name of this Parameters.
Returns: The param_name of this Parameters. Return type: str
param_value¶
Gets the param_value of this Parameters.
Returns: The param_value of this Parameters. Return type: str
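For illustration only, a sketch of turning a Parameters pair into its JSON shape; the attribute_map above maps the snake_case attributes to camelCase keys, and all values here are made up.

    from sdi.models.parameters import Parameters

    param = Parameters(param_name='region', param_value='eu-central')

    # Rename snake_case attributes to their camelCase JSON keys.
    as_json = {Parameters.attribute_map[a]: getattr(param, a)
               for a in Parameters.attribute_types}
    print(as_json)  # {'paramName': 'region', 'paramValue': 'eu-central'}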
sdi.models.pattern module¶
SDI - Semantic Data Interconnect APIs
class Pattern(schema=None, matches=None, schema_valid=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'matches': 'matches', 'schema': 'schema', 'schema_valid': 'schemaValid'}¶
attribute_types = {'matches': 'list[str]', 'schema': 'str', 'schema_valid': 'bool'}¶
matches¶
Gets the matches of this Pattern.
Returns: The matches of this Pattern. Return type: list[str]
schema¶
Gets the schema of this Pattern.
Returns: The schema of this Pattern. Return type: str
schema_valid¶
Gets the schema_valid of this Pattern.
Returns: The schema_valid of this Pattern. Return type: bool
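A hedged sketch of inspecting a Pattern, assuming schema holds a suggested pattern string, matches the sample values it matched, and schema_valid whether the pattern held; all values are illustrative.

    from sdi.models.pattern import Pattern

    p = Pattern(schema=r'\d{4}-\d{2}-\d{2}',
                matches=['2023-01-15', '2023-02-28'],
                schema_valid=True)

    if p.schema_valid:
        print(f'{p.schema!r} matched {len(p.matches)} sample value(s)')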
sdi.models.query_obsolete_result module¶
SDI - Semantic Data Interconnect APIs
class QueryObsoleteResult(errors=None, data=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'data': 'data', 'errors': 'errors'}¶
attribute_types = {'data': 'list[ERRORUNKNOWN]', 'errors': 'list[ApiFieldError]'}¶
data¶
Gets the data of this QueryObsoleteResult.
Returns: The data of this QueryObsoleteResult. Return type: list[ERRORUNKNOWN]
errors¶
Gets the errors of this QueryObsoleteResult.
Returns: The errors of this QueryObsoleteResult. Return type: list[ApiFieldError]
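A minimal sketch of checking a QueryObsoleteResult; treating an empty errors list as success is an interpretation for illustration, not documented behavior, and the data payload is made up.

    from sdi.models.query_obsolete_result import QueryObsoleteResult

    result = QueryObsoleteResult(errors=[], data=[{'rowCount': 42}])

    if result.errors:        # list[ApiFieldError] entries when the query failed
        print('query failed:', result.errors)
    else:
        print('rows:', result.data)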
sdi.models.query_parameters module¶
SDI - Semantic Data Interconnect APIs
class QueryParameters(data_tag=None, source_name=None, page_token=None, query_id=None, status=None, executable=None, is_dynamic=None, ontology_id=None, filter=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'data_tag': 'data_tag', 'executable': 'executable', 'filter': 'filter', 'is_dynamic': 'is_dynamic', 'ontology_id': 'ontology_id', 'page_token': 'page_value', 'query_id': 'query_id', 'source_name': 'source_name', 'status': 'status'}¶
attribute_types = {'data_tag': 'str', 'executable': 'str', 'filter': 'str', 'is_dynamic': 'str', 'ontology_id': 'str', 'page_token': 'str', 'query_id': 'str', 'source_name': 'str', 'status': 'str'}¶
data_tag¶
Gets the data_tag of this QueryParameters.
Returns: The data_tag of this QueryParameters. Return type: str
executable¶
Gets the executable of this QueryParameters.
Returns: The executable of this QueryParameters. Return type: str
filter¶
Gets the filter of this QueryParameters.
Returns: The filter of this QueryParameters. Return type: str
is_dynamic¶
Gets the is_dynamic of this QueryParameters.
Returns: The is_dynamic of this QueryParameters. Return type: str
ontology_id¶
Gets the ontology_id of this QueryParameters.
Returns: The ontology_id of this QueryParameters. Return type: str
page_token¶
Gets the page_token of this QueryParameters.
Returns: The page_token of this QueryParameters. Return type: str
query_id¶
Gets the query_id of this QueryParameters.
Returns: The query_id of this QueryParameters. Return type: str
source_name¶
Gets the source_name of this QueryParameters.
Returns: The source_name of this QueryParameters. Return type: str
status¶
Gets the status of this QueryParameters.
Returns: The status of this QueryParameters. Return type: str
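A sketch, with made-up values, of turning QueryParameters into a query string; note that attribute_map sends page_token over the wire as page_value. Encoding these as URL query parameters is an assumption for illustration.

    from sdi.models.query_parameters import QueryParameters

    qp = QueryParameters(data_tag='sensors', source_name='plant01', status='CURRENT')

    query_string = '&'.join(
        f'{QueryParameters.attribute_map[a]}={getattr(qp, a)}'
        for a in QueryParameters.attribute_types
        if getattr(qp, a) is not None)
    print(query_string)  # data_tag=sensors&source_name=plant01&status=CURRENT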
sdi.models.query_result module¶
SDI - Semantic Data Interconnect APIs
class QueryResult(status=None, timestamp=None, data=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'data': 'data', 'status': 'status', 'timestamp': 'timestamp'}¶
attribute_types = {'data': 'list[ERRORUNKNOWN]', 'status': 'str', 'timestamp': 'str'}¶
data¶
Gets the data of this QueryResult.
Returns: The data of this QueryResult. Return type: list[ERRORUNKNOWN]
status¶
Gets the status of this QueryResult. Query result status.
Returns: The status of this QueryResult. Return type: str
timestamp¶
Gets the timestamp of this QueryResult.
Returns: The timestamp of this QueryResult. Return type: str
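A short, illustrative read of a QueryResult; the 'FINISHED' status string and the ISO timestamp are assumptions, not documented values.

    from sdi.models.query_result import QueryResult

    res = QueryResult(status='FINISHED',
                      timestamp='2023-05-04T10:15:00Z',
                      data=[])

    print(f'query {res.status} at {res.timestamp}, {len(res.data)} row(s)')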
sdi.models.response_all_data_query_execution_response module¶
SDI - Semantic Data Interconnect APIs
class ResponseAllDataQueryExecutionResponse(page=None, jobs=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'jobs': 'jobs', 'page': 'page'}¶
attribute_types = {'jobs': 'list[DataQueryExecutionResponse]', 'page': 'TokenPage'}¶
jobs¶
Gets the jobs of this ResponseAllDataQueryExecutionResponse.
Returns: The jobs of this ResponseAllDataQueryExecutionResponse. Return type: list[DataQueryExecutionResponse]
page¶
Gets the page of this ResponseAllDataQueryExecutionResponse.
Returns: The page of this ResponseAllDataQueryExecutionResponse. Return type: TokenPage
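A paging sketch, purely illustrative: it assumes the TokenPage held by page carries the token to send back (as page_value) to fetch the next batch of jobs.

    from sdi.models.response_all_data_query_execution_response import (
        ResponseAllDataQueryExecutionResponse,
    )

    resp = ResponseAllDataQueryExecutionResponse(jobs=[], page=None)

    for job in resp.jobs or []:   # DataQueryExecutionResponse entries
        print(job)
    # resp.page (a TokenPage) would supply the token for the next page, if any.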
sdi.models.response_all_data_sql_query module¶
SDI - Semantic Data Interconnect APIs
class ResponseAllDataSQLQuery(page=None, queries=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'page': 'page', 'queries': 'queries'}¶
attribute_types = {'page': 'TokenPage', 'queries': 'list[GetAllSQLQueriesData]'}¶
page¶
Gets the page of this ResponseAllDataSQLQuery.
Returns: The page of this ResponseAllDataSQLQuery. Return type: TokenPage
queries¶
Gets the queries of this ResponseAllDataSQLQuery.
Returns: The queries of this ResponseAllDataSQLQuery. Return type: list[GetAllSQLQueriesData]
sdi.models.response_all_ontologies module¶
SDI - Semantic Data Interconnect APIs
class ResponseAllOntologies(ontologies=None, page=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'ontologies': 'ontologies', 'page': 'page'}¶
attribute_types = {'ontologies': 'list[OntologyMetadata]', 'page': 'TokenPage'}¶
ontologies¶
Gets the ontologies of this ResponseAllOntologies.
Returns: The ontologies of this ResponseAllOntologies. Return type: list[OntologyMetadata]
page¶
Gets the page of this ResponseAllOntologies.
Returns: The page of this ResponseAllOntologies. Return type: TokenPage
sdi.models.schema_search_object module¶
SDI - Semantic Data Interconnect APIs
class SchemaSearchObject(data_tag=None, schema_name=None, category=None, aspect_name=None, asset_id=None, source_name=None, meta_data_tags=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
aspect_name¶
Gets the aspect_name of this SchemaSearchObject.
Returns: The aspect_name of this SchemaSearchObject. Return type: str
asset_id¶
Gets the asset_id of this SchemaSearchObject.
Returns: The asset_id of this SchemaSearchObject. Return type: str
attribute_map = {'aspect_name': 'aspectName', 'asset_id': 'assetId', 'category': 'category', 'data_tag': 'dataTag', 'meta_data_tags': 'metaDataTags', 'schema_name': 'schemaName', 'source_name': 'sourceName'}¶
attribute_types = {'aspect_name': 'str', 'asset_id': 'str', 'category': 'str', 'data_tag': 'str', 'meta_data_tags': 'list[str]', 'schema_name': 'str', 'source_name': 'str'}¶
category¶
Gets the category of this SchemaSearchObject.
Returns: The category of this SchemaSearchObject. Return type: str
data_tag¶
Gets the data_tag of this SchemaSearchObject.
Returns: The data_tag of this SchemaSearchObject. Return type: str
meta_data_tags¶
Gets the meta_data_tags of this SchemaSearchObject. metaDataTags can be defined while creating a data registry.
Returns: The meta_data_tags of this SchemaSearchObject. Return type: list[str]
schema_name¶
Gets the schema_name of this SchemaSearchObject.
Returns: The schema_name of this SchemaSearchObject. Return type: str
source_name¶
Gets the source_name of this SchemaSearchObject.
Returns: The source_name of this SchemaSearchObject. Return type: str
sdi.models.schema_search_request module¶
SDI - Semantic Data Interconnect APIs
class SchemaSearchRequest(schemas=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'schemas': 'schemas'}¶
attribute_types = {'schemas': 'list[SchemaSearchObject]'}¶
schemas¶
Gets the schemas of this SchemaSearchRequest.
Returns: The schemas of this SchemaSearchRequest. Return type: list[SchemaSearchObject]
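A hedged sketch of assembling a Schema Registry search from the two models above; the field values are made up, and any of the documented criteria (source name, data tag, schema name, and so on) may be combined.

    from sdi.models.schema_search_object import SchemaSearchObject
    from sdi.models.schema_search_request import SchemaSearchRequest

    criteria = SchemaSearchObject(source_name='plant01', data_tag='sensors')
    request = SchemaSearchRequest(schemas=[criteria])

    print(len(request.schemas))  # 1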
sdi.models.sdi_file_upload_response module¶
SDI - Semantic Data Interconnect APIs
class SdiFileUploadResponse(file_path=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'file_path': 'filePath'}¶
attribute_types = {'file_path': 'str'}¶
file_path¶
Gets the file_path of this SdiFileUploadResponse.
Returns: The file_path of this SdiFileUploadResponse. Return type: str
sdi.models.sdi_ingest_data module¶
SDI - Semantic Data Interconnect APIs
class SDIIngestData(data_tag=None, file_path=None, root_tag=None, source_name=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'data_tag': 'dataTag', 'file_path': 'filePath', 'root_tag': 'rootTag', 'source_name': 'sourceName'}¶
attribute_types = {'data_tag': 'str', 'file_path': 'str', 'root_tag': 'str', 'source_name': 'str'}¶
data_tag¶
Gets the data_tag of this SDIIngestData.
Returns: The data_tag of this SDIIngestData. Return type: str
file_path¶
Gets the file_path of this SDIIngestData.
Returns: The file_path of this SDIIngestData. Return type: str
root_tag¶
Gets the root_tag of this SDIIngestData.
Returns: The root_tag of this SDIIngestData. Return type: str
source_name¶
Gets the source_name of this SDIIngestData.
Returns: The source_name of this SDIIngestData. Return type: str
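For illustration, a sketch that feeds the file_path from an upload response (SdiFileUploadResponse, above) into an ingest payload; the path and tags are made up, and the camelCase body shape follows attribute_map.

    from sdi.models.sdi_file_upload_response import SdiFileUploadResponse
    from sdi.models.sdi_ingest_data import SDIIngestData

    upload = SdiFileUploadResponse(file_path='sdi/plant01/readings.csv')
    ingest = SDIIngestData(data_tag='sensors', file_path=upload.file_path,
                           root_tag='readings', source_name='plant01')

    body = {SDIIngestData.attribute_map[a]: getattr(ingest, a)
            for a in SDIIngestData.attribute_types}
    # {'dataTag': 'sensors', 'filePath': 'sdi/plant01/readings.csv',
    #  'rootTag': 'readings', 'sourceName': 'plant01'}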
sdi.models.sdi_job_status_response module¶
SDI - Semantic Data Interconnect APIs
class SdiJobStatusResponse(job_id=None, started_date=None, finished_date=None, message=None, file_name=None, status=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'file_name': 'fileName', 'finished_date': 'finishedDate', 'job_id': 'jobId', 'message': 'message', 'started_date': 'startedDate', 'status': 'status'}¶
attribute_types = {'file_name': 'str', 'finished_date': 'str', 'job_id': 'str', 'message': 'str', 'started_date': 'str', 'status': 'str'}¶
file_name¶
Gets the file_name of this SdiJobStatusResponse.
Returns: The file_name of this SdiJobStatusResponse. Return type: str
finished_date¶
Gets the finished_date of this SdiJobStatusResponse.
Returns: The finished_date of this SdiJobStatusResponse. Return type: str
job_id¶
Gets the job_id of this SdiJobStatusResponse.
Returns: The job_id of this SdiJobStatusResponse. Return type: str
message¶
Gets the message of this SdiJobStatusResponse.
Returns: The message of this SdiJobStatusResponse. Return type: str
started_date¶
Gets the started_date of this SdiJobStatusResponse.
Returns: The started_date of this SdiJobStatusResponse. Return type: str
status¶
Gets the status of this SdiJobStatusResponse.
Returns: The status of this SdiJobStatusResponse. Return type: str
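An illustrative status check on an ingest job; the 'FINISHED' status value is an assumption and every field value here is made up.

    from sdi.models.sdi_job_status_response import SdiJobStatusResponse

    job = SdiJobStatusResponse(job_id='42', file_name='readings.csv',
                               started_date='2023-05-04T10:00:00Z',
                               finished_date='2023-05-04T10:02:11Z',
                               status='FINISHED', message='ok')

    if job.status == 'FINISHED':
        print(f'{job.file_name} ingested by {job.finished_date}')
    else:
        print(f'job {job.job_id} is {job.status}: {job.message}')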
sdi.models.sdi_schema_property module¶
SDI - Semantic Data Interconnect APIs
class SDISchemaProperty(data_type=None, custom_types=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'custom_types': 'customTypes', 'data_type': 'dataType'}¶
attribute_types = {'custom_types': 'list[str]', 'data_type': 'str'}¶
custom_types¶
Gets the custom_types of this SDISchemaProperty.
Returns: The custom_types of this SDISchemaProperty. Return type: list[str]
data_type¶
Gets the data_type of this SDISchemaProperty.
Returns: The data_type of this SDISchemaProperty. Return type: str
sdi.models.sdi_schema_registry module¶
SDI - Semantic Data Interconnect APIs
class SDISchemaRegistry(created_date=None, id=None, updated_date=None, original_file_names=None, schema=None, schema_name=None, registry_id=None, category=None)[source]¶
Bases: object
Attributes:
    attribute_types (dict): The key is attribute name and the value is attribute type.
    attribute_map (dict): The key is attribute name and the value is json key in definition.
attribute_map = {'category': 'category', 'created_date': 'createdDate', 'id': 'id', 'original_file_names': 'originalFileNames', 'registry_id': 'registryId', 'schema': 'schema', 'schema_name': 'schemaName', 'updated_date': 'updatedDate'}¶
attribute_types = {'category': 'str', 'created_date': 'str', 'id': 'str', 'original_file_names': 'list[str]', 'registry_id': 'str', 'schema': 'dict(str, ListOfSchemaProperties)', 'schema_name': 'str', 'updated_date': 'str'}¶
category¶
Gets the category of this SDISchemaRegistry.
Returns: The category of this SDISchemaRegistry. Return type: str
created_date¶
Gets the created_date of this SDISchemaRegistry.
Returns: The created_date of this SDISchemaRegistry. Return type: str
id¶
Gets the id of this SDISchemaRegistry.
Returns: The id of this SDISchemaRegistry. Return type: str
original_file_names¶
Gets the original_file_names of this SDISchemaRegistry.
Returns: The original_file_names of this SDISchemaRegistry. Return type: list[str]
registry_id¶
Gets the registry_id of this SDISchemaRegistry.
Returns: The registry_id of this SDISchemaRegistry. Return type: str
schema¶
Gets the schema of this SDISchemaRegistry. This is a loosely defined schema string containing the property name, one or more data types for the given property, and a list of regex patterns for a type.
Returns: The schema of this SDISchemaRegistry. Return type: dict(str, ListOfSchemaProperties)
schema_name¶
Gets the schema_name of this SDISchemaRegistry.
Returns: The schema_name of this SDISchemaRegistry. Return type: str
updated_date¶
Gets the updated_date of this SDISchemaRegistry.
Returns: The updated_date of this SDISchemaRegistry. Return type: str
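A sketch of walking a generated schema entry; the schema mapping content is left empty here because the exact ListOfSchemaProperties shape is documented elsewhere in this package, and the name and category values are made up.

    from sdi.models.sdi_schema_registry import SDISchemaRegistry

    entry = SDISchemaRegistry(schema_name='sensorSchema', category='IOT',
                              schema={})  # property name -> ListOfSchemaProperties

    print(entry.schema_name, entry.category)
    for prop_name, props in (entry.schema or {}).items():
        print(prop_name, props)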
sdi.models.suggest_patterns_post module¶
SDI - Semantic Data Interconnect APIs
-
class SuggestPatternsPostRequest(sample_values=None, test_values=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'sample_values': 'sample_values', 'test_values': 'test_values'}¶
-
attribute_types
= {'sample_values': 'list[str]', 'test_values': 'list[str]'}¶
-
sample_values
¶ Gets the sample_values of this SuggestPatternsPostRequest.
Returns: The sample_values of this SuggestPatternsPostRequest. Return type: list[str]
-
test_values
¶ Gets the test_values of this SuggestPatternsPostRequest.
Returns: The test_values of this SuggestPatternsPostRequest. Return type: list[str]
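A minimal usage sketch, assuming the sdi.models package is importable; the sample and test values are invented, and the body-building pattern simply exploits attribute_map as documented above:

    from sdi.models.suggest_patterns_post import SuggestPatternsPostRequest

    # sample_values seed the pattern suggestion; test_values are candidates
    # to check against the suggested patterns.
    request = SuggestPatternsPostRequest(
        sample_values=["2021-01-15", "2021-02-28"],
        test_values=["2021-03-31"],
    )

    # attribute_map translates Python attribute names to the JSON keys the
    # API expects, so a request body can be assembled generically:
    body = {
        json_key: getattr(request, attr)
        for attr, json_key in request.attribute_map.items()
        if getattr(request, attr) is not None
    }
    print(body)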
sdi.models.token_page module¶
SDI - Semantic Data Interconnect APIs
-
class TokenPage(next_token=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'next_token': 'nextToken'}¶
-
attribute_types
= {'next_token': 'str'}¶
-
next_token
¶ Gets the next_token of this TokenPage. Opaque token to the next page. Can be used in the query parameter ‘pageToken’ to request the next page. The property is only present if there is a next page.
Returns: The next_token of this TokenPage. Return type: str
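Since next_token is the paging handle, a consumption loop typically looks like the sketch below; list_page is a hypothetical stand-in for any SDI list operation that returns its results together with a TokenPage:

    def iterate_pages(list_page):
        """Yield results across all pages (sketch; list_page is hypothetical)."""
        page_token = None
        while True:
            results, token_page = list_page(page_token=page_token)
            yield from results
            # next_token is only present when a further page exists; it is
            # passed back as the 'pageToken' query parameter.
            if token_page is None or not token_page.next_token:
                return
            page_token = token_page.next_token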
sdi.models.update_data_lake_request module¶
SDI - Semantic Data Interconnect APIs
-
class UpdateDataLakeRequest(base_path=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'base_path': 'basePath'}¶
-
attribute_types
= {'base_path': 'str'}¶
-
base_path
¶ Gets the base_path of this UpdateDataLakeRequest.
Returns: The base_path of this UpdateDataLakeRequest. Return type: str
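A minimal construction sketch, assuming the sdi.models package is importable; the base path value is illustrative:

    from sdi.models.update_data_lake_request import UpdateDataLakeRequest

    # Update the registered data lake's base path (illustrative value).
    request = UpdateDataLakeRequest(base_path="myTenant/sdi")
    # attribute_map records the JSON key the REST endpoint expects:
    print(request.attribute_map["base_path"])  # -> 'basePath'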
sdi.models.update_data_registry_request module¶
SDI - Semantic Data Interconnect APIs
-
class UpdateDataRegistryRequest(default_root_tag=None, file_pattern=None, file_upload_strategy=None, meta_data_tags=None, xml_process_rules=None, schema_frozen=None)[source]¶ Bases: object
- Attributes:
- attribute_types (dict): The key is the attribute name and the value is the attribute type.
- attribute_map (dict): The key is the attribute name and the value is the JSON key in the definition.
-
attribute_map
= {'default_root_tag': 'defaultRootTag', 'file_pattern': 'filePattern', 'file_upload_strategy': 'fileUploadStrategy', 'meta_data_tags': 'metaDataTags', 'schema_frozen': 'schemaFrozen', 'xml_process_rules': 'xmlProcessRules'}¶
-
attribute_types
= {'default_root_tag': 'str', 'file_pattern': 'str', 'file_upload_strategy': 'str', 'meta_data_tags': 'list[str]', 'schema_frozen': 'bool', 'xml_process_rules': 'list[str]'}¶
-
default_root_tag
¶ Gets the default_root_tag of this UpdateDataRegistryRequest.
Returns: The default_root_tag of this UpdateDataRegistryRequest. Return type: str
-
file_pattern
¶ Gets the file_pattern of this UpdateDataRegistryRequest.
Returns: The file_pattern of this UpdateDataRegistryRequest. Return type: str
-
file_upload_strategy
¶ Gets the file_upload_strategy of this UpdateDataRegistryRequest.
Returns: The file_upload_strategy of this UpdateDataRegistryRequest. Return type: str
-
meta_data_tags
¶ Gets the meta_data_tags of this UpdateDataRegistryRequest.
Returns: The meta_data_tags of this UpdateDataRegistryRequest. Return type: list[str]
-
schema_frozen
¶ Gets the schema_frozen of this UpdateDataRegistryRequest. This property can be set to true after the initial schema has been created, so that the existing schema is reused for newly ingested data.
Returns: The schema_frozen of this UpdateDataRegistryRequest. Return type: bool
-
xml_process_rules
¶ Gets the xml_process_rules of this UpdateDataRegistryRequest.
Returns: The xml_process_rules of this UpdateDataRegistryRequest. Return type: list[str]
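A sketch tying the registry options together, assuming the sdi.models package is importable; all values are illustrative, and the exact string accepted for file_upload_strategy (the append/replace strategies described earlier) is an assumption:

    from sdi.models.update_data_registry_request import UpdateDataRegistryRequest

    # 'append' updates the existing schema and data on each ingest (value
    # assumed); schema_frozen=True reuses the initial schema for new data.
    request = UpdateDataRegistryRequest(
        file_pattern=r".*\.csv",
        file_upload_strategy="append",
        meta_data_tags=["plant-a", "hourly"],
        schema_frozen=True,
    )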
Module contents¶
SDI - Semantic Data Interconnect APIs