
# coding: utf-8

"""
    SDI - Semantic Data Interconnect APIs

    The Semantic Data Interconnect (SDI) is a collection of APIs that allows
    the user to unlock the potential of disparate big data by connecting
    external data. The SDI can infer the schemas of data based on
    schema-on-read, create a semantic model, and perform big data semantic
    queries. It seamlessly connects to MindSphere's Integrated Data Lake
    (IDL), but it can work independently as well. There are two mechanisms
    that can be used to upload files so that SDI can generate schemas and
    make data ready for query. The SDI operations are divided into the
    following groups:

    **Data Registration for SDI** This set of APIs is used to organize the
    incoming data. When configuring a Data Registry, you have the option to
    update your data based on a replace or an append strategy. If, in your
    use case, the schema may change and the incoming data files are
    completely different every time, then replace is a good strategy. The
    replace strategy replaces the existing schema and data during each data
    ingest operation, whereas the append strategy updates the existing
    schema and data during each data ingest operation.

    **Custom Data Type for SDI** By default, the SDI identifies basic data
    types for each property, such as String, Integer, Float, Date, etc. The
    user can use this set of APIs to create their own custom data types. The
    SDI also provides an API under this category to suggest a data type
    based on user-provided sample test values.

    **Data Lake for SDI** The SDI provides endpoints to manage the
    customer's data lake registration based on tenant id, cloud provider and
    data lake type. This set of REST endpoints allows customers to create,
    update and retrieve the base path for their data lake. IDL customers
    need to create an SDI folder directly under the root folder. Any file
    uploaded to this folder is automatically picked up by SDI for processing
    via IDL notification.

    **Data Ingest for SDI** This set of APIs allows users to upload files,
    start an ingest job for uploaded files, find the status of an ingested
    job, or retrieve all job statuses.

    **Schema Registry for SDI** The SDI provides a way to find the generated
    schemas under this category. Users can find an SDI-generated schema for
    uploaded files based on source name, data tag or schema name.

    **Data Query for SDI** Allows querying based on the extracted schemas.
    Important supported APIs are:

    * Query interface for querying semantically correlated and transformed
      data.
    * Stores and executes data queries.
    * Uses a semantic model to translate model-based queries to physical
      queries.

    **Semantic Model for SDI** Allows the user to create semantic model
    ontologies based on one or more extracted schemas. The important
    functionalities achieved with these APIs are:

    * Contextual correlation of data from different systems.
    * Inference and recommendation of mappings between different schemas.
    * Import and storage of semantic models.
"""


from __future__ import absolute_import

from mindsphere_core.mindsphere_core import logger
from mindsphere_core import mindsphere_core, exceptions, token_service
from mindsphere_core.token_service import init_credentials


class DataIngestClient:
    __base_path__ = '/api/sdi/v4'
    __model_package__ = __name__.split('.')[0]

    def __init__(self, rest_client_config=None, mindsphere_credentials=None):
        self.rest_client_config = rest_client_config
        self.mindsphere_credentials = init_credentials(mindsphere_credentials)
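    # Usage sketch (illustrative, not part of the generated client): the
    # client is constructed from a rest-client configuration and MindSphere
    # credentials; `config` and `credentials` below are assumed to be built
    # elsewhere by the hosting application.
    #
    #     client = DataIngestClient(
    #         rest_client_config=config,
    #         mindsphere_credentials=credentials,
    #     )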
    def create_data_upload(self, file):
        """Upload the given file for the current tenant.

        Initiates the file upload process for the provided file under the
        current tenant. The input file must be less than 100 MB in size.

        :param DataUploadPostRequest file: It contains the below parameters --> |br|
            ( file* - Select the file to upload for SDI processing )
        :return: SdiFileUploadResponse
        """
        logger.info('DataIngestClient.create_data_upload() invoked.')

        if file is None:
            raise exceptions.MindsphereClientError(
                '`file` is not passed when calling `create_data_upload`'
            )

        end_point_url = '/dataUpload'
        token = token_service.fetch_token(
            self.rest_client_config, self.mindsphere_credentials
        )
        api_url = mindsphere_core.build_url(
            self.__base_path__, end_point_url, self.rest_client_config
        )
        headers = {
            'Accept': 'application/json',
            'Authorization': 'Bearer ' + str(token),
        }
        query_params = {}
        form_params, local_var_files, body_params = {}, {}, None
        local_var_files['file'] = file

        logger.info('DataIngestClient.create_data_upload() --> Proceeding for API Invoker.')

        return mindsphere_core.invoke_service(
            self.rest_client_config, api_url, headers, 'POST', query_params,
            form_params, body_params, local_var_files,
            'SdiFileUploadResponse', self.__model_package__
        )
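    # Usage sketch (hypothetical file name; assumes a constructed `client`):
    # upload a local CSV so SDI can ingest it later. The argument ends up in
    # `local_var_files['file']`, so an open binary file handle is passed
    # here; files must stay under the documented 100 MB limit.
    #
    #     with open('sample.csv', 'rb') as fh:
    #         upload_response = client.create_data_upload(file=fh)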
    def get_ingest_job_status(self, request_object):
        """Retrieve a list of jobIds.

        Returns a list of jobIds for ingested jobs; each jobId can be used to
        fetch a detailed status via `get_ingest_job_status_id`.

        :param IngestJobStatusGetRequest request_object: It contains the below parameters --> |br|
            ( pageToken - Selects the next page. Value must be taken from the
            response body property 'page.nextToken'. If omitted, the first
            page is returned. )
        :return: ListOfJobIds
        """
        logger.info('DataIngestClient.get_ingest_job_status() invoked.')

        end_point_url = '/ingestJobStatus'
        token = token_service.fetch_token(
            self.rest_client_config, self.mindsphere_credentials
        )
        api_url = mindsphere_core.build_url(
            self.__base_path__, end_point_url, self.rest_client_config
        )
        headers = {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + str(token),
        }
        if request_object is not None:
            query_params = {'pageToken': request_object.page_token}
        else:
            query_params = {}
        form_params, local_var_files, body_params = {}, {}, None

        logger.info('DataIngestClient.get_ingest_job_status() --> Proceeding for API Invoker.')

        return mindsphere_core.invoke_service(
            self.rest_client_config, api_url, headers, 'GET', query_params,
            form_params, body_params, local_var_files,
            'ListOfJobIds', self.__model_package__
        )
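    # Usage sketch (assumes a constructed `client`; the request-object
    # constructor shown is an assumption about the generated model class):
    # pass no pageToken for the first page, then feed `page.nextToken` from
    # each response back in to page through the job ids.
    #
    #     request = IngestJobStatusGetRequest(page_token=None)  # first page
    #     job_ids = client.get_ingest_job_status(request)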
    def get_ingest_job_status_id(self, id):
        """Retrieve the status of a single job.

        Retrieves the job status based on the jobId for the current tenant.
        The jobId belongs to a data ingestion process started for the current
        tenant.

        :param IngestJobStatusIdGetRequest id: It contains the below parameters --> |br|
            ( id* - job ID )
        :return: SdiJobStatusResponse
        """
        logger.info('DataIngestClient.get_ingest_job_status_id() invoked.')

        if id is None:
            raise exceptions.MindsphereClientError(
                '`id` is not passed when calling `get_ingest_job_status_id`'
            )

        end_point_url = '/ingestJobStatus/{id}'.format(id=id)
        token = token_service.fetch_token(
            self.rest_client_config, self.mindsphere_credentials
        )
        api_url = mindsphere_core.build_url(
            self.__base_path__, end_point_url, self.rest_client_config
        )
        headers = {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + str(token),
        }
        query_params = {}
        form_params, local_var_files, body_params = {}, {}, None

        logger.info('DataIngestClient.get_ingest_job_status_id() --> Proceeding for API Invoker.')

        return mindsphere_core.invoke_service(
            self.rest_client_config, api_url, headers, 'GET', query_params,
            form_params, body_params, local_var_files,
            'SdiJobStatusResponse', self.__model_package__
        )
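    # Usage sketch (hypothetical job id; assumes a constructed `client`):
    # the id comes from a previous ingest job submission.
    #
    #     status = client.get_ingest_job_status_id(id='a1b2c3d4')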
    def create_ingest_jobs(self, request_object):
        """Ingest the provided file and start SDI's schema generation process.

        Initiates the data ingestion and starts the SDI schema generation
        process for the current tenant. This operation currently supports CSV
        and XML files. XML files require root element information as the
        entry point for this operation; it is provided either as the rootTag
        parameter to this operation or registered as part of the Data
        Registry API operations.

        There are two modes for data ingest:

        * Default: Performs the data ingest without any Data Registry. The
          service processes files with the default policy for schema
          generation. A schema generated this way cannot be versioned or
          changed with different files. This mode is used for quickly
          validating the generated schema.
        * Data Registry: Uses the Data Registry for the ingested file to
          generate the schema. This is the preferred mode, as it allows more
          validation against the Data Registry and can create multiple
          schemas based on the different domains created under the Data
          Registry. Using this mode, customers can combine schemas from
          different domains and query them or use them for analytical
          modelling. It works in combination with the Data Registry API.

        :param IngestJobsPostRequest request_object: It contains the below parameters --> |br|
            ( ingestData* - Specifies the file path and Data Registry
            information needed to initiate the data ingest process. The
            '{filePath}' is a required parameter and must be a valid file
            path used during the file upload operation. The '{dataTag}' and
            '{sourceName}' are a valid Data Registry source name and data
            tag. The '{rootTag}' is optional and applies to XML files. )
        :return: SdiJobStatusResponse
        """
        logger.info('DataIngestClient.create_ingest_jobs() invoked.')

        if request_object is None:
            raise exceptions.MindsphereClientError(
                '`request_object` is not passed when calling `create_ingest_jobs`'
            )

        end_point_url = '/ingestJobs'
        token = token_service.fetch_token(
            self.rest_client_config, self.mindsphere_credentials
        )
        api_url = mindsphere_core.build_url(
            self.__base_path__, end_point_url, self.rest_client_config
        )
        headers = {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + str(token),
        }
        query_params = {}
        form_params, local_var_files, body_params = {}, {}, request_object

        logger.info('DataIngestClient.create_ingest_jobs() --> Proceeding for API Invoker.')

        return mindsphere_core.invoke_service(
            self.rest_client_config, api_url, headers, 'POST', query_params,
            form_params, body_params, local_var_files,
            'SdiJobStatusResponse', self.__model_package__
        )
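    # Usage sketch (hypothetical values; assumes a constructed `client` and
    # that the generated model accepts the `ingestData` body shown): start an
    # ingest job for a previously uploaded file registered under a Data
    # Registry source/tag.
    #
    #     request = IngestJobsPostRequest(
    #         ingest_data={
    #             'filePath': 'uploads/sample.csv',  # path from the upload step
    #             'sourceName': 'mySource',          # Data Registry source name
    #             'dataTag': 'myTag',                # Data Registry data tag
    #         }
    #     )
    #     job = client.create_ingest_jobs(request)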