Model Management Service¶
The Model Management Service for analytical solutions lets customers store single-file models, algorithms, scripts, Docker images, and training or validation data used for machine learning or AI tasks.
The Model Management Service is exposed as a REST API. Storing, retrieving, and updating models and their versions, along with the associated metadata, can be done with simple API calls.
To access this service, you need the respective roles listed in Model Management Service roles and scopes.
The Model Management Service stores and serves models for active users or applications that need to store (large) binaries. It supports both versioning and metadata.
Model Management supports structured information associated with models, such as:
- Model Metadata: Provides general model information
- Version Metadata: Provides traceability of model versions
- Version Payload: Provides traceability of the actual binary content of a model, which is always associated with version information
The model metadata stores general model information such as the name, author, creation date, and type.
The version metadata stores detailed information about the stored version: the version number, the type (such as Zeppelin, Jupyter, Protobuf, or Docker), in/out parameters, freeform parameters, build and/or run dependencies (libraries and their versions), and dependencies on other models. A dependency on another model exists, for example, if the other model produces a payload that is required as an input. This dependency is defined using the
The version payload stores the actual model content in a file, which can be of any type, including .json, .pmml, .py, .ipynb, or .pb.
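The split between model metadata and version metadata can be pictured as two small JSON documents. The field names below are illustrative only, not the service's actual schema:

```python
import json

# Illustrative model metadata: general, version-independent information.
model_metadata = {
    "name": "pump-anomaly-detector",
    "author": "Jane Doe",
    "creationDate": "2024-01-15T09:30:00Z",
    "type": "Zeppelin",
}

# Illustrative version metadata: traceability for one stored version.
version_metadata = {
    "number": "1.0.2",
    "type": "Jupyter",
    "inputParams": [{"name": "sensor_readings", "dataType": "double[]"}],
    "outputParams": [{"name": "anomaly_score", "dataType": "double"}],
    "freeformParams": {"learning_rate": 0.01},
    # Build/run dependencies: libraries and their versions.
    "dependencies": [{"name": "scikit-learn", "version": "1.4"}],
}

print(json.dumps(model_metadata, indent=2))
```

The version payload itself (the notebook, script, or binary) is uploaded separately and is always attached to exactly one such version record.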
The Model Management Service exposes its API for realizing the following tasks:
- Storing analytical model binaries and versioning info
- Managing versions of a model
- Downloading a model for examination or execution
- Defining dependencies needed to execute a model
- Defining parameters required to execute a model
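As a rough sketch of how such an API call could be assembled from a client: the snippet below builds (but does not send) a request that would register a new model. The base URL, endpoint path, and payload shape are assumptions for illustration; check them against the actual Model Management API reference before use.

```python
import json
import urllib.request

# Hypothetical gateway host and API path -- verify against the API reference.
BASE_URL = "https://gateway.example.mindsphere.io/api/modelmanagement/v3"


def build_create_model_request(token: str, metadata: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request registering a new model."""
    return urllib.request.Request(
        url=f"{BASE_URL}/models",
        data=json.dumps(metadata).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_create_model_request("dummy-token", {"name": "demo-model", "type": "Zeppelin"})
print(req.method, req.full_url)
```

Downloading a payload or managing versions would follow the same pattern with GET/PUT requests against version-specific endpoints.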
- Currently, the Model Management Service can only store one version payload (file) for a specific version of a model.
- The model needs to be kept in sync with asset modelling. If the asset modelling used in the algorithm/model changes, the user needs to update the model and retrain it. Otherwise, jobs using such models will start failing, as they look for a variable/aspect/asset by a name that no longer exists in the system.
- All tenants have 100 GB of storage allocated by default, irrespective of the offering.
A client has their own analytical models for training or forecasting. They use the Model Management Service to store these models and retrieve them for training. After training finishes, the model can output weights or other trained model binaries. Additional models or simple inference services can then load the weight files to perform predictions.
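The train-store-predict cycle described above can be sketched as follows. The in-memory registry stands in for the Model Management Service itself; all names are illustrative:

```python
# Minimal in-memory stand-in for the Model Management Service:
# versions are keyed by (model name, version number).
class ModelRegistry:
    def __init__(self):
        self._store = {}

    def upload(self, model_name: str, version: str, payload: bytes) -> None:
        self._store[(model_name, version)] = payload

    def download(self, model_name: str, version: str) -> bytes:
        return self._store[(model_name, version)]


registry = ModelRegistry()

# 1. The client stores the untrained model definition.
registry.upload("forecaster", "1.0.0", b"model-definition")

# 2. Training retrieves it and produces trained weights...
definition = registry.download("forecaster", "1.0.0")
weights = b"trained-weights-for-" + definition

# 3. ...which are stored as a new version for inference services to load.
registry.upload("forecaster", "1.1.0", weights)
print(registry.download("forecaster", "1.1.0"))
```

Note that, per the limitation above, each version can hold only one payload file, so trained weights are stored as a new version rather than alongside the definition.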
Except where otherwise noted, content on this site is licensed under the MindSphere Development License Agreement.