Managing Docker Models¶
Docker Image Overview and Constraints¶
In addition to supporting Python (2 and 3) and R models developed in Jupyter notebooks, PrL supports Docker models. Docker models have the advantage of being able to run any custom code, in any programming language, on whichever Linux distribution the user prefers; the default operating system for all other model types is the AWS AMI Linux distribution. There are a few constraints related to the data ingestion and persistence functions in the Docker image setup. Specifically, a Docker image persisted in Model Management has these constraints:
- Data is consumed from the /data/input folder.
- Data that is to be persisted must be written to the /data/output folder.
These folders are set up automatically for execution by the Job Manager service, which retrieves the data for the job and persists it in /data/input, then collects the data written to the /data/output folder and places it in the designated persistence service, such as Data Exchange, Predictive Learning Storage, or Integrated Data Lake (IDL).
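As an illustration of this contract, a minimal entrypoint script for the container might look like the following sketch. The function name and the line-count "processing" step are invented for the example; only the /data/input and /data/output paths come from the documentation.

```shell
# Hypothetical sketch of a container entrypoint honoring the PrL data
# contract: read files staged by Job Manager in /data/input, write
# results to /data/output for persistence.
process_inputs() {
  in_dir="${1:-/data/input}"
  out_dir="${2:-/data/output}"
  mkdir -p "$out_dir" || return 1
  for f in "$in_dir"/*; do
    [ -f "$f" ] || continue
    # Placeholder "processing": record the line count of each input file.
    wc -l < "$f" > "$out_dir/$(basename "$f").linecount"
  done
}
# In the container's ENTRYPOINT this would simply run: process_inputs
```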
About Creating a Docker Image to Use in MindSphere¶
When creating a Docker image to use in MindSphere, to ensure that the image contains the proper folders for Job Manager to copy input files into and to copy results out of, add the following lines close to the beginning of the Dockerfile:
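The original listing does not appear in this copy of the page. A minimal sketch of such lines, under the assumption that only the two Job Manager folders are required (the base image shown is only an example), could be:

```dockerfile
FROM alpine:3.19
# Create the folders Job Manager uses to stage input and collect output.
RUN mkdir -p /data/input /data/output
```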
Persisting a Docker Image in Model Management¶
Follow these steps to create a new Docker model:
- Access the Manage Analytical Models Details page. The page opens in a new tab.
- Click the New Version button. The Create New Version pop-up window opens.
- From the Type drop-down list, select Docker Image.
This updates the dialog window, and displays these Docker-relevant controls:
- A Generate Token button
- A text field in which users must provide a complete Docker image repository and tag version.
Docker Options Illustration¶
This image illustrates the Docker options in the dialog window.
Generate Token Option¶
Before a Docker image can be associated with a model, it must be brought into the Predictive Learning (PrL) service, which requires the user to push the Docker image into the PrL service repository.
These steps illustrate the order of events when using the Generate Token option, and must be completed within two hours of clicking the Generate Token button:
- The service generates a unique repository, into which the user can push the Docker image.
- The user creates a tag for the image, which serves as a reference to the specific uploaded version.
- The user logs in to the service's repository.
- The user pushes the tagged image with the docker push command.
Failure to complete the steps above within the two-hour window means the user must run through all the steps of the process again. Also, once the two hours have elapsed, no further updates can be made to the uploaded Docker image.
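The tag, login, and push steps above can be sketched as the following command sequence. The repository URL, image name, tag, and credentials shown here are placeholders; the actual values are displayed in the Generate Token dialog.

```shell
# Hypothetical values -- substitute the repository, tag, and token
# shown in the Generate Token dialog.
docker tag my-model:latest registry.example.com/generated-repo/my-model:v1
docker login registry.example.com
docker push registry.example.com/generated-repo/my-model:v1
```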
Once the token is generated, the Docker image (repository and tag) must be linked to the Model Management model within 24 hours; if this step is not performed within that period, the repository is automatically deleted.
The following illustrates the dialog window and the ready-to-copy instructions for the Generate Token button:
Once the image has been uploaded, the user can associate the Docker image with a Model Management entry by referencing the correct repository and tag.
The dialog window can be safely closed between the Docker image upload, and the creation of the Model Management entry.
There is no need to generate a Docker image if a valid repository and tag are already available; you can copy them directly into the Image repository text field.
Except where otherwise noted, content on this site is licensed under the Development License Agreement.