Managing Docker Models¶
Docker Image Overview and Constraints¶
In addition to supporting Python (2 and 3) and R models developed in Jupyter Notebook, PrL supports Docker models. Docker models have the advantage of being able to run any custom code, in any programming language, on whichever Linux distribution you prefer; all other model types run on the default AWS AMI Linux distribution. There are a few constraints related to the data ingestion and persistence functions in the Docker image setup. Specifically, a Docker image persisted in Model Management has these constraints:
- Data will be consumed from the /data/input folder.
- Data that is to be persisted, will be written in the /data/output folder.
These folders are set up automatically for execution by the Job Manager service, which retrieves the data for the job and persists it in /data/input, and which collects anything written to the /data/output folder and places it into the designated persistence service, such as Data Exchange, Predictive Learning Storage, or Integrated Data Lake (IDL).
About Creating a Docker Image to Use in Predictive Learning¶
If you want to create your own Docker image to hold your code or model, you need, at a very minimum, a Dockerfile. Usually, you 'inherit' from one of the public images that provides minimal support for your code or model. Here's a short example:
```dockerfile
ARG BASE_CONTAINER=python:3.9-slim-bullseye
FROM $BASE_CONTAINER
USER root

RUN ["mkdir", "/tmp/input"]
RUN ["mkdir", "/tmp/output"]
RUN chmod 777 -R /tmp

RUN ["mkdir", "/data"]
RUN ["mkdir", "/data/input"]
RUN ["mkdir", "/data/output"]
RUN chmod 777 -R /data

RUN ["mkdir", "/iot_data"]
RUN ["mkdir", "/iot_data/input"]
RUN ["mkdir", "/iot_data/output"]
RUN ["mkdir", "/iot_data/datasets"]
RUN chmod 777 -R /iot_data

RUN ["mkdir", "/prl_storage_data"]
RUN chmod 777 -R /prl_storage_data

RUN pip install awscli
RUN apt-get update
RUN apt-get install wget -y
RUN apt-get install curl -y
RUN apt-get install jq -y

COPY . .
ENTRYPOINT ["python3", "./my_python_script.py"]
```
The lines that create folders, `RUN ["mkdir", ...]`, set up the folders that Job Manager uses to copy in input files and to copy out results. If you do not pass any inputs or outputs to your container, these are not needed. In addition, if you want your Docker image to contain additional libraries, you can install them here using `RUN apt-get install ...`; such commands depend on the operating system of the base image and should be adapted accordingly. For detailed instructions on how to design your Dockerfile, please check the Dockerfile reference.
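As a sketch, the `my_python_script.py` referenced in the ENTRYPOINT above could look like the following. The processing step is hypothetical and stands in for your model's logic; only the `/data/input` and `/data/output` folder conventions come from the constraints described earlier.

```python
import os

def process(text: str) -> str:
    # Hypothetical processing step: replace with your model's real logic.
    return text.upper()

def main(input_dir: str = "/data/input", output_dir: str = "/data/output") -> None:
    """Read every file from input_dir, process it, and write results to output_dir."""
    os.makedirs(output_dir, exist_ok=True)
    for name in os.listdir(input_dir):
        src = os.path.join(input_dir, name)
        if not os.path.isfile(src):
            continue
        with open(src) as fin:
            result = process(fin.read())
        # Anything written to /data/output is collected by the Job Manager
        # and placed into the configured persistence service after the run.
        with open(os.path.join(output_dir, name), "w") as fout:
            fout.write(result)

# Inside the container the Job Manager populates /data/input before start-up;
# the folder check lets the same file be imported safely for local testing.
if __name__ == "__main__" and os.path.isdir("/data/input"):
    main()
```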
Persisting a Docker Image in Model Management¶
Follow these steps to create a new Docker model:
- Access the Manage Analytical Models Details page. The page opens in a new tab.
- Click the New Version button. The Create New Version pop-up window opens.
- From the Type drop-down list, select Docker Image.
This updates the dialog window, and displays these Docker-relevant controls:
- A Generate Token button
- A text field in which users must provide a complete Docker image repository and tag version.
Importing a Model¶
When importing an existing model, the process begins with the "Import a Model/Develop a New Model" pop-up window:
Follow these steps to import a model:
- Click "Add/Develop Model" on the Landing or Models list page. The Import/Develop a Model pop-up window displays.
- Make sure you are on the "Import a Model" tab.
- Enter a name and description (optional).
- Select an expiration date from the Calendar pop-up window.
- Select a model type from the Type drop-down list, or select "Browse" to locate and select a model file.
- Click "Save". Your imported model displays in the Models table.
Importing Docker Images¶
If you set the model Type to "Docker Image", the "Browse for Model File" button is replaced with the "Generate Token" button. This is required due to the way Docker images are imported into the application. In general, Docker images are developed locally, or they can be imported from an external source.
Clicking "Generate Token" provides you with temporary session credentials that allow you to upload the Docker image to our Docker registry. We require this in order to keep every use of your Docker image secure and performant. After you upload your Docker image, we hold a secure, private copy of it that can be accessed only by your tenant. In addition, we wrap the image with the metadata needed to execute it, to map any inputs and outputs to it, and to show the logs produced by its execution. Now let's proceed and click "Generate Token":
Already built Docker images tend to be large files, as they contain a complete operating system setup plus your own additions. This allows replicating environments that you have built and prepared, as well as executing them in most external environments, such as public or private clouds. Docker images pack everything into a hierarchical structure (layers) and contain the metadata needed to interact with the outside world and with the container engine. These images are built by a Docker-compliant engine following a set of instructions described in a file named Dockerfile. The engine compiles these instructions into a Docker image that can be distributed and instantiated as a Docker container by any compliant container engine. Building the image is usually done with a command-line interface, and we require the same kind of Docker-compliant command line to upload the image into our system. The instructions in the pop-up are therefore meant to be used with such a command line; they target the Docker CLI.
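For instance, assuming the example Dockerfile from earlier sits in the current directory, building the image locally looks like this; the image name and tag `my-model:v1.0.1` are arbitrary examples:

```shell
# Build the image from the Dockerfile in the current directory and tag it.
docker build -t my-model:v1.0.1 .

# List local images; the IMAGE ID column is needed later when tagging
# the image for upload.
docker images
```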
1. The URI shown here is for reference only: our system designates the URI where your Docker image must be uploaded. It is immutable, and attempting to change it will make our system unaware of where you have uploaded your Docker image.
2. Tag your local image with the instructions from this step. Replace the IMAGE_ID placeholder with your local image ID, which you can find under the "IMAGE_ID" column of the "docker images" command output.
3. Log in to our Docker registry using the command provided at this step. You can expand the text box containing the long session string to reveal the registry where your Docker image will be uploaded.
4. After a successful login at the previous step, you can start "pushing" (uploading) your local Docker image to our registry using the command from this step.
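Put together, the sequence from the pop-up resembles the following sketch. The angle-bracket values are placeholders: the actual repository URI, registry host, and session password are generated for you in the pop-up, so treat these commands only as an illustration of the shape of each step:

```shell
# Step 2: tag the local image (IMAGE_ID from `docker images`)
# with the repository URI designated by the system in step 1.
docker tag <IMAGE_ID> <designated-repository-uri>:v1.0.1

# Step 3: log in to the registry with the temporary session credentials.
docker login -u AWS -p <session-password> <registry-host>

# Step 4: push (upload) the tagged image to the registry.
docker push <designated-repository-uri>:v1.0.1
```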
Now you can close the pop-up.
Please note that your local image might have a "tag": the string that follows the URL, separated by a colon, as in "URL:tag". The tag is helpful to denote versions, for example "v1.0.1" or "final-v1.0". If the image you tagged at step 2 above includes such a tag, then, after closing the pop-up, paste the URL stated at step 1, including the tag, into the "Image Repository URI (with tag)" field.
Make sure that you click "Save" only after the push of your Docker image has finished.
Clicking "Save" instructs the system to verify the Docker image's existence in our registry and its validity.
Downloading a Docker Image¶
You can download a previously uploaded Docker image using steps similar to the ones above. Instead of pushing, you pull (download) a Docker image once you have a valid temporary session with our Docker registry. From the Models list, click the "..." button and use the "Download model" action menu. This does not download the actual image, but rather the access session in the form of a JSON file. From the JSON file you can extract the keys needed to log in to our registry.
Using the Docker CLI, you can then proceed with a similar "docker login -u AWS -p" command, supplying the password and registry from the downloaded JSON file.
The provided JSON file contains two types of authentication:

1. The first part, under the "credentials" key, contains "user" and "password"; these can be used with Docker-compliant CLIs to connect to our registry.
2. The second part, "providerCredentials", contains "accessKey", "secret", and "sessionToken" for the AWS CLI.

For the second option you can use the AWS CLI tools to interact with your image. This provides additional functionality beyond the Docker CLI, limited to AWS ECR (e.g., Docker image scanning); the list of capabilities can be explored directly from the AWS CLI once you have logged in to the registry.
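As an illustration, the downloaded credentials file can be read with a few lines of Python. The key names below follow the two-part structure described above; the function name and file path are hypothetical:

```python
import json

def read_credentials(path: str) -> dict:
    """Return both credential sets from the downloaded JSON session file."""
    with open(path) as f:
        doc = json.load(f)
    return {
        # For Docker-compliant CLIs: user/password for `docker login`.
        "docker": doc["credentials"],
        # For the AWS CLI: accessKey, secret, and sessionToken.
        "aws": doc["providerCredentials"],
    }
```

For example, `read_credentials("session.json")["docker"]["password"]` would yield the password to pass to `docker login -u AWS -p`.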
Except where otherwise noted, content on this site is licensed under the Development License Agreement.