
Adding Applications with the Python Library

Adding a new module

Adding an application to Nuvla is straightforward: you reference an existing Docker image and provide some simple metadata for the application.

For example, the Jupyter Notebook application was added with the following Python script.

import os

#
# Adds the GNSS Jupyter Notebook component (module) to Nuvla.
#
# The following environment variables can/must be defined:
#
# NUVLA_ENDPOINT: endpoint of the Nuvla server, defaults to localhost
# NUVLA_USERNAME: username to access Nuvla
# NUVLA_PASSWORD: password to access Nuvla
#

from nuvla.api import Api as nuvla_Api

nuvla_api = nuvla_Api(os.environ.get('NUVLA_ENDPOINT', 'https://localhost'), insecure=True)

nuvla_api.login_password(os.environ['NUVLA_USERNAME'], os.environ['NUVLA_PASSWORD'])

#
# Add component for GNSS Python application
#

gnss_comp = {"author": "esa",
             "commit": "initial commit",
             "architectures": ["amd64"],
             "image": {"repository": "sixsq",
                       "image-name": "gssc-jupyter",
                       "tag": "latest"},
             "output-parameters": [{"name": "jupyter-token", "description": "jupyter authentication token"}],
             "ports": [{"protocol": "tcp",
                        "target-port": 8888}],
             "urls": [["jupyter", "http://${hostname}:${tcp.8888}/?token=${jupyter-token}"]],
             }

gnss_module = {"name": "GNSS Jupyter Notebook",
               "description": "Jupyter notebook application integrated with Nuvla data management",
               "logo-url": "https://upload.wikimedia.org/wikipedia/commons/thumb/8/80/ESA_logo.svg/320px-ESA_logo.svg.png",
               "subtype": "component",
               "path": "gssc-jupyter",
               "parent-path": "",
               "data-accept-content-types": ["text/plain", "application/octet-stream"],
               "content": gnss_comp}

gnss_module_response = nuvla_api.add('module', gnss_module)
gnss_module_id = gnss_module_response.data['resource-id']
print("module id: %s\n" % gnss_module_id)

The first important field to point out is the image definition. This looks like the following:

"image": {"repository": "sixsq",
          "image-name": "gssc-jupyter",
          "tag": "latest"},

The repository, image-name, and tag are exactly the values that you would use to start the image with Docker. If the image is an “official” image from Docker, then simply leave out the “repository” field. The “repository” can also contain the server address, if you are not using the Docker Hub.
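To make the three cases concrete, here are illustrative image definitions (the image names and registry address are hypothetical, except for the gssc-jupyter example from the script above):

```python
# Docker Hub image under an organization or user account:
org_image = {"repository": "sixsq",
             "image-name": "gssc-jupyter",
             "tag": "latest"}

# "Official" Docker Hub image: leave out the "repository" field entirely.
official_image = {"image-name": "nginx",
                  "tag": "1.25"}

# Image hosted on another registry: include the server address
# in the "repository" field.
registry_image = {"repository": "registry.example.com/myteam",
                  "image-name": "gssc-jupyter",
                  "tag": "latest"}
```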

WARNING: Only open repositories are supported at the moment. You cannot use repositories that require authentication.

The second important field is “output-parameters”. This lists the values that will be set by the container itself. A common use of output parameters is to provide authentication tokens or passwords, as is the case for the Jupyter Notebook.
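As an illustration, the output parameters of a running Jupyter deployment might look like the following (the values here are hypothetical), and a client could look up a parameter by name:

```python
# Hypothetical output parameters for a running deployment; the names
# match the component definition above.
params = [{"name": "jupyter-token", "value": "abc123"},
          {"name": "tcp.8888", "value": "32768"}]

def parameter_value(params, name):
    # Return the value of the first parameter with the given name.
    return next(p["value"] for p in params if p["name"] == name)

print(parameter_value(params, "jupyter-token"))  # → abc123
```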

The third important field is “ports”. This field lists the ports that are exposed by the container. These ports will be mapped automatically to ephemeral ports. The actual port mappings can be recovered from the output parameters. For example, a port definition like the following:

"ports": [{"protocol": "tcp",
           "target-port": 8888}] 

will produce the output parameter “tcp.8888” which will contain the actual port.
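The naming convention for these generated parameters can be sketched as follows (a minimal illustration, not part of the Nuvla API):

```python
def output_parameter_name(port):
    # Nuvla names the generated output parameter "<protocol>.<target-port>",
    # e.g. "tcp.8888" for the port definition above.
    return "%s.%d" % (port["protocol"], port["target-port"])

print(output_parameter_name({"protocol": "tcp", "target-port": 8888}))  # → tcp.8888
```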

Fourth, the application developer can provide a list of URL patterns that is used by the UI (or other clients) to construct URLs to the application services. For the Jupyter Notebook, the value is:

"urls": [["jupyter", "http://${hostname}:${tcp.8888}/?token=${jupyter-token}"]],

NOTE: Note the ${...} syntax for referencing the values of output parameters. Using output parameter values allows the developer to specify the URLs independently of a particular deployment.
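The substitution itself is performed by Nuvla (or its clients) at deployment time; the following sketch, with hypothetical parameter values, shows the idea:

```python
import re

def expand_url(pattern, parameters):
    # Replace each ${name} placeholder with the value of the
    # corresponding deployment/output parameter.
    return re.sub(r"\$\{([^}]+)\}", lambda m: parameters[m.group(1)], pattern)

url = expand_url("http://${hostname}:${tcp.8888}/?token=${jupyter-token}",
                 {"hostname": "203.0.113.10",
                  "tcp.8888": "32768",
                  "jupyter-token": "abc123"})
print(url)  # → http://203.0.113.10:32768/?token=abc123
```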

Last, the application developer can provide a list of accepted content types for the application:

"data-accept-content-types": ["text/plain", "application/octet-stream"]

This allows the UI (and other clients) to associate appropriate data formats with a given application.
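For example, a client could use this field to decide whether a given data object can be fed to the application (a minimal sketch, not a Nuvla API call):

```python
def accepts(module, content_type):
    # True if the module declares that it accepts the given content type.
    return content_type in module.get("data-accept-content-types", [])

module = {"data-accept-content-types": ["text/plain", "application/octet-stream"]}
print(accepts(module, "text/plain"))  # → True
print(accepts(module, "image/png"))   # → False
```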