llama serve jupyter

Launch a Jupyter Notebook server.

usage: llama serve jupyter [-h] [-l LOGFILE]
                           [-v {debug,info,warning,error,critical,none}]
                           [-p PORT] [--notebook-dir NOTEBOOK_DIR] [--ip IP]
                           [-a [DOMAIN-NAME]] [-w] [-k]

Named Arguments

-p, --port

The port to listen on. (default: 8080)

Default: “8080”


--notebook-dir

Where to store Jupyter notebook files. By default, they will be saved in the LLAMA data directory in /root/.local/share/llama.

Default: “/root/.local/share/llama”


--ip

The IP address the server will listen on.

Default: “”

-a, --alert-maintainers

If provided, use llama.files.slack.utils.alert_maintainers to message LLAMA maintainers with a list of active Jupyter notebooks and their login tokens. The IP address specified in --ip and the port specified in --port will be replaced with DOMAIN-NAME if provided; if you want to manually specify an output port, append it to the domain name with a colon separator (as you would in a normal URL), e.g. multimessenger.science:8080. If DOMAIN-NAME carries no port, --port will be reused. This allows Slack users to access the notebook at the provided URL.

The posted tokens can be used to log in to this Jupyter notebook (and any others running on this server/container). BE CAREFUL WITH THESE TOKENS! They provide full access to the Jupyter notebook; you should probably only use this in production, make sure not to share those tokens, and regenerate them regularly.

Note also that the script will fail if you try to alert maintainers without providing valid Slack credentials (see llama.files.slack.utils).

Default: “”
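The replacement rule described above can be sketched as a small Python helper. This is an illustration of the documented behavior, not llama's actual code; the function name and URL scheme are assumptions:

```python
from typing import Optional


def advertised_url(ip: str, port: int, domain: Optional[str] = None) -> str:
    """Build the URL advertised to maintainers, per the --alert-maintainers rules.

    If ``domain`` is given, it replaces ``ip``. A ``host:port`` domain keeps
    its own port; otherwise the server's ``port`` is reused.
    """
    if domain is None:
        return f"http://{ip}:{port}"
    host, sep, domain_port = domain.partition(":")
    # Reuse the server port only when the domain carries no explicit port.
    return f"http://{host}:{domain_port if sep else port}"


# advertised_url("0.0.0.0", 8080, "multimessenger.science")
#   -> "http://multimessenger.science:8080"
```

With an explicit port in the domain, e.g. `advertised_url("0.0.0.0", 8080, "multimessenger.science:9000")`, the domain's own port wins.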

-w, --writeable-docs

If provided, set any documentation notebooks (like README.ipynb) to writeable. Use this for development mode and then commit the notebook with llama dev upload -g README.ipynb (from the notebook directory) and update the remote URL for llama.serve.jupyter.README with the printed URL.

Default: False

-k, --keep-docs

If provided, don’t bother downloading a new README if one is stored locally; in other words, keep existing documentation if present so as not to accidentally delete development work. As with --writeable-docs, this should probably only be used in development mode.

Default: False

logging settings

-l, --logfile

File where logs should be written. By default, all logging produced by llama run goes both to an archival logfile shared by all instances of the process and to STDERR. The archival logfile can be overridden with this argument. If you specify /dev/null or a path that resolves to the same, logfile output will be suppressed automatically. Logs written to the logfile are always at maximum verbosity, i.e. DEBUG. (default: /root/.local/share/llama/logs/jupyter.log)

Default: “/root/.local/share/llama/logs/jupyter.log”

-v, --verbosity

Possible choices: debug, info, warning, error, critical, none

Set the verbosity level at which to log to STDOUT; the --logfile will ALWAYS receive maximum verbosity logs (unless it is completely suppressed by writing to /dev/null). Available choices correspond to logging severity levels from the logging library, with the addition of none if you want to completely suppress logging to standard out. (default: info)

Default: “info”
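The split described above (a logfile always at DEBUG, a stream handler gated by --verbosity, and none suppressing the stream entirely) can be sketched with the standard logging library. The logger name and handler setup below are assumptions for illustration, not llama's actual implementation:

```python
import logging

# The -v choices mapped onto logging severity levels; "none" sits above
# CRITICAL so no record ever passes the stream handler.
CHOICES = {
    "debug": logging.DEBUG,
    "info": logging.INFO,
    "warning": logging.WARNING,
    "error": logging.ERROR,
    "critical": logging.CRITICAL,
    "none": logging.CRITICAL + 1,
}


def make_logger(verbosity: str, logfile: str) -> logging.Logger:
    logger = logging.getLogger("jupyter-sketch")
    logger.setLevel(logging.DEBUG)           # let every record reach handlers
    file_handler = logging.FileHandler(logfile)
    file_handler.setLevel(logging.DEBUG)     # logfile is always max verbosity
    stream_handler = logging.StreamHandler()
    stream_handler.setLevel(CHOICES[verbosity])
    logger.addHandler(file_handler)
    logger.addHandler(stream_handler)
    return logger
```

With `verbosity="none"`, DEBUG through CRITICAL records are still written to the logfile but nothing reaches the stream, matching the behavior documented above.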