llama serve jupyter
Launch a Jupyter Notebook server. Specify the domain using the environment variable `LLAMA_DOMAIN` (default: localhost) and the port using `LLAMA_JUP_PORT` (default: 8080).
```
usage: llama serve jupyter [-h] [-l LOGFILE]
                           [-v {debug,info,warning,error,critical,none}]
                           [--notebook-dir NOTEBOOK_DIR] [--ip IP] [-a] [-w]
                           [-k] [--show-hidden]
```
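For example, a hypothetical invocation that overrides the defaults described above (the hostname, port, and notebook directory here are placeholders, not values from this documentation):

```shell
# Serve on jupyter.example.org:8888 instead of localhost:8080
export LLAMA_DOMAIN=jupyter.example.org
export LLAMA_JUP_PORT=8888

# Bind to loopback only and keep notebooks outside the default data directory
llama serve jupyter --ip 127.0.0.1 --notebook-dir ~/notebooks
```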
Named Arguments
- --notebook-dir
Where to store Jupyter notebook files. By default, they will be saved in the LLAMA data directory, /root/.local/share/llama.
Default: “/root/.local/share/llama”
- --ip
The IP address the server will listen on. (default: 0.0.0.0)
Default: “0.0.0.0”
- -a, --alert-maintainers
If provided, use `llama.com.slack.alert_maintainers` to message LLAMA maintainers with a list of active Jupyter notebooks and their login tokens. This allows Slack users to access the notebook at the provided URL. These tokens can be used to log in to this Jupyter notebook (and any others running on this server/container). BE CAREFUL WITH THESE TOKENS! They provide full access to the Jupyter notebook; you should probably only use this in production, and make sure not to share those tokens. You should also regenerate those tokens regularly. Note also that the script will fail if you try to alert maintainers without providing valid Slack credentials (see `llama.com.slack`).
Default: False
- -w, --writeable-docs
If provided, set any documentation notebooks (like README.ipynb) to writeable. Use this for development mode, then commit the notebook with `llama dev upload -g README.ipynb` (from the notebook directory) and update the remote URL for `llama.serve.jupyter.README` with the printed URL.
Default: False
- -k, --keep-docs
If provided, don't download a new README if one is stored locally; in other words, keep existing documentation if present so as not to accidentally delete development work. As with `--writeable-docs`, this should probably only be used in development mode.
Default: False
- --show-hidden
If provided, show hidden files in the file browser. This is useful for modifying metadata rider files used by the pipeline.
Default: False
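Combining the flags above, a sketch of the documentation-editing round trip (this assumes the `llama` CLI is on your PATH; README.ipynb is the example notebook named in the flag descriptions):

```shell
# Development mode: writeable docs, and keep any locally modified README
llama serve jupyter -w -k

# After editing README.ipynb in the browser, from the notebook directory:
llama dev upload -g README.ipynb
# ...then update the remote URL for llama.serve.jupyter.README
# with the URL printed by the upload command.
```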
logging settings
- -l, --logfile
File where logs should be written. By default, all logging produced by `llama run` goes both to an archival logfile shared by all instances of the process and to STDERR. The archival logfile can be overridden with this argument. If you specify `/dev/null` or a path that resolves to the same, logfile output will be suppressed automatically. Logs written to the logfile are always at maximum verbosity, i.e. DEBUG.
Default: "/root/.local/share/llama/logs/jupyter.log"
- -v, --verbosity
Possible choices: debug, info, warning, error, critical, none
Set the verbosity level at which to log to STDOUT; the `--logfile` will ALWAYS receive maximum-verbosity logs (unless it is completely suppressed by writing to /dev/null). Available choices correspond to logging severity levels from the `logging` library, with the addition of `none` if you want to completely suppress logging to standard out.
Default: "info"