llama.cli module¶
Command-Line Interface (CLI) primitives to be used by scripts throughout the pipeline. Base your scripts off of these classes as much as possible to shorten development time and provide a unified “look-and-feel” to the entire library’s CLI.
class llama.cli.CanonicalPathAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)¶
Bases: argparse.Action

Canonicalize a collection of paths with os.path.realpath, remove duplicates, and sort the canonicalized paths so that the full list is an unambiguous representation of the specified values.
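A minimal sketch of attaching this action to a path argument; the argument name and sample paths are illustrative, and the exact parsed value shown in the final comment is an assumption based on the description above:

from llama.cli import CanonicalPathAction, CliParser

parser = CliParser()
# Collect one or more paths; the action canonicalizes them with
# os.path.realpath, removes duplicates, and sorts the result.
parser.add_argument('paths', nargs='+', action=CanonicalPathAction)

# Both of these arguments should collapse to a single canonical entry.
args = parser.parse_args(['data/../data/a.txt', './data/a.txt'])
print(args.paths)  # e.g. ['/home/user/data/a.txt'] (one absolute path)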
class llama.cli.CliParser(*args, parents=(), **kwargs)¶
Bases: argparse.ArgumentParser

Extend ArgumentParser with postprocessing functions that run on the parsed arguments when parse_args or parse_known_args is called to adjust their values or raise errors as necessary.

POSTPROCESSORS = ()¶
error(message)¶
Same as ArgumentParser.error but with a bright red error message.
parse_known_args(args: List[str] = None, namespace=None)¶
Parse known arguments and apply all functions in POSTPROCESSORS to the returned namespace. Also return unrecognized arguments.

Parameters
args (List[str], optional) – The arguments to parse. Will parse from sys.argv using ArgumentParser.parse_args if not provided.
namespace (optional) – Namespace to pass to ArgumentParser.parse_args.

Returns
parsed (argparse.Namespace) – Arguments with self.postprocess applied.
unrecognized (List[str]) – Unrecognized arguments.
postprocess(namespace: argparse.Namespace)¶
A method that acts on the argparse.Namespace returned by ArgumentParser.parse_args and returns the same namespace with any necessary modifications. A good place to raise exceptions or execute actions based on the full combination of parsed arguments. Works by calling self.POSTPROCESSORS in order (a tuple of functions with the same signature as the unbound self.postprocess method).

Parameters
namespace (argparse.Namespace) – The return value of ArgumentParser.parse_args.

Returns
namespace – The input with any necessary transformations applied.

Return type
argparse.Namespace
print_help(file=None)¶
Print the help string for command-line consumption. Same as ArgumentParser.print_help, but cleans up ReST directives in the default output of the parser for improved legibility.
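A minimal sketch of these postprocessing hooks; the subclass, the --count option, and the postprocessor are hypothetical, and only the POSTPROCESSORS/postprocess interface documented above is assumed:

from argparse import Namespace
from llama.cli import CliParser

def require_even_count(self: CliParser, namespace: Namespace) -> Namespace:
    # Hypothetical postprocessor; has the same signature as the unbound
    # CliParser.postprocess method and returns the namespace.
    if namespace.count % 2:
        self.error('--count must be even')  # bright red error, then exit
    return namespace

class EvenCountParser(CliParser):
    POSTPROCESSORS = (require_even_count,)

parser = EvenCountParser()
parser.add_argument('--count', type=int, default=0)
args = parser.parse_args(['--count', '4'])  # postprocessors run after parsing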
class llama.cli.ErrAlertAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)¶
Bases: argparse.Action

Indicate that maintainer alerts should be on, letting maintainers know about broken functionality and providing error tracebacks.
class llama.cli.Parsers¶
Bases: object

Use a Parsers instance to access ArgumentParser instances that can be passed as a list in any combination to a new ArgumentParser instance as the parents keyword argument. This prevents you from having to write the same help documentation repeatedly. You can override any keyword arguments.
clobber = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

dev_mode = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

erralert = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

helpenv = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

outdir = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

outfile = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶

version = CliParser(prog='sphinx-build', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)¶
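As a rough sketch (the combination of parents chosen here and the extra --input option are purely illustrative), a script-specific parser might reuse these fragments like so:

from llama.cli import CliParser, Parsers

parsers = Parsers()
# Combine any subset of the pre-built parsers above as parents so their
# arguments and help text are inherited without being rewritten here.
parser = CliParser(description='Example script (placeholder).',
                   parents=[parsers.outfile, parsers.version])
parser.add_argument('--input', help='a script-specific argument (illustrative)')
args = parser.parse_args()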
class llama.cli.PrintEnvAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)¶
Bases: argparse.Action

Print environment variables loaded by llama and quit.
class llama.cli.RecursiveCli(prog: str = None, description: str = None, subcommands: Dict[str, module] = None, localparser: argparse.ArgumentParser = None, preprocessor: function = None)¶
Bases: object

A recursive command-line interface that allows the user to access the __main__.py:main() functions of submodules using a CMD SUBCMD notation, with clever recursive helpstring documentation to enable straightforward subcommand discovery by the user and to avoid cluttering a namespace with hyphen-separated subcommands.

Examples
Using RecursiveCli to implement main in llama.__main__ lets you access llama.files.__main__:main() in a convenient way from the command line. The commands below are equivalent:

python -m llama.files
python -m llama files

This becomes useful when you realize that you now only need a single script/alias/entry point for your script's submodules. So if you add an entry point or launcher script for llama to your distribution, you can replace python -m llama with llama, and you get the llama files subcommand without any extra work. With the addition of a single entry point, the above command becomes:

llama files

Better still, this can be applied recursively to every subpackage to access its mixture of package-level CLI commands and submodule CLIs (with no changes to the single entry point mentioned above). So, for example, the following two commands can be equivalent:

python -m llama.files.i3
llama files i3

Most importantly, this automatic feature discovery enables users to hierarchically find commands and features without necessitating changes to higher-level packages' docstrings or CLIs. The same code that enables the subcommand syntax shown above allows those subcommands to be listed with a help string, so that the following command will tell you that llama files is a valid command and summarize what it does:

llama --help

You can then, of course, run llama files --help to learn details about that module (and see which submodules it offers). This is similar to git and other programs, but it can be recursed ad infinitum for rich libraries.
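As an illustrative sketch (not the package's actual __main__.py), the pattern above might be implemented with the from_module constructor documented below; passing description this way is an assumption:

"""Top-level CLI for the llama package (illustrative sketch only)."""
from llama.cli import RecursiveCli

def main():
    # Discover submodule CLIs so that "python -m llama files" (or, with an
    # entry point installed, "llama files") dispatches to
    # llama.files.__main__:main().
    RecursiveCli.from_module('llama', description=__doc__).main()

if __name__ == '__main__':
    main()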
classmethod from_module(modulename: str, **kwargs)¶
Autogenerate the submodules dictionary by finding available CLIs in module.

Parameters
modulename (str) – Fully-qualified module name in which to search for subcommands to pass to __init__. The keys of subcommands will simply be the module names of the submodules of modulename. prog will be the modulename with periods replaced by spaces, e.g. 'llama.files' will turn into prog='llama files', since this reflects the way the command is used (assuming the top-level module, llama in the given example, implements a RecursiveCli and is callable from the command line using the top-level module name, again, llama in this example).
**kwargs – Remaining arguments (besides subcommands and prog) will be passed to __init__ along with the subcommands described above.

Returns
cli – A new instance with subcommands automatically discovered from module.

Return type
RecursiveCli

Raises
TypeError – If subcommands or prog is in **kwargs or if any other arguments not recognized by __init__ are passed.
get_epilog()¶
Get an epilog to pass to argparse.ArgumentParser that contains information on available subcommands.
get_parser()¶
Get a command-line argument parser for this CLI.

Returns
parser – A CliParser instance (see CliParser, a subclass of argparse.ArgumentParser implementing post-parsing hooks) containing all specified subcommands. If self.localparser was passed at initialization time, then parser will be initialized from it.

Return type
CliParser
main()¶
The main function that you should run if this module is called as a script. Parses the command-line options using self.get_parser(), prints the help documentation if no SUBCOMMAND is specified at the command line, runs self.preprocessor(self.get_parser().parse_args()), and then, if it completes without exiting, runs a SUBCOMMAND if specified.
print_help_if_no_cli_args(_args)¶
If no command-line arguments are passed, print the help documentation and exit. This is the default preprocessor (since, in the absence of another preprocessor, a complete absence of CLI arguments results in a no-op, likely indicating a lack of user understanding).

Uses a CliParser to parse arguments (since it knows how to deal with variations in executable names, e.g. python -m llama vs. llama/__main__.py vs. python3 llama/__main__.py) and, if no arguments whatsoever are discernible, runs self.get_parser().print_help() and quits.
llama.cli.close_stdout_stderr(outfile='/dev/null')¶
Redirect stdout and stderr to outfile at the file descriptor level so that, when the process is forked to the background, no new data is written to stdout/stderr. Since everything should be getting logged anyway, this should not be necessary, but unfortunately we need to use production code that prints to these file descriptors instead of using logging.

Parameters
outfile (str, optional) – The path to the logfile that should collect all of the output printed to STDOUT and STDERR. Defaults to os.devnull (i.e. delete all outputs).
llama.cli.get_logging_cli(default_logfile, default_loglevel='error')¶
Create a CliParser that automatically turns on logging to the specified output file (--logfile, always at maximum verbosity) as well as the terminal at the specified verbosity level (--verbosity), with default_logfile and default_loglevel as defaults. verbosity should be none if no terminal output is to be printed, or else one of the logging log levels in lower-case form (see LOGLEVELS). Again, output will be logged to the logfile at the maximum verbosity (DEBUG) to make sure nothing is lost; suppress this behavior by setting /dev/null as the logfile.

Parameters
default_logfile (str) – Path to which the script should log by default.
default_loglevel (int or NoneType, optional) – How verbose to be by default. none means to print nothing; other values are typical log levels. Must be a value specified in LOGLEVELS.

Returns
parser – A parser to use as one of the parents to a new CliParser instance, which will inherit the logging CLI options and automatic logging setup behavior.

Return type
CliParser

Raises
ValueError – If default_loglevel is not a value in LOGLEVELS.
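A sketch of adopting this parser as a parent in a script's own CLI; the logfile path, log level, and description are placeholders:

from llama.cli import CliParser, get_logging_cli

logging_cli = get_logging_cli('/tmp/example-script.log', default_loglevel='info')
parser = CliParser(description='Example script (placeholder).',
                   parents=[logging_cli])
# --logfile and --verbosity are now available; logging setup is inherited
# from the parent and applied once the arguments are parsed.
args = parser.parse_args()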
llama.cli.get_postprocess_required_arg(*arg_names)¶
Return a postprocessor that prints help if the required arguments arg_names are not specified at the command line, as determined by checking whether they're set to a value that evaluates as False. Use this if you want to defer checking for a required argument until postprocessing.
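A sketch of deferring a required-argument check to postprocessing; the parser subclass and the --outfile option are hypothetical, and attaching the returned postprocessor through the POSTPROCESSORS class attribute is an assumption based on the CliParser documentation above:

from llama.cli import CliParser, get_postprocess_required_arg

class OutfileRequiredParser(CliParser):
    # After parsing, print help unless namespace.outfile is truthy.
    POSTPROCESSORS = (get_postprocess_required_arg('outfile'),)

parser = OutfileRequiredParser()
parser.add_argument('-o', '--outfile', default=None)
args = parser.parse_args()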
llama.cli.log_exceptions_and_recover(callbacks=(<function traceback_alert_maintainers>,))¶
Decorator to run func with no arguments. Log the stack trace and run callbacks to clean up (default: send the traceback to maintainers) when any Exception is raised, returning the value of the wrapped function or else the exception that was caught (note that this will break the functionality of any function that is supposed to return an Exception, and that you should only apply this sort of behavior in a command-line script that needs to recover from all exceptions). Error tracebacks are syntax-highlighted (for 256-color terminals) for readability.

Optionally provide an iterable of callbacks to run to override the default, traceback_alert_maintainers. Callbacks passed in this list must have the same signature as that function. Use this to perform other cleanup tasks or to avoid sending an alert on error.

The signature below is for the returned decorator.

Parameters
func (function) – A function that is supposed to recover from all possible exceptions. Exceptions with tracebacks will be logged and sent to maintainers using alert_maintainers. You should only wrap functions that cannot be allowed to crash, e.g. the main function of a long-running CLI script.

Returns
func – The wrapped function. Has the same inputs and return values as the original function unless the original function raises an Exception while executing. In this case, the return value will be the exception instance that was raised. Note that you probably don't care about this value in normal use patterns, and also note that you should therefore not wrap a function that would ever nominally return an exception instance, since there will no longer be any way to distinguish an expected return value from a caught exception.

Return type
function
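A sketch of the usage pattern this suggests for a script entry point, assuming the decorator is applied by calling it first (its callbacks parameter has a default) so that the returned decorator wraps main:

import logging
from llama.cli import log_exceptions_and_recover

@log_exceptions_and_recover()
def main():
    # Entry point of a long-running CLI script that must not crash
    # (the body here is a placeholder).
    logging.info("doing the script's real work...")

if __name__ == '__main__':
    main()  # any Exception is logged and, by default, reported to maintainers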
llama.cli.loglevel_from_cli_count(loglevel)¶
Get the label, e.g. DEBUG, corresponding to the number of times the user typed -v at the command line.
llama.cli.parse_atom(*args, postprocessors=(), **kwargs)¶
Create a new CliParser class with no help documentation added and add a single argument to it.

Parameters
*args – Positional arguments to pass to the new parser's add_argument method.
postprocessors (list-like) – A list of functions to set POSTPROCESSORS to in the returned CliParser.
**kwargs – Keyword arguments to pass to the new parser's add_argument method.

Returns
parser – A new parser with a single argument. Pass this to other new ArgumentParser instances as one of the parents.

Return type
ArgumentParser
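For illustration (the --threads option and its postprocessor are invented; only the signature documented above is assumed), a reusable single-option parser fragment might be built like this:

from llama.cli import CliParser, parse_atom

def postprocess_threads(self, namespace):
    # Hypothetical postprocessor (same signature as CliParser.postprocess).
    if namespace.threads < 1:
        self.error('--threads must be at least 1')
    return namespace

# One reusable parser fragment defining a single --threads option.
threads_parser = parse_atom('--threads', type=int, default=1,
                            postprocessors=(postprocess_threads,),
                            help='number of worker threads (illustrative)')

parser = CliParser(parents=[threads_parser])
args = parser.parse_args()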
llama.cli.pidfile(lockdir)¶
Get the path to the file containing the process ID for the llama run instance running on this run directory with this event ID filter.
llama.cli.postprocess_dev_mode(self: llama.cli.CliParser, namespace: argparse.Namespace)¶
If we're not running on a clean git repository, quit (unless namespace.dev_mode is True, indicating that the developer knows the repository is in an unclean state). Intended to help reproducibility and correctness.
llama.cli.postprocess_logging(self: llama.cli.CliParser, namespace: argparse.Namespace)¶
Run setup_logger(namespace.logfile, loglevel) to set up a logger for this script based on user input.
llama.cli.postprocess_version(self: llama.cli.CliParser, namespace: argparse.Namespace)¶
If namespace.version is True, print the LLAMA version and exit.
llama.cli.print_running_procs_action(running_lock_dir: str, command_nicknames: Dict[tuple, str])¶
Get a PrintRunningProcsAction class that can be used in an ArgumentParser argument as the action to show which of the processes whose lockfiles are stored in running_lock_dir are currently running.
llama.cli.printprocs(pids, command_nicknames=frozenset({}))¶
Print a nicely-formatted list of processes and their subprocesses using proc_printer.

Parameters
pids (list-like) – An iterable of process IDs.
command_nicknames (Dict[tuple, str], optional) – Specify nicknames for commands as a dictionary mapping from the arguments associated with a command (e.g. ['llama', 'run']) to the replacement nicknames for each (each of which will be wrapped in square brackets like [llama run]) when the processes are printed. The python version printed before these arguments is omitted to save space. This command shortening is intended to highlight and shorten interesting commands.
llama.cli.proc_formatter(pid)¶
Get a dictionary that can be used to format PROC_FMT via proc_printer with information about the process with ID pid.
llama.cli.proc_printer(procs, command_nicknames: Dict[tuple, str] = frozenset({}), indent: list = [])¶
Print a bunch of processes in a nice tree-like way to STDOUT.
llama.cli.register_exit_handler()¶
Make sure we perform exit actions by calling exit explicitly on interrupts and SIGTERMs.
llama.cli.running_pids(running_lock_dir: str)¶
Find all running instances of whatever program is using running_lock_dir to store its lock directories (each of which should contain a pidfile with the process ID stored in it).
llama.cli.safe_launch_daemon(lockdir: str, post_fork: function = None)¶
Fork this process twice, exiting the parent and grandparent, to put this script into the background, and create a lock directory (atomic on most filesystems) specific to this process; then run a post_fork function to do any extra initialization or conflict checking (e.g. to check whether this process conflicts with other processes in a way not accounted for by the initial lock acquisition check). Continues execution in the new grandchild process.
llama.cli.spawn_daemon()¶
Do the UNIX double-fork magic; see Stevens' "Advanced Programming in the UNIX Environment" for details (ISBN 0201563177). Taken from https://stackoverflow.com/questions/6011235.
llama.cli.traceback_alert_maintainers(func: function, err: Exception, tb: str, self, *args, **kwargs)¶
An error_callback function for log_exceptions_and_recover. Runs alert_maintainers with the current traceback and logs the stack trace. Does not send out alerts if --err-alert is not set at the command line.

Parameters
func (FunctionType) – The function that caused the error.
err (Exception) – The exception that was caught.
tb (str) – The traceback to send as a message.
self (object or None) – If func is a method, this will be the __self__ value bound to it, otherwise None.
*args – The positional arguments passed to func that caused err.
**kwargs – The keyword arguments passed to func that caused err.