llama.dev.dv package

Execute actions across a bunch of DigitalOcean servers selected by name. Useful for background and sensitivity runs. Executes a shell command on each server using SSH. You can either specify shell scripts to run (in non-interactive mode, as if you were calling “ssh foo@bar ‘my shell command’”) or provide your own shell command as an argument to the script.
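
For example, each matched server roughly receives a non-interactive SSH invocation like the sketch below (the user, IPs, and command are placeholders; the real tool adds its own options and runs servers in parallel):

  import subprocess

  # Roughly what dv does for each matched server, shown serially here.
  for ip in ["203.0.113.7", "203.0.113.8"]:
      subprocess.run(["ssh", "foo@{}".format(ip), "my shell command"], check=False)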

class llama.dev.dv.Command(commandname, local, args)

Bases: object

A command that can be used on many Droplets. Can have arguments passed to it.

cleanup(env)

Run a cleanup command AFTER the parallel processes run for each server. Specified as a standard bash command in ‘CLEANUP=…’ anywhere in the file (including in a comment, which is useful for non-bash scripts with different variable setting syntax). This command is ALWAYS run once locally, regardless of how the vectorized commands run. STDOUT and STDERR are not piped.
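
For example, a non-bash script can carry its cleanup command in a comment (the script body and cleanup command below are purely illustrative):

  #!/usr/bin/env python3
  # CLEANUP=tar czf results.tar.gz results/
  print("collect results on this droplet")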

cmdline(ip)

The command line arguments as a list (the format expected by the first argument of subprocess.Popen).
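
For example (assuming cmd is an already-constructed Command instance; the exact argument vector depends on the command and its arguments):

  import subprocess

  # cmdline() returns an argument list, e.g. something like
  # ["ssh", "foo@203.0.113.7", "my shell command"] (illustrative only),
  # which can be handed straight to subprocess.Popen.
  argv = cmd.cmdline("203.0.113.7")
  proc = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  out, err = proc.communicate()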

property default_str

Get the raw defaults spec string for this command from the original script (parsed into argparse arguments using Command.defaults).

property defaults

Get the default argparse args for this command. Specified as ‘DROPVEC=…’ anywhere in the file (including in a comment, which is useful for non-bash scripts with different variable setting syntax).

property help

Return the help string for this command. Specified as ‘HELP=…’ anywhere in the file (including in a comment, which is useful for non-bash scripts with different variable setting syntax).
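
For example, a script can declare both specs in comments (the help text and DROPVEC flags shown here are illustrative, not a list of real options):

  #!/usr/bin/env python3
  # HELP=Report uptime and load on each droplet
  # DROPVEC=--timeout 60 --connections 8
  print("check load")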

run(droplet_name, ip, timeout, i, num_drops)

Run a command synchronously using subprocess.Popen.

Parameters
  • droplet_name (str) – The name of the droplet to run on.

  • ip (str) – The IP address of the droplet to run on.

  • timeout (float) – The maximum execution time in seconds, after which the process will be killed.

  • i (int) – The index of this droplet in the full list of droplets.

  • num_drops (int) – The total number of droplets being run.

Returns

  • stdout (str) – STDOUT from the process.

  • stderr (str) – STDERR from the process.

  • retval (str) – The return value of the process.
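
A minimal calling sketch (assuming cmd is an already-constructed Command instance; the droplet name, IP, and the indexing convention for i are placeholder assumptions):

  stdout, stderr, retval = cmd.run(
      droplet_name="worker-01",
      ip="203.0.113.7",
      timeout=120,   # kill the process after two minutes
      i=0,           # this droplet's position in the full list (indexing assumed)
      num_drops=4,   # total number of droplets being run
  )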

setup(env)

Run a setup command BEFORE the parallel processes run for each server. Specified as a standard bash command in ‘SETUP=…’ anywhere in the file (including in a comment, which is useful for non-bash scripts with different variable setting syntax). This command is ALWAYS run once locally, regardless of how the vectorized commands run. STDOUT and STDERR are not piped.

llama.dev.dv.apply_default_cli_args(args)

Modify the command line arguments with the values provided in args. Returns the provided command line arguments with defaults from get_parser applied wherever neither the CLI nor args overrides them.

llama.dev.dv.argparse_to_dict(args)

Get a dictionary representing an argparse.Namespace.
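
For example (a sketch using a hand-built Namespace; the attribute names are illustrative):

  from argparse import Namespace
  from llama.dev.dv import argparse_to_dict

  ns = Namespace(timeout=30.0, connections=8)
  d = argparse_to_dict(ns)  # e.g. {"timeout": 30.0, "connections": 8}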

llama.dev.dv.argset_string(args)

Build a string setting a bunch of arguments as if they had been passed on the command line of a bash script.

llama.dev.dv.get_droplets(pattern)

Get droplets whose names match the given fnmatch pattern.
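
For example (the name pattern is illustrative):

  from llama.dev.dv import get_droplets

  # fnmatch-style glob: every droplet whose name starts with "worker-"
  workers = get_droplets("worker-*")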

llama.dev.dv.get_parser()

Get an ArgumentParser with this tool’s command line interface.

llama.dev.dv.print_defaults()

Print the default command line argument values that dropvec uses for a given command.

llama.dev.dv.print_descriptions()

Print descriptions of the available commands.

llama.dev.dv.run_workers(cmd, droplets, fmt, prelaunch, connections, timeout)

Run the given bash command string on the specified droplets, printing the given fmt string after each one completes, using up to connections concurrent connections and spending up to timeout seconds on each connection.
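
A hedged sketch tying the pieces together (the command, fmt string, and prelaunch value are illustrative guesses, not documented values):

  from llama.dev.dv import get_droplets, run_workers

  droplets = get_droplets("worker-*")
  run_workers(
      cmd="uptime",      # bash command string to run on each droplet
      droplets=droplets,
      fmt="{}",          # printed after each droplet completes (placeholders unspecified here)
      prelaunch=None,    # illustrative value only
      connections=8,     # at most 8 concurrent SSH connections
      timeout=60,        # up to 60 seconds per connection
  )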

llama.dev.dv.scripts()

Print a list of available scripts.