llama.test.test_bin module

Test various LLAMA command line executables/scripts.

class llama.test.test_bin.AbstractTestS190412mSkymapInfoCli

Bases: llama.test.classes.S190412mMixin, llama.test.test_bin.AbstractTestSkymapInfoCli

Test our ability to generate the correct skymap_info.json file through the skymap_info CLI for S190412m. Subclass this with specific flag combinations to test multiple cases.

SKYMAP_FILENAME = 'bayestar.fits,0'
class llama.test.test_bin.AbstractTestSkymapInfoCli

Bases: llama.test.classes.AbstractFileGenerationComparator

Test the ability of the skymap_info command line interface to correctly register and save a new event. Specify different cli_args, FLAGS, and FLAGS_PRESET to test different cases (though note that only one of FLAGS or FLAGS_PRESET should be overridden with non-None values).

FLAGS = None
FLAGS_PRESET = None
NON_DETERMINISTIC_JSON_FIELDS = ['notice_time_iso']
SKYMAP_FILENAME = None
STARTING_MANIFEST = ()
cli_args

Specify the command-line arguments (besides --flags-preset, which is handled by FLAGS_PRESET, and --flags, which is handled by FLAGS; both of these will be prepended to cli_args) to pass to run_on_args when testing the skymap_info CLI. By default, the output directory, skymap_filename (if it is specified as something other than None in SKYMAP_FILENAME), and graceid are the only things specified.

cli_flags

Get the flag-specification command-line arguments for this test by constructing them from either self.FLAGS or self.FLAGS_PRESET. Exactly one of these must be specified or else an assertion will fail.
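The exactly-one-of constraint and flag-argument construction described above can be sketched as follows. This is an illustrative stand-in, not the actual LLAMA implementation; the function name and the `key=value` flag format are assumptions.

```python
def cli_flags(flags=None, flags_preset=None):
    """Build flag-specification CLI arguments from exactly one source.

    Hypothetical sketch of the pattern described above: exactly one of
    ``flags`` (a dict of flag names to values) or ``flags_preset`` (a
    preset name) must be provided, or an assertion fails.
    """
    # exactly one of the two must be non-None
    assert (flags is None) != (flags_preset is None), \
        "Specify exactly one of FLAGS or FLAGS_PRESET."
    if flags_preset is not None:
        return ['--flags-preset', flags_preset]
    return ['--flags'] + [f'{key}={value}' for key, value in flags.items()]
```

Subclasses overriding `FLAGS_PRESET` (as `TestS190412mSkymapInfoCliPublic` does below) would then produce arguments like `['--flags-preset', 'TRIGGERED_PUBLIC']` to prepend to `cli_args`.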

compare()

We are comparing two JSON files, but some of the contents are non-deterministic; ignore those fields and compare the outputs as JSON dictionaries. Also checks that the saved flag values are as expected.
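The field-ignoring comparison described above can be sketched like this. It is a minimal illustration of the pattern, not the LLAMA implementation; the helper name is an assumption, and only the `notice_time_iso` field comes from `NON_DETERMINISTIC_JSON_FIELDS` above.

```python
import json

# Fields whose values vary between runs and must be excluded from comparison.
NON_DETERMINISTIC_JSON_FIELDS = ['notice_time_iso']

def json_equal_ignoring(text_a, text_b, ignored=NON_DETERMINISTIC_JSON_FIELDS):
    """Parse two JSON strings and compare them with ignored keys removed."""
    dict_a, dict_b = json.loads(text_a), json.loads(text_b)
    for key in ignored:
        dict_a.pop(key, None)  # drop non-deterministic fields if present
        dict_b.pop(key, None)
    return dict_a == dict_b
```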

execute()

Run the skymap_info CLI to generate the expected output file.

pipeline

A Pipeline instance defining the output files generated by this test. These are the output files that will be compared by AbstractFileGenerationComparator.compare to make sure that all outputs are as expected (and, as a sanity check, that no inputs have been mutated).

spec_flags

Get a dictionary of the flags specified by either FLAGS or FLAGS_PRESET with which to check the output flag values. Exactly one of these must be specified or else an assertion will fail.

class llama.test.test_bin.TestS190412mSkymapInfoCliPublic

Bases: llama.test.test_bin.AbstractTestS190412mSkymapInfoCli

AbstractTestS190412mSkymapInfoCli with TRIGGERED_PUBLIC flag preset.

FLAGS_PRESET = 'TRIGGERED_PUBLIC'
class llama.test.test_bin.TestS190412mSkymapInfoCliTest

Bases: llama.test.test_bin.AbstractTestS190412mSkymapInfoCli

AbstractTestS190412mSkymapInfoCli with TRIGGERED_TEST flag preset.

FLAGS_PRESET = 'TRIGGERED_TEST'
llama.test.test_bin.test_llama_pipeline_parser()

Test the llama.pipeline.Parsers.pipeline parsers with various command line inputs. Make sure they parse the expected Pipeline instances.

llama.test.test_bin.test_llama_run_parser()

Test the llama.run.Parsers.eventfiltering CLI parser, which interprets which directories a run should operate on, by making sure it parses inputs as expected.

llama.test.test_bin.test_mjd_interval()

Make sure llama.files.i3.__main__.mjd_interval correctly parses input times by checking that a few equivalent input argument combinations produce the same output.
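The equivalence-checking pattern used by this test can be sketched as below. The real signature of `mjd_interval` is not shown in this documentation, so the function here is a hypothetical stand-in that accepts either an explicit stop time or a duration; only the testing pattern (equivalent inputs produce identical output) is taken from the description above.

```python
def mjd_interval(start, stop=None, duration=None):
    """Return a (start, stop) interval in MJD.

    Stand-in for illustration: accepts either an explicit ``stop`` time
    or a ``duration`` in days, but not both.
    """
    assert (stop is None) != (duration is None), \
        "Specify exactly one of stop or duration."
    return (start, stop if stop is not None else start + duration)
```

A test in this style then asserts that the equivalent argument combinations agree, e.g. `mjd_interval(58585.0, stop=58586.0) == mjd_interval(58585.0, duration=1.0)`.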