Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA)’s Documentation


LLAMA is a reliable and flexible multi-messenger astrophysics framework and search pipeline. It identifies astrophysical signals by combining observations from multiple types of astrophysical messengers and can either be run in online mode as a self-contained low-latency pipeline or in offline mode for post-hoc analyses. It was first used during Advanced LIGO’s second observing run (O2) for the joint online LIGO/Virgo/IceCube Gravitational Wave/High-Energy Neutrino (GWHEN) search.
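As a toy illustration of the multi-messenger idea (not LLAMA's actual significance calculation), the sketch below selects neutrino candidates that fall within a symmetric time-coincidence window around a gravitational-wave trigger, the kind of first step a joint GW/neutrino search performs before computing a joint significance. The ±500 s window and the GPS times are illustrative values only.

```python
def coincident_neutrinos(gw_time, neutrino_times, window=500.0):
    """Return neutrino candidate times within +/- `window` seconds of `gw_time`.

    Times are in GPS seconds; the window is symmetric about the GW trigger.
    """
    return [t for t in neutrino_times if abs(t - gw_time) <= window]


# Hypothetical example values for illustration.
gw_trigger = 1187008882.4  # GW trigger time (GPS seconds)
neutrinos = [1187008500.0, 1187008900.0, 1187012000.0]

matches = coincident_neutrinos(gw_trigger, neutrinos)
# The first two candidates lie within 500 s of the trigger; the third does not.
```

A real pipeline would go further, weighting each coincidence by sky-position overlap and event significance, but temporal coincidence is the simplest gate.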

LLAMA’s search algorithm has been upgraded to run during LIGO’s third observing run (O3).

[Figure: flowchart describing the significance calculation used in the O3 version of the pipeline.]

This full website can also be downloaded in printer-friendly PDF format.

The LLAMA team thanks the many researchers working in LIGO/Virgo, IceCube, and other astrophysics projects whose work enables this pipeline to operate successfully. In particular, they thank Scott Barthelmy and Leo Singer of NASA for providing helpful code and advice for working with GCN.

The authors are grateful to the IceCube Collaboration for providing neutrino datasets and support for testing this algorithm. The Columbia Experimental Gravity group is grateful for the generous support of Columbia University in the City of New York and the National Science Foundation under grants PHY-1404462 and PHY-1708028. The authors are thankful for the generous support of the University of Florida and Columbia University in the City of New York.

Developer's Guide

API Documentation

Indices and tables