Installation

These installation instructions are meant to get users up and running with LLAMA with minimal effort. See the developer instructions for information on developer dependencies and tools, as well as further background documentation.

System Requirements

Make sure you have at least 4GB of memory (physical or virtual; :ref:`swap space` is fine) and 15GB of free space on your file system.
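
If you’re unsure whether your machine meets these requirements, you can check from the shell (the free command is Linux-specific):

free -h    # total and available memory, including swap
df -h ~    # free space on the file system holding your home directory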

Installing Conda

LLAMA depends on LIGO tools that are only distributed via conda, so you’ll need to install a conda python distribution to get up and running (see the :ref:`developer notes on Conda <migrating-to-conda>`). Conda installs are done on a per-user basis, so you won’t need to use sudo for any of the below. Start by installing the latest version of Miniconda:

curl -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
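
The installer walks you through a few prompts interactively. If you’d rather script the install, the Miniconda installer also supports batch mode (a sketch, assuming the default install prefix of ~/miniconda3; you then have to initialize your shell yourself):

# run the installer non-interactively, accepting the license and default prefix
bash Miniconda3-latest-Linux-x86_64.sh -b
# set up your shell profile so that the conda command is available at next login
~/miniconda3/bin/conda init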

Log out and log back in, then activate conda-forge and install LIGO tools:

conda activate
conda config --add channels conda-forge
# old method: use LIGO's environment
# wget -q https://git.ligo.org/lscsoft/conda/raw/master/environment-py36.yml
# conda env create -f environment-py36.yml
curl -O https://raw.githubusercontent.com/stefco/llama-env/master/llama-py36.yml
conda env create -f llama-py36.yml
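
Before activating the new environment, you can confirm that it was created:

conda env list    # llama-py36 should appear in the list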

Activate the LIGO conda environment (NOTE: You will need to do this every time you want to use this python setup! Consider putting this command in your .bashrc file, as shown below.):

conda activate llama-py36
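
For example, to run the activation automatically in every new shell (assuming bash is your login shell):

echo 'conda activate llama-py36' >> ~/.bashrc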

Clone the LLAMA repository into ~/dev/multimessenger-pipeline:

mkdir -p ~/dev
cd ~/dev
git clone git@bitbucket.org:stefancountryman/multimessenger-pipeline.git
cd multimessenger-pipeline
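
The clone URL above uses SSH, which requires an SSH key registered with Bitbucket. If you don’t have one set up, cloning over HTTPS should work as well:

git clone https://bitbucket.org/stefancountryman/multimessenger-pipeline.git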

Fetch all data files (make sure you have git-lfs installed first; see below if it’s missing):

git lfs fetch
git lfs checkout
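
If these commands fail because git-lfs is missing, one way to get it is from conda-forge (an assumption; your system package manager works too), after which you enable it once per user account:

# install the git-lfs binary into the active conda environment
conda install -c conda-forge git-lfs
# register the LFS hooks for your user account
git lfs install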

Install dependencies:

curl -O https://raw.githubusercontent.com/stefco/llama-env/master/requirements.txt
pip install -r requirements.txt

Install the pipeline in developer mode:

python setup.py develop
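
Equivalently, if you prefer pip over invoking setup.py directly, an editable install accomplishes the same thing for most setups:

pip install -e .    # editable install of the current directory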

Confirm that the installation succeeded by checking that the command-line interface (CLI) can print its help message:

llama --help

Optionally, run LLAMA’s test suite to make sure things are working okay (though note that many tests will fail if you haven’t entered your authentication credentials for external services):

make test

That’s it! All important llama tools can be accessed at the command line as subcommands of the llama command; run llama --help to see what’s available. The llama CLI follows the same structure as the llama python modules, which you can import into your python scripts.
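
As a quick check that the python modules are importable from the active environment (the top-level module name llama matches the CLI; the exact submodule layout isn’t shown here):

python -c 'import llama; print(llama.__file__)'    # prints the installed package location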