llama.io.default package

A default implementation of llama.io.IO. Calculates and stores LLAMA results on disk as plain files, using git to track version changes.

class llama.io.default.MultiprocessingGraphExecutor

Bases: llama.io.classes.GraphExecutor

Submit iterables of AbstractFileHandler instances and generate their files in parallel in subprocesses of the main process.

classmethod gen_manager()

Get a file generation manager (following the concurrent.futures.Executor interface). This base implementation uses ProcessPoolExecutor to generate files in parallel using multiple subprocesses.
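A minimal sketch of this factory pattern, assuming only the standard-library concurrent.futures API (the `square` helper and `gen_manager` function here are illustrative, not llama's actual implementation):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    """A picklable, module-level function (required for subprocess execution)."""
    return x * x

# Hypothetical sketch of the gen_manager pattern: a factory returning an
# object that follows the concurrent.futures.Executor interface. The base
# implementation described above uses ProcessPoolExecutor, so each
# submitted job runs in its own subprocess.
def gen_manager():
    return ProcessPoolExecutor(max_workers=2)

if __name__ == "__main__":
    with gen_manager() as executor:
        future = executor.submit(square, 7)
        print(future.result())  # prints 49
```

Because the manager follows the Executor interface, a subclass could swap in a different pool (for example a thread pool) without changing the submission code.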

classmethod submit(graph) → Tuple[Tuple[llama.classes.AbstractFileHandler, concurrent.futures._base.Future]]

Submit each file in the FileGraph graph that is ready to be generated to the file-generation ProcessPoolExecutor and generate them in parallel. Returns an iterable of Tuple[AbstractFileHandler, Future] pairs matching each AbstractFileHandler instance being generated to a Future whose result method will either return the same successfully generated AbstractFileHandler instance or raise any exception that occurred during generation. An attempt will be made to generate every file in the graph, so downselect accordingly.
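The handler-to-Future pairing above can be sketched as follows. Everything here is illustrative: `FileHandler`, its `generate` method, and the standalone `submit` function are stand-ins for llama's actual classes, and a thread pool is used instead of a process pool so the example stays self-contained (process pools require picklable, module-level callables):

```python
from concurrent.futures import ThreadPoolExecutor, Future
from typing import Iterable, Tuple

class FileHandler:
    """Hypothetical stand-in for AbstractFileHandler; `name`, `fail`,
    and `generate` are illustrative, not llama's actual API."""

    def __init__(self, name: str, fail: bool = False):
        self.name = name
        self.fail = fail

    def generate(self) -> "FileHandler":
        if self.fail:
            raise RuntimeError(f"could not generate {self.name}")
        return self  # a successful generation returns the same instance

def submit(handlers: Iterable[FileHandler]) -> Tuple[Tuple[FileHandler, Future], ...]:
    """Submit every handler for generation; pair each with its Future."""
    executor = ThreadPoolExecutor()  # stands in for the process pool
    return tuple((h, executor.submit(h.generate)) for h in handlers)

if __name__ == "__main__":
    pairs = submit([FileHandler("a"), FileHandler("b", fail=True)])
    for handler, future in pairs:
        try:
            # result() returns the generated handler, or re-raises
            # whatever exception occurred during generation
            print(handler.name, "->", future.result().name)
        except RuntimeError as err:
            print(handler.name, "failed:", err)
```

Pairing each handler with its Future lets the caller decide per file whether to block on the result, check for exceptions, or ignore failures.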

Submodules