A default implementation of llama.io.IO. Store and calculate LLAMA results on disk as plain files, using git to track version changes.
Submit iterables of AbstractFileHandler instances and generate their files in parallel in subprocesses of the main process.
Get a file generation manager (following the concurrent.futures.Executor interface). This base implementation uses ProcessPoolExecutor to generate files in parallel using multiple subprocesses.
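The handler-to-future pairing behind this manager can be sketched as follows. FakeHandler and submit_all are hypothetical stand-ins (not part of llama) for an AbstractFileHandler and the submission logic; the real implementation uses ProcessPoolExecutor, but a ThreadPoolExecutor is substituted here only so the sketch runs anywhere without pickling constraints:

```python
from concurrent.futures import ThreadPoolExecutor, Future
from typing import Iterable, Tuple

class FakeHandler:
    """Hypothetical stand-in for llama.classes.AbstractFileHandler:
    anything with a ``generate`` method that produces its file."""

    def __init__(self, name: str):
        self.name = name
        self.generated = False

    def generate(self) -> "FakeHandler":
        # In LLAMA, this would write the handler's file to disk.
        self.generated = True
        return self

def submit_all(
    handlers: Iterable[FakeHandler],
    executor,
) -> Tuple[Tuple[FakeHandler, Future], ...]:
    """Submit each handler's ``generate`` call to the executor,
    pairing every handler with the Future tracking its generation."""
    return tuple((h, executor.submit(h.generate)) for h in handlers)

# The real manager uses ProcessPoolExecutor; a thread pool is used
# here only to avoid subprocess pickling requirements in a sketch.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = submit_all([FakeHandler("a"), FakeHandler("b")], pool)
    done = [future.result().name for _, future in results]

print(sorted(done))  # → ['a', 'b']
```

Because each Future's result is the handler itself, callers can recover exactly which file finished generating without tracking indices.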
submit(graph) → Tuple[Tuple[llama.classes.AbstractFileHandler, concurrent.futures._base.Future]]
Submit each file in the graph that is ready to be generated to the file generation ProcessPoolExecutor and generate them in parallel. Returns an iterable of Tuple[AbstractFileHandler, Future] pairs matching each AbstractFileHandler instance being generated to a Future that, when its result method is called, will either return the same successfully-generated AbstractFileHandler instance or raise any exceptions occurring during generation. An attempt will be made to generate all files in the graph, so downselect accordingly.
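Since result either returns the generated handler or re-raises the generation exception, consumers typically loop over the returned pairs and branch on the outcome. A minimal sketch, assuming the documented (handler, future) pairing; the ok/boom callables and file names are hypothetical stand-ins for handler generation, one of which fails:

```python
from concurrent.futures import ThreadPoolExecutor

def ok():
    # Stands in for a handler generation that succeeds.
    return "generated"

def boom():
    # Stands in for a handler generation that raises.
    raise ValueError("generation failed")

with ThreadPoolExecutor(max_workers=2) as pool:
    # Pairs mimicking submit(graph)'s (AbstractFileHandler, Future) output.
    pairs = (("good.file", pool.submit(ok)), ("bad.file", pool.submit(boom)))
    succeeded, failed = [], []
    for name, future in pairs:
        try:
            future.result()  # re-raises any exception from generation
            succeeded.append(name)
        except ValueError as err:
            failed.append((name, str(err)))

print(succeeded)  # → ['good.file']
print(failed)     # → [('bad.file', 'generation failed')]
```

Handling each future individually this way lets one failed file's exception be recorded without aborting the remaining generations, which matches the "attempt all files" behavior described above.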