llama.io.default.generate module¶
Generate files locally.
- class llama.io.default.generate.MultiprocessingGraphExecutor¶
  Bases: llama.io.classes.GraphExecutor
  Submit iterables of AbstractFileHandler instances and generate their files in parallel in subprocesses of the main process.
- classmethod gen_manager()¶
  Get a file generation manager (following the concurrent.futures.Executor interface). This base implementation uses ProcessPoolExecutor to generate files in parallel using multiple subprocesses.
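  As a rough illustration only (not the library's actual source), a gen_manager along these lines might simply return a ProcessPoolExecutor; the stand-in class name and the max_workers value below are assumptions made for the example:

      from concurrent.futures import Executor, ProcessPoolExecutor


      class ExampleGraphExecutor:
          """Illustrative stand-in; not the real MultiprocessingGraphExecutor."""

          @classmethod
          def gen_manager(cls) -> Executor:
              # Return a pool whose workers are subprocesses of the main
              # process, so file generation runs in parallel.
              # max_workers=4 is an illustrative assumption.
              return ProcessPoolExecutor(max_workers=4)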
- classmethod submit(graph) → Tuple[Tuple[llama.classes.AbstractFileHandler, concurrent.futures._base.Future]]¶
  Submit each file in the FileGraph graph that is ready to be generated to the file generation ProcessPoolExecutor and generate them in parallel. Returns an iterable of Tuple[AbstractFileHandler, Future] instances matching the AbstractFileHandler instance that is being generated to a Future that will either return the same successfully-generated AbstractFileHandler instance or raise any exceptions occurring during generation when its result method is called. An attempt will be made to generate all files in the graph, so downselect accordingly.
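  A hedged usage sketch of the submit interface described above; the graph argument is assumed to be an already-constructed FileGraph, and how it is built is outside the scope of this example:

      from llama.io.default.generate import MultiprocessingGraphExecutor


      def generate_all(graph):
          """Generate every ready file in ``graph`` and report each outcome.

          ``graph`` is assumed to be a llama ``FileGraph``; its construction
          is not covered here.
          """
          for handler, future in MultiprocessingGraphExecutor.submit(graph):
              try:
                  # result() returns the successfully generated handler or
                  # re-raises any exception raised during generation.
                  generated = future.result()
                  print("generated:", generated)
              except Exception as err:
                  print("failed to generate", handler, "->", err)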
-
classmethod