Often it's useful to spawn tasks for each dependency result yielded. I got the idea from coala/coala-bears#1871:
we need a "link status bear" or "link bear" which extracts all links from a file and stores information about those links in a reusable manner.
In that case we could parallelize link-wise using a bear dependency. The general idea was born from that: spawn a new task for each dependency result, instead of for each file.
I'm not sure exactly what the task generation should look like; there are multiple possibilities (because `self.dependency_results` is a dict):
- Pass each dependency result individually to the analyze function:
  `def analyze(self, bear, dependency_result)`
- Provide all dependency results of each bear at once:
  `def analyze(self, bear, dependency_results)`
- Configurable behaviour allowing all of these (either using new bear classes or allowing to pass some settings).
- ...
Currently I'm favoring the first one.
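A minimal sketch of what the first option's task generation could look like. Note that `DependencyBear`, `generate_tasks`, and `analyze` here are assumptions following the wording of this issue, not coala's actual API:

```python
class DependencyBear:
    """Hypothetical bear that spawns one task per dependency result.

    ``dependency_results`` is assumed to be a dict mapping each
    dependency bear to the list of results it yielded.
    """

    def __init__(self, dependency_results):
        self.dependency_results = dependency_results

    def generate_tasks(self):
        # One (args, kwargs) task per (bear, result) pair,
        # instead of one task per file.
        return [((bear, result), {})
                for bear, results in self.dependency_results.items()
                for result in results]

    def analyze(self, bear, dependency_result):
        # Each task receives exactly one dependency result.
        pass
```

For example, two dependency bears yielding three results in total would produce three independent tasks, which an executor could then schedule in parallel.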
The name of DependencyBear might change :)