TL;DR: DyFlow introduces a two-level Designer–Executor architecture with dynamic operators that adaptively re-plan subgoals during execution based on intermediate feedback. This enables more generalizable and robust reasoning across diverse domains and tasks.
- Execution-adaptive workflows: Dynamically adjust reasoning processes and subgoals according to intermediate feedback
- Two core components:
  - Designer — performs high-level task decomposition and planning
  - Executor — carries out low-level execution and tool invocation
- Cross-domain evaluation: Demonstrated effectiveness across multiple domains
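To make the two-level loop concrete, here is a hypothetical, self-contained sketch of the Designer–Executor interaction (all function names and the toy subgoals are illustrative, not DyFlow's actual API): the Designer proposes subgoals, the Executor runs each one, and intermediate feedback triggers re-planning of the remaining subgoals.

```python
# Hypothetical sketch of a Designer-Executor loop with dynamic re-planning.
# Names and subgoals are illustrative only, not DyFlow's real interfaces.

def designer_plan(task, feedback=None):
    """Decompose a task into subgoals; re-plan when feedback arrives."""
    if feedback is None:
        return [f"{task}: step {i}" for i in range(1, 4)]
    # Replace the remaining plan based on what the Executor reported.
    return [f"{task}: recovery step"]

def executor_run(subgoal):
    """Execute one subgoal; return (result, feedback-or-None)."""
    if "step 2" in subgoal:
        return None, "step 2 failed"   # intermediate feedback for the Designer
    return f"done({subgoal})", None

def run_workflow(task):
    plan, results = designer_plan(task), []
    while plan:
        result, feedback = executor_run(plan.pop(0))
        if feedback is not None:
            plan = designer_plan(task, feedback)  # dynamic re-planning
        else:
            results.append(result)
    return results

print(run_workflow("solve"))
```

The key design point illustrated here is that the plan is a mutable object: feedback from a failed subgoal replaces the remaining subgoals rather than aborting the whole workflow.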
```bash
git clone https://github.com/wyf23187/DyFlow.git
cd DyFlow
pip install -r requirements.txt
```

Create a `.env` file with your API keys:
```
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
DEEPINFRA_API_KEY=your_deepinfra_key
```

DyPlanner uses a locally deployed model via vLLM. First, deploy the DyPlanner model:
```bash
# Download and deploy the DyPlanner model from Hugging Face
# Model: https://huggingface.co/wyf23187/DyPlanner
vllm serve wyf23187/DyPlanner \
  --port 8000
```

`ModelService.local()` will automatically connect to this vLLM endpoint at `http://localhost:8000` to get responses from DyPlanner.
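Because vLLM exposes an OpenAI-compatible API, the local endpoint can also be queried directly. The sketch below uses only the standard library; the prompt and parameter choices are illustrative assumptions, not DyFlow's actual client code.

```python
# Minimal sketch of querying the locally served DyPlanner through vLLM's
# OpenAI-compatible chat endpoint (run after `vllm serve` is up).
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_payload(prompt, model="wyf23187/DyPlanner", temperature=0.0):
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def query_dyplanner(prompt):
    """POST the prompt to the local vLLM server and return the reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server to be running):
#   query_dyplanner("Decompose: prove that 17 is prime.")
```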
For basic usage and benchmark evaluation examples, please refer to:
- `scripts/run_workflow.py` - single-problem workflow execution
- `scripts/run_dataset.py` - batch benchmark evaluation
Available benchmarks: HumanEval, MATH, LiveBench, SocialMaze, PubMedQA
For generating training data from DyFlow execution traces, see train/.
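One natural way to derive training pairs from execution traces is to treat each planning step as a (context, next-plan) example. The sketch below is a hypothetical illustration of that idea; the trace schema and field names are assumptions, not the repo's actual `train/` format.

```python
# Hypothetical illustration of turning a DyFlow-style execution trace into
# JSONL training pairs (prompt = task + feedback so far, target = next plan).
# The trace schema here is an assumption, not the repo's actual format.
import json

def trace_to_examples(trace):
    """Emit one (prompt, target) pair per planning step in the trace."""
    examples = []
    for step in trace["steps"]:
        prompt = {"task": trace["task"], "feedback": step["feedback"]}
        examples.append({"prompt": json.dumps(prompt), "target": step["plan"]})
    return examples

trace = {
    "task": "solve equation",
    "steps": [
        {"feedback": None, "plan": "isolate x"},
        {"feedback": "division by zero", "plan": "check x != 0 first"},
    ],
}
for ex in trace_to_examples(trace):
    print(json.dumps(ex))
```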
If you find our work useful, please cite:
```bibtex
@inproceedings{wang2025dyflow,
  title={DyFlow: Dynamic Workflow Framework for Agentic Reasoning},
  author={Wang, Yanbo and Xu, Zixiang and Huang, Yue and Wang, Xiangqi and Song, Zirui and Gao, Lang and Wang, Chenxi and Tang, Xiangru and Zhao, Yue and Cohan, Arman and others},
  booktitle={Advances in Neural Information Processing Systems},
  year={2025}
}
```