
Chain of Draft approach allows AI models to carry out tasks using far fewer resources

Comparison of Claude 3.5 Sonnet’s accuracy and token usage across different tasks with three different prompt strategies: direct answer (Standard), Chain of Thought (CoT), and Chain of Draft (CoD). Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.18600

A small team of AI engineers at Zoom Communications has developed a new approach to prompting AI systems that uses far fewer resources than the standard approach now in use. The team has published its results on the arXiv preprint server.

The new approach developed at Zoom, called Chain of Draft (CoD), is a refinement of the widely used Chain of Thought (CoT) prompting strategy. CoT has a model work through a problem step by step, similar in many ways to human problem-solving. The research team noted that CoT tends to generate more steps, and more text per step, than are needed to solve a problem, and found a way to reduce both.

Humans do not usually write down every step involved in solving a problem, because some steps are treated as basic knowledge. Instead, they skip over or combine them, leaving only a short draft of the essential steps.

That, the researchers suggest, is the essence of CoD. In practice, they accomplished it by instructing the model to keep each reasoning step to a maximum of five words. This forced the model to be more concise while still recording how the problem was solved.
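The difference between the three prompting strategies can be sketched as plain system prompts. The wording below paraphrases the prompts described in the paper; the canonical versions are in the authors' GitHub repository, and the chat-message format shown is the generic style accepted by most LLM APIs, not a specific vendor's client.

```python
# Illustrative system prompts for the three strategies compared in the paper.
# Wording is paraphrased from the Chain of Draft preprint.

STANDARD = (
    "Answer the question directly. "
    "Do not return any preamble, explanation, or reasoning."
)

CHAIN_OF_THOUGHT = (
    "Think step by step to answer the question. "
    "Return the answer at the end of the response after a separator ####."
)

CHAIN_OF_DRAFT = (
    "Think step by step, but only keep a minimum draft for each "
    "thinking step, with 5 words at most. "
    "Return the answer at the end of the response after a separator ####."
)

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Package a system prompt and user question in the chat format
    accepted by most LLM APIs (OpenAI- or Anthropic-style clients)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_messages(CHAIN_OF_DRAFT, "A juggler has 16 balls...")
```

Note that only the system prompt changes between strategies; the underlying model is untouched, which is why the authors describe switching from CoT to CoD as low-effort.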

To test their ideas, the researchers prompted AI models, such as Claude 3.5 Sonnet, to use CoD instead of CoT. They found that the number of tokens needed to solve a problem was greatly reduced.

In one line of sports-related questions, for example, they found the average number of tokens used per response fell from 189.4 to just 14.3, even as accuracy improved from 93.2% to 97.3%. Their approach allowed LLMs to provide answers using fewer words—in some cases, using just 7.6% of the tokens consumed by traditional models using CoT—while also improving accuracy.
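The headline 7.6% figure follows directly from the token counts quoted above, as this back-of-the-envelope check shows:

```python
# Verify the reported savings on the sports-understanding benchmark:
# average tokens per response drop from 189.4 (CoT) to 14.3 (CoD).
cot_tokens = 189.4
cod_tokens = 14.3

fraction = cod_tokens / cot_tokens
print(f"CoD uses {fraction:.1%} of CoT's tokens")  # CoD uses 7.6% of CoT's tokens
```

Since most commercial LLM APIs bill per token, a reduction of this size translates almost directly into lower cost and latency.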

Using CoD instead of CoT on many tasks, such as math, coding, or other logic problems, could consume far fewer computational resources, which in turn would reduce both processing time and associated costs. The team claims that organizations using AI applications based on CoT could switch to CoD with minimal effort.

The code and data for its use have been posted on GitHub.

More information: Silei Xu et al, Chain of Draft: Thinking Faster by Writing Less, arXiv (2025). DOI: 10.48550/arxiv.2502.18600

Code and data: github.com/sileix/chain-of-draft

Journal information: arXiv

© 2025 Science X Network

Citation: Chain of Draft approach allows AI models to carry out tasks using far fewer resources (2025, March 4) retrieved 5 March 2025 from https://techxplore.com/news/2025-03-chain-approach-ai-tasks-resources.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
