Three Ways of Using Large Language Models to Evaluate Chat
This paper describes the systems submitted by team6 for ChatEval, the DSTC 11 Track 4 competition.
Tags: Paper, LLMs, Chatbot
- Pricing Type: Free
GitHub Link
The GitHub link is https://github.com/oplatek/chateval-llm
Introduction
Title: GitHub – oplatek/chateval-llm: Enhancing Chat Evaluation Using Large Language Models
Summary: The GitHub repository "oplatek/chateval-llm" contains the system description of the DSTC11 Track 4 submission, focusing on three different approaches to leveraging Large Language Models for improved chat evaluation. The repository provides insight into how these methods can enhance the assessment of conversational agents.
Three Ways of Using Large Language Models to Evaluate Chat: a system description of the DSTC11 Track 4 submission.
Content
A system description of the submission by team6 for ChatEval, the DSTC11 Track 4 competition.
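The repository's theme is using LLMs as judges of chatbot responses. The paper's three specific methods are not detailed on this page, so as a generic illustration only, here is a minimal sketch of the common prompting-based pattern: format a dialogue into a rating prompt for an LLM judge, then parse a numeric score from its free-text reply. All function names are hypothetical, and the model reply is stubbed in place of a real API call.

```python
import re

def build_eval_prompt(context, response, dimension="appropriateness"):
    """Format a dialogue turn into a 1-5 scoring prompt for an LLM judge."""
    turns = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in context)
    return (
        f"Rate the {dimension} of the final response on a scale of 1-5.\n"
        f"Dialogue:\n{turns}\n"
        f"Response: {response}\n"
        "Answer with a single number."
    )

def parse_score(llm_reply):
    """Extract the first in-range digit from the model's reply; None if absent."""
    match = re.search(r"\b([1-5])\b", llm_reply)
    return int(match.group(1)) if match else None

# Stubbed example; a real evaluator would send `prompt` to an LLM API here.
prompt = build_eval_prompt(
    [("User", "Any plans for the weekend?")],
    "I'm going hiking if the weather holds.",
)
score = parse_score("4 - the response is on-topic and natural.")
```

Parsing a constrained numeric answer rather than free-form text keeps the metric comparable across dialogues, which is why judge prompts typically end with an instruction like "Answer with a single number."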