Testing & evaluation libraries for LLM applications, in particular RAGs:
Ragas: a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.
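As a rough sketch of how a Ragas evaluation typically looks (following its documented quickstart; exact metric names and required dataset columns vary between versions, and the toy question/answer/contexts data here is purely illustrative — note the metrics call an LLM judge, so an OpenAI API key is expected by default):

```python
# Minimal Ragas-style evaluation sketch (API as of ragas ~0.1.x; verify
# metric names and dataset columns against the version you install).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Toy RAG outputs; in practice these come from your own pipeline runs.
data = {
    "question": ["When was the first Moon landing?"],
    "answer": ["The first crewed Moon landing was Apollo 11 in 1969."],
    "contexts": [["Apollo 11 landed on the Moon on July 20, 1969."]],
}

dataset = Dataset.from_dict(data)

# Scores each sample with LLM-judged metrics; requires OPENAI_API_KEY by default.
result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric aggregate scores
```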
LangSmith: a unified platform from the LangChain team for evaluation, human-in-the-loop (HITL) collaboration, and the logging and monitoring of LLM applications.
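On the logging side, a minimal sketch of LangSmith's tracing decorator (assumes the langsmith package is installed and a LANGCHAIN_API_KEY is set in the environment so runs land in your project; retrieve and generate are hypothetical stand-ins for your own pipeline steps):

```python
# Minimal LangSmith tracing sketch; each decorated call is recorded as a run.
from langsmith import traceable

@traceable(name="rag_pipeline")  # logs inputs/outputs of this call to LangSmith
def rag_pipeline(question: str) -> str:
    contexts = retrieve(question)        # stub: your retriever
    return generate(question, contexts)  # stub: your LLM call

def retrieve(question: str) -> list[str]:
    return ["Apollo 11 landed on the Moon on July 20, 1969."]

def generate(question: str, contexts: list[str]) -> str:
    return f"Based on {len(contexts)} retrieved document(s): 1969."

print(rag_pipeline("When was the first Moon landing?"))
```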
MixEval: a reliable click-and-go evaluation suite compatible with both open-source and proprietary models, supporting MixEval and other benchmarks.
lm-evaluation-harness: a framework for few-shot evaluation of language models, maintained by EleutherAI.
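A sketch of the harness's high-level Python entry point (recent versions also ship an equivalent lm_eval CLI; the model and task chosen here are just illustrative):

```python
# Benchmark run via lm-evaluation-harness's Python API
# (pip install lm-eval; API per the 0.4.x releases).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                    # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["hellaswag"],
    num_fewshot=0,                                 # zero-shot here; raise for few-shot
)
print(results["results"]["hellaswag"])             # accuracy and related metrics
```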
OpenAI Evals: evaluation tools by OpenAI.
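Evals are driven by JSONL datasets of chat-format samples; a sketch of building one in the documented format (the file name and the concrete sample are illustrative):

```python
# Sketch of an OpenAI Evals dataset: each JSONL line holds chat-format
# "input" messages plus an "ideal" answer (format per the openai/evals README).
import json

samples = [
    {
        "input": [
            {"role": "system", "content": "Answer concisely."},
            {"role": "user", "content": "When was the first Moon landing?"},
        ],
        "ideal": "1969",
    },
]

with open("samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# A registered eval is then run from the CLI, e.g.:  oaieval gpt-3.5-turbo <eval-name>
```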
LightEval: a lightweight LLM evaluation suite that Hugging Face has been using internally.