Eval tools by OpenAI.
A framework for few-shot evaluation of language models.
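This tagline matches EleutherAI's lm-evaluation-harness; assuming that is the tool meant here, a minimal run might look like the sketch below. The model id and task are placeholders, not recommendations.

```python
# Minimal sketch, assuming EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The model id and task below are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face backend
    model_args="pretrained=gpt2",  # any HF model id
    tasks=["hellaswag"],           # benchmark task(s) from the registry
    num_fewshot=5,                 # few-shot examples per prompt
)
print(results["results"])          # per-task metric dictionary
```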
A unified platform from the LangChain team for evaluation, collaboration, human-in-the-loop (HITL) review, and logging and monitoring of LLM applications.
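If this describes LangSmith, its `langsmith` SDK logs calls through a `traceable` decorator. The sketch below is a hedged illustration with a stubbed model call, not a definitive integration.

```python
# Minimal sketch, assuming LangSmith's Python SDK (pip install langsmith).
# Requires LANGSMITH_API_KEY in the environment; the summarize stub is made up.
from langsmith import traceable

@traceable(name="summarize")  # records inputs, outputs, and latency as a trace
def summarize(text: str) -> str:
    # replace this stub with a real LLM call in your application
    return text[:100]

summarize("LangSmith logs this call so it can be inspected and evaluated later.")
```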
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
A framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.
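This description matches the Ragas library; assuming that is the tool meant here, a small evaluation over a toy record might look like the sketch below. The record is invented, and the metrics are LLM-judged, so an OPENAI_API_KEY is expected by default.

```python
# Minimal sketch, assuming the Ragas library (pip install ragas datasets).
# The sample record is invented; metrics are LLM-judged, so an
# OPENAI_API_KEY is expected in the environment by default.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

data = Dataset.from_dict({
    "question": ["What does this framework evaluate?"],
    "answer":   ["It scores RAG pipelines on faithfulness and relevancy."],
    "contexts": [["The framework evaluates Retrieval Augmented Generation pipelines."]],
})

scores = evaluate(data, metrics=[faithfulness, answer_relevancy])
print(scores)  # per-metric scores, e.g. faithfulness and answer_relevancy
```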
A testing and evaluation library for LLM applications, in particular RAG pipelines.
A reliable click-and-go evaluation suite compatible with both open-source and proprietary models, supporting MixEval and other benchmarks.