Easily build, version, evaluate and deploy your LLM-powered apps.
A method designed to enhance the efficiency of Transformer models.
Formerly langchain-ChatGLM: a local knowledge-base question-answering app for LLMs (such as ChatGLM) built with LangChain.
Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.
An on-device inference framework, including LLM inference on devices (mobile phone/PC/IoT).
Run LLMs and batch jobs on any cloud. Get maximum cost savings, highest GPU availability, and managed execution -- all with a simple interface.
A playground for devs to fine-tune and deploy LLMs.