A toolkit for deploying and serving Large Language Models (LLMs).
A device-inference framework, including LLM inference on device (mobile phone/PC/IoT).
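To give a feel for on-device inference on a PC, here is a minimal sketch using the llama-cpp-python package with a locally downloaded GGUF model; the package, model path, and parameters are illustrative assumptions, not the API of the framework listed above.

```python
# On-device (CPU/PC) inference sketch using llama-cpp-python; an illustrative
# stand-in, not the listed framework's own API.
# Assumes a GGUF model file has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Name one benefit of running LLMs on device.\nA:", max_tokens=48)
print(out["choices"][0]["text"].strip())
```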
An open-source NLP framework that lets you use LLMs and transformer-based models from Hugging Face, OpenAI, and Cohere to interact with your own data.
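As a rough illustration of the "interact with your own data" pattern such frameworks implement, the sketch below retrieves the most relevant document and folds it into an LLM prompt; the embedding model, document list, and helper names are assumptions for illustration, not the framework's actual API.

```python
# Minimal retrieval-then-generate sketch of the "chat with your own data"
# pattern; illustrative only, not the framework's real API.
# Assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer, util

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

def build_prompt(question: str) -> str:
    """Retrieve the best-matching document and build the prompt for an LLM."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    best = int(util.cos_sim(q_emb, doc_embeddings).argmax())
    context = documents[best]
    # The resulting prompt would be sent to a Hugging Face, OpenAI, or Cohere model.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do I have to return an item?"))
```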
Blazingly fast LLM inference.
Use ChatGPT on WeChat via wechaty.
Fine-tune, serve, deploy, and monitor any open-source LLM in production. Used in production at BentoML for LLM-based applications.
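Many such serving toolkits expose an OpenAI-compatible HTTP endpoint once a model is deployed; the sketch below shows what querying a locally served open-source model might look like. The port, path, and model name are assumptions for illustration, not guaranteed defaults of this project.

```python
# Querying a locally served open-source LLM over an assumed OpenAI-compatible
# HTTP API; endpoint, port, and model id are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:3000/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",  # example model id
        "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```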
Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.
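For a sense of what prompt regression testing involves, the plain-Python sketch below runs two prompt variants against a small test set and checks expected substrings; the `complete()` stub stands in for a real LLM call and is an assumption of this sketch, not the tool's API.

```python
# Tiny illustration of prompt regression testing: run each prompt variant
# against test cases and check for an expected substring in the output.
def complete(prompt: str) -> str:
    # Placeholder for an actual LLM API call (illustrative stub).
    return "Paris is the capital of France."

prompts = {
    "v1": "Answer briefly: {question}",
    "v2": "You are a concise assistant. {question}",
}
cases = [{"question": "What is the capital of France?", "expect": "Paris"}]

for name, template in prompts.items():
    passed = 0
    for case in cases:
        output = complete(template.format(question=case["question"]))
        if case["expect"].lower() in output.lower():
            passed += 1
    print(f"prompt {name}: {passed}/{len(cases)} cases passed")
```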