Use ChatGPT on WeChat via Wechaty.
A device-inference framework, including LLM inference on device (mobile phone, PC, IoT).
Interact with LLMs using Ollama models (or OpenAI, Mistral AI) via pure shell scripts on your Linux (or macOS) system, enabling intelligent system management without any dependencies.
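The project itself works through shell scripts; as a rough illustration of the same idea, the sketch below queries a locally running Ollama server over its REST API from Python. It assumes the default endpoint at localhost:11434 and that a model (here "llama3", an assumption) has already been pulled.

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama listens on the default port 11434 and that the model named
# below (an assumption) has already been pulled with `ollama pull`.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Summarize the output of `df -h` in one sentence."))
```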
A chat interface crafted with llama.cpp for running Alpaca models. No API keys, entirely self-hosted!
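For a sense of what a llama.cpp-backed chat loop looks like, here is a minimal sketch using the llama-cpp-python bindings (an assumption; the project may wire up llama.cpp differently). The GGUF model path is a placeholder.

```python
# Rough sketch of a local, key-free chat loop on top of llama.cpp via the
# llama-cpp-python bindings. The model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b.Q4_K_M.gguf", n_ctx=2048)

history = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user = input("you> ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history)["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```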
Build your own conversational search engine in fewer than 500 lines of code, by Lepton AI.
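The pattern behind such conversational search engines is retrieve-then-summarize: fetch web snippets, then ask an LLM to answer with numbered citations. The sketch below only outlines that pattern; search_web() is a placeholder for a real search-API call, and the resulting prompt can be sent to any chat-completion endpoint.

```python
# Sketch of the retrieve-then-summarize pattern: gather web snippets, then
# prompt an LLM to answer with numbered citations. search_web() is a
# placeholder stub, not the project's actual search integration.
def search_web(query: str) -> list[dict]:
    # Placeholder: replace with a real search-API call returning title/url/snippet.
    return [{"title": "Example result", "url": "https://example.com", "snippet": "..."}]

def build_prompt(query: str, results: list[dict]) -> str:
    context = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}" for i, r in enumerate(results)
    )
    return (
        "Answer the question using only the sources below and cite them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_prompt(
    "What is retrieval-augmented generation?",
    search_web("retrieval-augmented generation"),
)
# Send `prompt` to any chat-completion endpoint to produce the cited answer.
```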
Confidently evaluate, test, and ship LLM applications with a suite of observability tools to calibrate language model outputs across your dev and production lifecycle.
Simplifies the evaluation of LLMs by providing a unified microservice to access and test multiple AI models.