Run LLMs entirely in the browser with a simple headless React hook, useLLM().


Live demo: http://chat.matt-rickard.com
GitHub: https://github.com/r2d4/react-llm

react-llm/headless lets you customize everything from the system prompt to the user/assistant role names. It manages a WebGPU-powered background worker.
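That customization surface can be pictured as a small options object that gets rendered into the prompt the model actually sees. The names below (`ChatOptions`, `buildPrompt`, the role fields) are illustrative assumptions, not react-llm's actual API:

```typescript
// Hypothetical shape of the options a headless chat hook might accept.
// These names are illustrative; check react-llm's docs for the real API.
interface ChatOptions {
  systemPrompt: string;
  userRoleName: string;
  assistantRoleName: string;
}

// Render a conversation into the plain-text prompt format many
// instruction-tuned models expect: system prompt, then role-tagged turns.
function buildPrompt(
  opts: ChatOptions,
  turns: { role: "user" | "assistant"; text: string }[]
): string {
  const lines = [opts.systemPrompt];
  for (const t of turns) {
    const name = t.role === "user" ? opts.userRoleName : opts.assistantRoleName;
    lines.push(`${name}: ${t.text}`);
  }
  // Leave the assistant tag open so the model completes the next turn.
  lines.push(`${opts.assistantRoleName}:`);
  return lines.join("\n");
}
```

Because the hook is headless, swapping the role names or system prompt changes only this prompt-construction step, not any UI.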

react-llm sets everything up for you — an off-the-main-thread worker that fetches the model from a CDN (Hugging Face), cross-compiles the WebAssembly components (like the tokenizer and model bindings), and manages the model state (attention KV cache, and more).
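The off-the-main-thread flow can be sketched as a message protocol between the page and the worker: the page posts requests, the worker streams back download progress and generated tokens. The message shapes and state machine below are an illustrative guess, not react-llm's real wire format:

```typescript
// Illustrative page <-> worker protocol. Not the library's actual types.
type ToWorker =
  | { kind: "init"; modelUrl: string }   // fetch model weights from the CDN
  | { kind: "generate"; prompt: string };

type FromWorker =
  | { kind: "progress"; loadedBytes: number; totalBytes: number }
  | { kind: "token"; text: string }      // one decoded token, streamed
  | { kind: "done" };

// Main-thread state machine that folds worker messages into UI state,
// so the page stays responsive while the worker does the heavy lifting.
interface UiState { status: "loading" | "ready" | "generating"; output: string }

function reduce(state: UiState, msg: FromWorker): UiState {
  switch (msg.kind) {
    case "progress":
      return { ...state, status: msg.loadedBytes < msg.totalBytes ? "loading" : "ready" };
    case "token":
      return { ...state, status: "generating", output: state.output + msg.text };
    case "done":
      return { ...state, status: "ready" };
  }
}
```

Keeping fetch, compilation, and inference behind `postMessage` like this is what lets multi-gigabyte model downloads and WebGPU inference run without blocking React renders.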

Everything runs client-side — the model is cached in the browser and inference happens there too. Conversations are stored in session storage.
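Persisting conversations client-side can be as simple as JSON round-tripping through the Storage API. A minimal sketch, using an in-memory stand-in for `sessionStorage` so it runs outside a browser (the key name and message shape are assumptions, not react-llm's internals):

```typescript
// Minimal Storage-like interface; in the page you would pass
// window.sessionStorage instead of the in-memory stand-in below.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface Message { role: "user" | "assistant"; text: string }

const KEY = "conversation"; // illustrative storage key

function saveConversation(store: StorageLike, msgs: Message[]): void {
  store.setItem(KEY, JSON.stringify(msgs));
}

function loadConversation(store: StorageLike): Message[] {
  const raw = store.getItem(KEY);
  return raw ? (JSON.parse(raw) as Message[]) : [];
}

// In-memory stand-in backed by a Map, for tests or server-side rendering.
function memoryStorage(): StorageLike {
  const m = new Map<string, string>();
  return {
    getItem: (k) => m.get(k) ?? null,
    setItem: (k, v) => { m.set(k, v); },
  };
}
```

Using session storage (rather than local storage) means a conversation survives reloads within a tab but is dropped when the tab closes.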

Under the hood, it’s powered by Apache TVM Unity and MLC.
