
Ollama Client by Shishir Chaurasiya
Chat privately with your local Ollama LLM models in-browser. Fast, lightweight, and secure AI without cloud dependencies.
20 Users
Firefox is required to use this extension
Extension Metadata
About this extension
🧠 Ollama Client – Chat with Local LLMs Inside Your Browser
Ollama Client is a lightweight, privacy-first Firefox add-on that brings the power of locally hosted large language models (LLMs) directly to your browser. No cloud dependencies. No API keys. No data sent externally.
Just fast, secure, offline-first AI chat powered by open-source models like LLaMA 3, Mistral, Gemma, CodeLLaMA, and more — all running on your own machine using the Ollama backend.
✨ Works on all Chromium-based browsers (Chrome, Edge, Brave) and Firefox (with additional setup). 100% open-source.
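Under the hood, a client like this only needs Ollama's local REST API. As a rough illustration (not the extension's actual source), a single non-streaming chat turn against a local Ollama server on its default port could look like the TypeScript sketch below; the model name is just an example of a tag you might have pulled.

// Minimal sketch: one non-streaming chat turn against a local Ollama server.
// Assumes Ollama is running on its default port (11434) and llama3:8b is pulled.
async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3:8b", // any model you have pulled locally
      messages: [{ role: "user", content: prompt }],
      stream: false, // set true to receive incremental JSON lines instead
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}

chatOnce("Explain CORS in one sentence.").then(console.log);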
🚀 Key Features
🔌 Local Ollama Integration – Connect to a local Ollama server (no API keys)
💬 In-Browser Chat UI – Lightweight, minimal, fast
⚙️ Custom Settings – Control model parameters, themes, prompt templates
🔄 Model Switcher – Switch between models in real time
🔍 Model Search & Pull – Pull models directly in the UI (with progress indicator)
🗑️ Model Deletion with Confirmation – Clean up unused models from the UI
🧳 Load/Unload Models – Manage Ollama memory footprint efficiently
📦 Model Version Display – View and compare model versions easily
🎛️ Tune Parameters – Temperature, top_k, top_p, repeat penalty, stop sequences (see the sketch after this list)
🧠 Transcript & Page Summarization – Works with YouTube, Udemy, Coursera & web articles
🔊 TTS – Built-in Text-to-Speech via Web Speech API
🗂️ Multi-Chat Sessions – Save/load/delete local chats
🧯 Declarative Net Request (DNR) – Automatic CORS handling (v0.1.3)
🛡️ 100% Local and Private – All storage and inference happen on your device
📋 Copy & Regenerate – Quickly rerun or copy AI responses
🧭 Tab Access (Optional)
Want your LLM to understand the content of a page you're viewing? Enable Tab Access in the settings to fetch page content or transcripts for better contextual answers.
✔️ Fully opt-in
✔️ You choose which tabs to share
✔️ Customizable exclude list (regex supported; see the sketch below)
✔️ No tab data ever leaves your device
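The exclude list is a simple deny gate on the current tab's URL. As a purely illustrative sketch (the helper name and settings shape are assumptions, not the extension's API), matching a URL against user-supplied regex patterns could look like this:

// Illustrative only: decide whether a tab's URL may be shared with the model.
// excludePatterns would come from the user's settings; names are hypothetical.
function isTabShareable(url: string, excludePatterns: string[]): boolean {
  return !excludePatterns.some((pattern) => {
    try {
      return new RegExp(pattern).test(url);
    } catch {
      return false; // ignore invalid user regexes rather than blocking everything
    }
  });
}

isTabShareable("https://mail.example.com/inbox", ["^https://mail\\."]); // false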
⚙️ Installation & Setup
1️⃣ Install Ollama Client from the Chrome Web Store
2️⃣ Install Ollama on your machine from https://ollama.com and run:
ollama serve
3️⃣ Pull your favorite models (e.g., ollama pull llama3:8b or ollama pull gemma:2b) and start chatting! An optional verification sketch follows below.
Advanced users can customize themes, model parameters, prompt templates, and excluded URLs from the Options page.
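Before opening the extension, you can optionally confirm the server answers and see which models you have pulled. This sketch queries Ollama's standard tags endpoint:

// Sketch: confirm the local Ollama server is reachable and list pulled models.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name); // e.g. ["llama3:8b", "gemma:2b"]
}

listLocalModels().then((names) => console.log("Installed models:", names));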
🎯 Who Should Use Ollama Client?
👩‍💻 Developers building with or debugging LLMs
📚 Researchers who want local, private LLM interfaces
🎓 Students using AI as study aids on local hardware
🔐 Privacy advocates avoiding cloud AI and APIs
🤖 AI tinkerers and open-source model enthusiasts
⚡ Performance & Hardware Recommendations
💻 8 GB RAM (no GPU): gemma:2b, mistral:7b-q4
💻 16 GB RAM (no GPU): gemma:3b-q4, gemma:2b
🚀 16 GB+ with GPU (6 GB VRAM): llama3:8b-q4, gemma:3b
💥 32 GB+ or high-end GPU: llama3:8b, codellama:13b
🔥 RTX 3090+, Apple M3 Max: llama3:70b, mixtral
Note: Ollama Client is a frontend interface only. All LLM generation happens via your local Ollama install. Speed and output depend on your system.
🔗 Useful Links
🌐 Chrome Web Store: https://chromewebstore.google.com/detail/ollama-client/bfaoaaogfcgomkjfbmfepbiijmciinjl
📘 Setup Guide: https://shishir435.github.io/ollama-client/ollama-setup-guide
💻 Landing Page: https://shishir435.github.io/ollama-client/ollama-client
🧑‍💻 GitHub: https://github.com/Shishir435/ollama-client
🧳 Portfolio: https://www.shishirchaurasiya.in
🚀 Start chatting in seconds — private, fast, and fully local AI conversations on your own machine.
Built for developers, researchers, and anyone who values speed, privacy, and full control.
#ollama-client #opensource #offline #ollama-ui #ollamachat
Not yet rated (0 reviews)
Permissions and Data
Required permissions:
- Block content on any page
- Access browser tabs
- Access your data for all websites
More Information
- Version: 0.1.14
- Size: 1.07 MB
- Last updated: 17 hours ago (August 10, 2025)
- License: MIT License
- Privacy Policy: read this add-on's privacy policy
- Version History
Release Notes for 0.1.14
🚀 What’s New
Increased num_ctx from 2048 → 6144 for larger context handling (see the sketch below).
Improved prompt insertion with smart spacing for smoother UX.
https://github.com/Shishir435/ollama-client/compare/0.1.10...0.1.14
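For context, num_ctx is a per-request Ollama option, so raising it from 2048 to 6144 amounts to sending a larger value in the request's options object. The snippet below is illustrative of that mechanism, not the actual change in the release:

// Sketch: num_ctx is passed per request in Ollama's options object.
// 6144 matches the new value mentioned above; everything else is illustrative.
const requestBody = {
  model: "llama3:8b",
  prompt: "Summarize this transcript...",
  options: {
    num_ctx: 6144, // context window in tokens (was 2048 in earlier releases)
  },
};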