Everything you need to know to work with your personal AI — running 24/7 on your Mac Mini, ready whenever you are.
💬 Open Vishnu in Telegram
Vishnu is a dedicated AI assistant built on Claude by Anthropic — set up specifically for you, running on your Mac Mini around the clock.
Uses Anthropic's Claude — one of the world's most capable AI models — as its brain for reasoning, coding, and conversation.
Chat with Vishnu through @Vishnuexu_bot in DMs or any group chat. Send text, images, files, or voice messages.
Vishnu runs 24/7 on your Mac Mini via Clawdbot. Your data, your machine, always available.
Built-in memory system keeps context across sessions. Vishnu picks up right where you left off.
No setup, no special syntax — just open Telegram and talk.
On your phone, tablet, or desktop — Telegram works everywhere.
Search for the bot or tap the button above. Start a DM conversation — or add Vishnu to any group chat.
No special commands needed. Describe what you need in plain English. Vishnu understands context, nuance, and follow-ups.
Text, images, files, voice messages — Vishnu can process them all. Share screenshots for debugging, send documents for review, or just talk.
In group chats, Vishnu sees all messages and responds naturally — no need to @mention. Use different groups to keep separate projects organized.
Open @Vishnuexu_bot →
When you give Vishnu a software project, it follows a rigorous 8-step workflow to deliver quality results — no cutting corners.
1. Requirements. Vishnu starts every project with a detailed Product Requirements Document. It asks clarifying questions until the requirements are crystal clear — no assumptions.
2. Planning. Breaks the work into Epics → Milestones → Tasks. Each task touches ≤3 files and ≤200 lines — small, focused, manageable chunks (see the sketch after this list).
3. Design review. Reviews the design and technical approach before writing a single line of code. Catches structural issues early.
4. Implementation. Codes in dependency order, building the foundation first. Uses test-driven development for critical flows.
5. Testing. Provides step-by-step testing instructions so you can verify everything works as expected.
6. Debugging. When issues arise: reproduce → write a failing test → fix → verify → repeat. Systematic and thorough.
7. Review & commit. Reviews its own code for quality, then commits and pushes to your repository with clear commit messages.
8. Documentation. Keeps all planning documents in sync — progress tracked, tasks marked complete, docs updated.
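To make the structure concrete, here is a hypothetical sketch of how a plan might be broken down. All names and numbers are illustrative, not Vishnu's actual internal format:

```python
# Hypothetical sketch of a plan's shape (illustrative names only;
# this is not Vishnu's actual internal format).

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    files: list[str]        # a task touches at most 3 files...
    estimated_lines: int    # ...and at most ~200 lines

@dataclass
class Milestone:
    name: str
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Epic:
    name: str
    milestones: list[Milestone] = field(default_factory=list)

def within_limits(task: Task) -> bool:
    """The small-chunk rule: no more than 3 files or 200 lines per task."""
    return len(task.files) <= 3 and task.estimated_lines <= 200

plan = Epic(
    name="Customer booking website",
    milestones=[Milestone(
        name="Appointment calendar",
        tasks=[
            Task("Calendar view component", ["calendar.tsx"], 150),
            Task("Booking API endpoint", ["api/bookings.ts", "db/schema.ts"], 120),
        ],
    )],
)
assert all(within_limits(t) for m in plan.milestones for t in m.tasks)
```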
You don't need a technical spec. Just describe what you want and let Vishnu handle the rest.
Use plain English. Be as detailed or vague as you want — "I need a booking website for my business" is a perfectly fine start.
Vishnu will ask smart questions to nail down the details. These questions dramatically improve the final result — don't skip them!
Once aligned, Vishnu produces a requirements doc and a detailed plan. Approve it (or request changes) before coding begins.
Vishnu works through the plan step by step, keeping you updated on progress. Test, give feedback, iterate.
A few habits that'll make your experience dramatically better.
Be specific. "I want a website that lets customers book appointments with a calendar view" beats "make me something cool."
Answer the questions. Vishnu's clarifying questions aren't busywork — they're the difference between good and great output.
Be specific about bugs. "The button on the checkout page doesn't redirect" is much better than "it's broken."
Use groups for projects. Create separate Telegram groups for different projects. Each gets its own isolated memory space.
Context carries over. Vishnu remembers day-to-day, but may need reminders for older details. Just say "remember when we talked about X?"
Use /reset when stuck. If Vishnu seems confused or stuck in a loop, /reset starts a fresh session without losing long-term memory.
Vishnu has a built-in memory system that keeps context alive across conversations.
Session context. Keeps track of the current conversation. Cleared with /reset.
Long-term memory. Persistent notes written to files that survive across sessions and restarts (sketched below).
Per-chat isolation. Each group chat has its own separate memory space. Projects stay organized.
Self-maintenance. Vishnu periodically reviews and updates its own memory to stay sharp.
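As a rough mental model (not the actual implementation — the paths and format here are assumptions), the persistent layer can be pictured as plain note files kept separately per chat:

```python
# Rough mental model of per-chat persistent memory. Illustrative only:
# the directory layout and file format are assumptions.

from pathlib import Path
from datetime import datetime

MEMORY_ROOT = Path.home() / "vishnu-memory"    # hypothetical location

def remember(chat_id: str, note: str) -> None:
    """Append a dated note to this chat's own memory file."""
    chat_dir = MEMORY_ROOT / chat_id           # each group chat is isolated
    chat_dir.mkdir(parents=True, exist_ok=True)
    with open(chat_dir / "notes.md", "a") as f:
        f.write(f"- [{datetime.now():%Y-%m-%d}] {note}\n")

remember("project-alpha", "Client prefers a dark-blue color scheme.")
```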
Most of the time you'll just chat naturally, but these come in handy.
| Command | What it does |
|---|---|
| /reset | Start a fresh session. Clears current conversation context but keeps long-term memory intact. |
| /status | Check if Vishnu is running and get a quick status report. |
| Just chat | For everything else — describe what you need in natural language. No commands required. |
A clear picture of what's possible right now.
It's not a bug — it's how the AI service manages traffic. Here's what's happening and how to fix it.
When you type a message — even something simple like "test 123" — Vishnu doesn't just send those 3 words to the AI. It packages up everything the AI needs to understand who it is and what you've been talking about: its system prompt (instructions and personality), its memory files, and the recent conversation history, all alongside your new message.
Think of it like calling a new assistant every time — you have to re-explain who they are, what they're working on, and read back the entire conversation before asking your question. That's what those tokens are.
Anthropic (the company behind Claude, Vishnu's AI brain) limits how many tokens you can send per minute. On the starter tier, that limit is 30,000 tokens per minute.
Do the math:
💬 One message in Group Chat 1 → ~25,000 tokens
💬 One message in Group Chat 2 → ~25,000 tokens
Total: ~50,000 tokens — already over the 30,000/min limit! 🚫
This is why even two simple messages sent close together across different chats can trigger the error. It's not about how long your message is — it's about the total context package hitting the API.
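A rough back-of-the-envelope version of that math, with illustrative per-part numbers:

```python
# Back-of-the-envelope token math (the per-part numbers are illustrative).

SYSTEM_PROMPT = 12_000      # who Vishnu is, how it should behave
MEMORY_FILES  = 8_000       # long-term notes loaded for this chat
HISTORY       = 5_000       # recent conversation replayed to the model
YOUR_MESSAGE  = 10          # "test 123"

per_message = SYSTEM_PROMPT + MEMORY_FILES + HISTORY + YOUR_MESSAGE
print(per_message)           # ~25,000 tokens for ONE short message

TIER_1_LIMIT = 30_000        # tokens per minute on the starter tier
two_chats = 2 * per_message  # one message in each of two group chats
print(two_chats > TIER_1_LIMIT)  # True: ~50,000 > 30,000 -> rate limit error
```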
Your account has been upgraded to Tier 2 — rate limits are now 15x higher than the starter tier. Here's the full tier breakdown:
| Tier | Deposit | Tokens/min | Requests/min |
|---|---|---|---|
| Tier 1 | $5 | 30,000 | 50 |
| Tier 2 ✅ (you're here) | $40 total | 450,000 | 1,000 |
| Tier 3 | $200 total | 800,000 | 2,000 |
| Tier 4 | $400 total | 1,600,000 | 4,000 |
🎯 Your Tier 2 status:
Your $60 deposit has been applied and your account is active on Tier 2. This gives Vishnu 450,000 tokens/min and 1,000 requests/min — more than enough for simultaneous group chats and heavy usage.
Note: Your deposit is credit for future API usage — not a fee. You'll spend it as Vishnu processes messages. At typical usage, this balance lasts weeks to months. You can check your balance at console.anthropic.com/settings/billing.
Vishnu is configured with a multi-provider fallback chain spanning three different AI companies. Even if an entire provider goes down, Vishnu keeps working:
1. Claude Opus (primary). The most powerful Claude model — exceptional reasoning, coding, and conversation. Handles all your messages by default.
2. Claude Sonnet. Fast, highly capable model with higher rate limits. Still Anthropic — you'll barely notice the switch.
3. Google Gemini. Completely different provider. If all of Anthropic is down, Vishnu switches to Google's AI — independent infrastructure, unaffected by Anthropic outages.
4. OpenAI. Third independent provider. Even if Anthropic AND Google are both having issues, OpenAI keeps Vishnu running.
Once the short cooldown expires (~3 minutes), Vishnu tries the primary model again. No action needed on your end.
When a model hits a rate limit, Vishnu drops to the next model in the chain and puts the limited model on a ~3 minute cooldown. After the cooldown, it automatically retries the preferred model. Recovery is fast — minutes, not hours. This all happens transparently behind the scenes.
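In pseudocode terms, the fallback logic looks roughly like this (a sketch under assumed names, not Clawdbot's actual code):

```python
# Sketch of rate-limit fallback with a cooldown. Illustrative only;
# model names and structure are assumptions, not the actual implementation.

import time

CHAIN = ["claude-opus", "claude-sonnet", "gemini", "openai"]  # preferred first
COOLDOWN_SECONDS = 3 * 60
cooldown_until: dict[str, float] = {}  # model -> time it becomes usable again

def pick_model() -> str:
    """Return the first model in the chain that is not cooling down."""
    now = time.time()
    for model in CHAIN:
        if cooldown_until.get(model, 0) <= now:
            return model
    return CHAIN[-1]  # everything cooling down: use the last resort anyway

def on_rate_limited(model: str) -> None:
    """A 429 puts the model on cooldown; the next call picks the next one."""
    cooldown_until[model] = time.time() + COOLDOWN_SECONDS
```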
Anthropic automatically caches Vishnu's system prompt across calls within the same session. Cached tokens don't count toward rate limits — this is why the first message may take ~55 seconds, but subsequent ones are much faster. This is already active and working behind the scenes.
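For the curious, this is roughly what marking a system prompt as cacheable looks like in Anthropic's Messages API; the model name and prompt text here are placeholders:

```python
# Sketch of Anthropic prompt caching: the cache_control marker tells the
# API to cache the large, unchanging system prompt between calls.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",    # placeholder model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are Vishnu... (long, stable instructions)",
            "cache_control": {"type": "ephemeral"},  # cache this block
        }
    ],
    messages=[{"role": "user", "content": "test 123"}],
)

# usage reports how much was written to / read from the cache
print(response.usage.cache_creation_input_tokens,
      response.usage.cache_read_input_tokens)
```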
Bottom line: With Tier 2 active and a 4-model, 3-provider fallback chain, Vishnu is essentially bulletproof. Even if an entire AI provider has an outage, Vishnu seamlessly falls back to a different company's AI and keeps working.
On top of the model fallback chain, Vishnu has an intelligent monitoring system that watches for problems in real-time and automatically activates additional protections when needed — no human intervention required.
A background monitor runs every 30 seconds, checking three things:
Rate limit errors. Watches for 429 responses from the AI API.
Concurrency. Monitors how many requests are running at once.
System load. Checks CPU usage to detect slowdowns.
If 3+ rate limit errors, 12+ simultaneous requests, or high CPU is detected, the smart queue turns on automatically.
DMs get highest priority, then group chats, then background tasks. Your direct messages always go first.
If the wait is under 30 seconds, you won't even notice — it just looks like normal processing time.
After 5 minutes of stable conditions (no errors, normal load), the queue automatically disables itself.
You don't need to do anything. The system manages itself — it only activates when there's a real problem, handles it transparently, and turns itself off when things are back to normal. During high activity periods (multiple group chats going at once), it smoothly queues requests instead of failing with errors.
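Conceptually, the smart queue behaves like a priority queue. A simplified sketch, with thresholds taken from the description above and the CPU cutoff assumed:

```python
# Simplified sketch of the smart queue (illustrative, not the actual code).

import heapq

PRIORITY = {"dm": 0, "group": 1, "background": 2}  # lower number goes first

queue: list[tuple[int, int, str]] = []  # (priority, arrival order, message)
counter = 0

def enqueue(kind: str, message: str) -> None:
    """DMs jump ahead of group chats, which jump ahead of background tasks."""
    global counter
    heapq.heappush(queue, (PRIORITY[kind], counter, message))
    counter += 1

def should_activate(rate_limit_errors: int, in_flight: int, cpu: float) -> bool:
    """Turns on at 3+ 429s, 12+ simultaneous requests, or high CPU.
    The 90% CPU cutoff is an assumption; the doc just says 'high CPU'."""
    return rate_limit_errors >= 3 or in_flight >= 12 or cpu > 0.9

enqueue("background", "refresh memory index")
enqueue("group", "project question")
enqueue("dm", "direct message")
print([heapq.heappop(queue)[2] for _ in range(3)])
# -> ['direct message', 'project question', 'refresh memory index']
```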
This page documents all improvements, fixes, and enhancements made to your AI assistant.
Optimized model fallback behavior to prevent retry loops while maintaining full functionality.
→ View detailed report
Complete system configuration with Tier 2 Anthropic account and full feature set.
→ View detailed report
This log will be updated as new work is performed on your system.
Quick reference for known issues and their solutions.
When referencing AI models in configuration, always use hyphens between version numbers — not dots.
❌ claude-opus-4.5
✅ claude-opus-4-5
Dots in model names cause API routing errors. This applies to all model references (e.g., claude-sonnet-4, claude-opus-4-5).
If you're on Node.js v25 (or later), ClawdHub may fail with a fetch-related error. The fix is to install the undici package in the ClawdHub directory (`npm install undici`).
This resolves a compatibility issue where Node v25 changed its built-in fetch implementation.
When Vishnu needs to browse the web, it should use the managed browser (Clawd's built-in browser) rather than the Chrome extension relay. This is already configured, but if you see browser-related errors:
profile="chrome"
profile="clawd"
The managed browser runs headlessly on the Mac Mini — no Chrome window or extension needed. It's more reliable for automated tasks.
Quick answers to common questions.
If you use /reset, it only clears the current session — long-term memory stays intact. So whenever Vishnu seems confused or stuck, just /reset to start fresh; your long-term project context is preserved.