Getting Started Guide

Your AI Assistant: Vishnu

Everything you need to know to work with your personal AI — running 24/7 on your Mac Mini, ready whenever you are.

💬 Open Vishnu in Telegram

Your personal AI, always on

Vishnu is a dedicated AI assistant built on Claude by Anthropic — set up specifically for you, running on your Mac Mini around the clock.

🧠

Powered by Claude

Uses Anthropic's Claude — one of the world's most capable AI models — as its brain for reasoning, coding, and conversation.

💬

Lives in Telegram

Chat with Vishnu through @Vishnuexu_bot in DMs or any group chat. Send text, images, files, or voice messages.

🏠

Runs on Your Hardware

Vishnu runs 24/7 on your Mac Mini via Clawdbot. Your data, your machine, always available.

🔄

Remembers Everything

Built-in memory system keeps context across sessions. Vishnu picks up right where you left off.

Start chatting in 30 seconds

No setup, no special syntax — just open Telegram and talk.

  1. Open Telegram

    On your phone, tablet, or desktop — Telegram works everywhere.

  2. Find @Vishnuexu_bot

    Search for the bot or tap the button above. Start a DM conversation — or add Vishnu to any group chat.

  3. Just type naturally

    No special commands needed. Describe what you need in plain English. Vishnu understands context, nuance, and follow-ups.

  4. Send anything

    Text, images, files, voice messages — Vishnu can process them all. Share screenshots for debugging, send documents for review, or just talk.

🚀

Quick Start

In group chats, Vishnu sees all messages and responds naturally — no need to @mention. Use different groups to keep separate projects organized.

Open @Vishnuexu_bot →

Vishnu's superpower: structured development

When you give Vishnu a software project, it follows a rigorous 8-step workflow to deliver quality results — no cutting corners.

Step 1

📋 PRD First

Vishnu starts every project with a detailed Product Requirements Document. It asks clarifying questions until the requirements are crystal clear — no assumptions.

Step 2

🧩 Plan Decomposition

Breaks the work into Epics → Milestones → Tasks. Each task touches ≤3 files and ≤200 lines — small, focused, manageable chunks.

Step 3

🏗️ Architecture Review

Reviews the design and technical approach before writing a single line of code. Catches structural issues early.

Step 4

⚡ Implementation

Codes in dependency order, building the foundation first. Uses test-driven development for critical flows.

Step 5

🧪 Manual Test Walkthrough

Provides step-by-step testing instructions so you can verify everything works as expected.

Step 6

🔄 Debug Loop

When issues arise: reproduce → write failing test → fix → verify → repeat. Systematic and thorough.

Step 7

✅ Code Review + Commit

Reviews its own code for quality, then commits and pushes to your repository with clear commit messages.

Step 8

📝 Update Planning Docs

Keeps all planning documents in sync — progress tracked, tasks marked complete, docs updated.

How to give Vishnu a project

You don't need a technical spec. Just describe what you want and let Vishnu handle the rest.

  1. Describe what you want

    Use plain English. Be as detailed or vague as you want — "I need a booking website for my business" is a perfectly fine start.

  2. Answer clarifying questions

    Vishnu will ask smart questions to nail down the details. These questions dramatically improve the final result — don't skip them!

  3. Review the PRD & plan

    Once aligned, Vishnu produces a requirements doc and a detailed plan. Approve it (or request changes) before coding begins.

  4. Watch it build

    Vishnu works through the plan step by step, keeping you updated on progress. Test, give feedback, iterate.

Get the best out of Vishnu

A few habits that'll make your experience dramatically better.

🎯

Be specific. "I want a website that lets customers book appointments with a calendar view" beats "make me something cool."

💬

Answer the questions. Vishnu's clarifying questions aren't busywork — they're the difference between good and great output.

🐛

Be specific about bugs. "The button on the checkout page doesn't redirect" is much better than "it's broken."

📁

Use groups for projects. Create separate Telegram groups for different projects. Each gets its own isolated memory space.

🔁

Context carries over. Vishnu remembers day-to-day, but may need reminders for older details. Just say "remember when we talked about X?"

🧹

Use /reset when stuck. If Vishnu seems confused or stuck in a loop, /reset starts a fresh session without losing long-term memory.

How Vishnu remembers

Vishnu has a built-in memory system that keeps context alive across conversations.

📝

Session Memory

Keeps track of the current conversation. Cleared with /reset.

🗂️

Long-Term Memory

Persistent notes written to files. Survives across sessions and restarts.

🔒

Isolated by Chat

Each group chat has its own separate memory space. Projects stay organized.

♻️

Self-Maintaining

Vishnu periodically reviews and updates its own memory to stay sharp.

Useful commands

Most of the time you'll just chat naturally, but these come in handy.

Command      What it does
/reset       Start a fresh session. Clears current conversation context but keeps long-term memory intact.
/status      Check if Vishnu is running and get a quick status report.
Just chat    For everything else — describe what you need in natural language. No commands required.

What Vishnu can (and can't) do

A clear picture of what's possible right now.

✅ Vishnu CAN

  • Write code in any language or framework
  • Build websites, web apps, and APIs
  • Search the web for research
  • Read and create files on your Mac Mini
  • Run terminal commands
  • Remember context and follow up on projects
  • Process images, files, and voice messages
  • Work on multiple projects simultaneously

🚫 Not Yet

  • Access your email or calendar (not configured)
  • Make purchases or sign up for services
  • Push code to production without your review

Why Vishnu sometimes says "rate limit exceeded"

It's not a bug — it's how the AI service manages traffic. Here's what's happening and how to fix it.

🔍 What's actually happening behind the scenes

When you type a message — even something simple like "test 123" — Vishnu doesn't just send that short message to the AI. It packages up everything the AI needs to understand who it is and what you've been talking about:

  • System prompt — ~15,000 tokens
  • Your files (SOUL, AGENTS, etc.) — ~5,000 tokens
  • Chat history — ~2,000+ tokens (and growing)
  • Your message ("test 123") — ~3 tokens

Total per message: ~22,000+ tokens

Think of it like calling a new assistant every time — you have to re-explain who they are, what they're working on, and read back the entire conversation before asking your question. That's what those tokens are.

📖 See the full visual explainer →

⚠️ The rate limit problem

Anthropic (the company behind Claude, Vishnu's AI brain) limits how many tokens you can send per minute. On the starter tier, that limit is 30,000 tokens per minute.

Do the math:

💬 One message in Group Chat 1 → ~25,000 tokens

💬 One message in Group Chat 2 → ~25,000 tokens

Total: ~50,000 tokens — already over the 30,000/min limit! 🚫

This is why even two simple messages sent close together across different chats can trigger the error. It's not about how long your message is — it's about the total context package hitting the API.
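To make the arithmetic concrete, here's a quick sketch using the approximate figures above (the real numbers vary from message to message):

// Approximate context package sent with every message (figures are the estimates above).
const systemPrompt = 15_000;  // tokens
const yourFiles    = 5_000;   // SOUL, AGENTS, etc.
const chatHistory  = 2_000;   // grows as the conversation continues
const yourMessage  = 3;       // "test 123"

const perMessage = systemPrompt + yourFiles + chatHistory + yourMessage;
console.log(perMessage);      // ≈ 22,003 tokens for a single "test 123"

// Two group chats each sending one message in the same minute on the starter tier:
const starterLimit = 30_000;           // Tier 1: 30,000 tokens per minute
const twoChats     = 25_000 + 25_000;  // ~25,000 per chat once some history has built up
console.log(twoChats > starterLimit);  // true → "rate limit exceeded"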

✅ Your Anthropic Tier: Upgraded to Tier 2

Your account has been upgraded to Tier 2 — rate limits are now 15x higher than the starter tier. Here's the full tier breakdown:

Tier                                       Deposit      Tokens/min   Requests/min
Tier 1                                     $5           30,000       50
Tier 2 ✅ You're here (15x more capacity)   $40 total    450,000      1,000
Tier 3                                     $200 total   800,000      2,000
Tier 4                                     $400 total   1,600,000    4,000

🎯 Your Tier 2 status:

Your $60 deposit has been applied and your account is active on Tier 2. This gives Vishnu 450,000 tokens/min and 1,000 requests/min — more than enough for simultaneous group chats and heavy usage.

Note: Your deposit is credit for future API usage — not a fee. You'll spend it as Vishnu processes messages. At typical usage, this balance lasts weeks to months. You can check your balance at console.anthropic.com/settings/billing.

🔄 Multi-Provider Failover — Vishnu Never Goes Down

Vishnu is configured with a multi-provider fallback chain spanning three different AI companies. Even if an entire provider goes down, Vishnu keeps working:

🧠
Primary: Claude Opus 4.5 (Anthropic)

The most powerful Claude model — exceptional reasoning, coding, and conversation. Handles all your messages by default.

↓ if Opus hits a rate limit
Fallback 1: Claude Sonnet 4 (Anthropic)

Fast, highly capable model with higher rate limits. Still Anthropic — you'll barely notice the switch.

↓ if all Anthropic models are limited
🌐
Fallback 2: Gemini 3 Pro (Google)

Completely different provider. If all of Anthropic is down, Vishnu switches to Google's AI — independent infrastructure, unaffected by Anthropic outages.

↓ if Google is also limited
🛡️
Fallback 3: GPT-4o (OpenAI)

Third independent provider. Even if Anthropic AND Google are both having issues, OpenAI keeps Vishnu running.

↓ cooldown clears (~3 min)
Back to Opus automatically

Once the short cooldown expires (~3 minutes), Vishnu tries the primary model again. No action needed on your end.

🔁 How the cooldown works

When a model hits a rate limit, Vishnu drops to the next provider in the chain with a ~3 minute cooldown. After the cooldown, it automatically retries the preferred model. Recovery is fast — minutes, not hours. This all happens transparently behind the scenes.
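For the technically curious, the chain boils down to a simple loop: try each model in order, skip any that are cooling down, and put a model on a ~3 minute cooldown when it returns a rate-limit error. The sketch below is purely illustrative: function and variable names are made up, the Google and OpenAI model ids are approximations, and this is not Clawdbot's actual code.

// Illustrative sketch of a provider fallback chain (not Clawdbot's actual implementation).
const CHAIN = [
  { provider: "anthropic", model: "claude-opus-4-5" },  // primary
  { provider: "anthropic", model: "claude-sonnet-4" },  // fallback 1
  { provider: "google",    model: "gemini-3-pro" },     // fallback 2 (illustrative model id)
  { provider: "openai",    model: "gpt-4o" },           // fallback 3
];

const COOLDOWN_MS = 3 * 60 * 1000;                // ~3 minutes before a limited model is retried
const cooldownUntil = new Map<string, number>();  // model id → time it becomes usable again

// Placeholder for the real API call; here it simulates the primary being rate limited.
async function callModel(provider: string, model: string, message: string): Promise<string> {
  if (model === "claude-opus-4-5") throw Object.assign(new Error("rate limited"), { status: 429 });
  return `[${provider}/${model}] reply to: ${message}`;
}

async function send(message: string): Promise<string> {
  for (const { provider, model } of CHAIN) {
    if ((cooldownUntil.get(model) ?? 0) > Date.now()) continue;  // still cooling down, skip it
    try {
      return await callModel(provider, model, message);
    } catch (err: any) {
      if (err?.status === 429) {                                 // rate limited: cool down, try the next model
        cooldownUntil.set(model, Date.now() + COOLDOWN_MS);
        continue;
      }
      throw err;                                                 // anything else is a real error
    }
  }
  throw new Error("All providers are currently rate limited");
}

send("test 123").then(console.log);  // in this simulated run, the reply comes from claude-sonnet-4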

💾 Prompt caching — already saving tokens

Anthropic automatically caches Vishnu's system prompt across calls within the same session. Cached tokens don't count toward rate limits — the first message of a session (while the cache is being built) may take ~55 seconds, but subsequent ones come back much faster. This is already active and working behind the scenes.
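For reference, this is roughly what prompt caching looks like at the Anthropic API level: the big, unchanging system prompt is marked as cacheable so later calls reuse it instead of re-sending it at full cost. Clawdbot wires this up for you; the snippet below is only an illustration of the mechanism, with a placeholder prompt.

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();  // reads ANTHROPIC_API_KEY from the environment

// Placeholder standing in for the ~15,000-token system prompt, SOUL/AGENTS files, etc.
const LONG_SYSTEM_PROMPT = "...many thousands of tokens of instructions...";

async function ask(userMessage: string) {
  const response = await client.messages.create({
    model: "claude-opus-4-5",
    max_tokens: 1024,
    system: [
      {
        type: "text",
        text: LONG_SYSTEM_PROMPT,
        cache_control: { type: "ephemeral" },  // mark the big static prefix as cacheable
      },
    ],
    messages: [{ role: "user", content: userMessage }],
  });
  // The first call reports cache_creation_input_tokens; later calls report cache_read_input_tokens.
  console.log(response.usage);
  return response;
}

ask("test 123");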

Bottom line: With Tier 2 active and a 4-model, 3-provider fallback chain, Vishnu is essentially bulletproof. Even if an entire AI provider has an outage, Vishnu seamlessly fails over to a different company's AI and keeps working.

🧠 Adaptive Smart Queue — Self-Managing Protection

On top of the model fallback chain, Vishnu has an intelligent monitoring system that watches for problems in real-time and automatically activates additional protections when needed — no human intervention required.

How it works

A background monitor runs every 30 seconds, checking three things:

🚫 Rate Limit Errors

Watches for 429 responses from the AI API

📊 Active Sessions

Monitors how many requests are running at once

System Load

Checks CPU usage to detect slowdowns

What happens when it detects a problem
🚦
Queue Activates

If it detects 3+ rate limit errors, 12+ simultaneous requests, or high CPU, the smart queue turns on automatically.

📋
Requests Get Prioritized

DMs get highest priority, then group chats, then background tasks. Your direct messages always go first.

⏱️
Short Waits Are Silent

If the wait is under 30 seconds, you won't even notice — it just looks like normal processing time

Auto-Recovery

After 5 minutes of stable conditions (no errors, normal load), the queue automatically disables itself
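In code terms, the monitor and queue behave roughly like the sketch below. The thresholds are the ones listed above; everything else (names, the CPU cutoff, the drain interval) is made up for illustration and is not Clawdbot's actual code.

import * as os from "node:os";

// Illustrative sketch of the adaptive smart queue.
let queueEnabled = false;
let lastProblemAt = 0;
let recent429s = 0;      // rate-limit errors seen in the current 30-second window
let activeSessions = 0;  // requests currently in flight

function noteRateLimit(): void { recent429s++; }  // called whenever the API answers with a 429

function checkHealth(): void {
  const cpuLoad = os.loadavg()[0] / os.cpus().length;  // rough per-core load
  const overloaded = recent429s >= 3 || activeSessions >= 12 || cpuLoad > 0.9;  // 0.9 is an assumed cutoff
  if (overloaded) {
    queueEnabled = true;                               // turn protection on
    lastProblemAt = Date.now();
  } else if (queueEnabled && Date.now() - lastProblemAt > 5 * 60_000) {
    queueEnabled = false;                              // 5 quiet minutes → switch itself back off
  }
  recent429s = 0;                                      // start a fresh error window
}

setInterval(checkHealth, 30_000);                      // the background monitor, every 30 seconds

// While the queue is on, requests are ordered DMs → group chats → background tasks.
const PRIORITY = { dm: 0, group: 1, background: 2 } as const;
type Job = { kind: keyof typeof PRIORITY; run: () => Promise<void> };
const waiting: Job[] = [];

function submit(job: Job): void {
  if (!queueEnabled) { void job.run(); return; }       // normal path: no queue, no delay
  waiting.push(job);
  waiting.sort((a, b) => PRIORITY[a.kind] - PRIORITY[b.kind]);
}

setInterval(() => {                                    // drain the queue gradually while it's active
  const next = waiting.shift();
  if (next) { activeSessions++; void next.run().finally(() => activeSessions--); }
}, 1_000);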

🎯 What this means for you

You don't need to do anything. The system manages itself — it only activates when there's a real problem, handles it transparently, and turns itself off when things are back to normal. During high activity periods (multiple group chats going at once), it smoothly queues requests instead of failing with errors.

💡 Tips for smooth operation

🔄 Use /reset after long chats. Long conversations build up a huge history; resetting clears the session and drops the token count back down.
📁 Organize with groups. Use separate Telegram groups for different projects. Each group gets isolated memory — no context bleed.
⚡ Fast recovery. If a rate limit ever hits, Vishnu recovers in ~3 minutes — not hours. The multi-provider chain keeps things running in the meantime.
✅ Tier 2 is active. Your account is on Tier 2 with 15x the starter capacity. Rate limits should be a non-issue for normal use.

All work performed on Vishnu

This page documents all improvements, fixes, and enhancements made to your AI assistant.

Recent Updates

February 3, 2026 — API Rate Limit Optimization

Optimized model fallback behavior to prevent retry loops while maintaining full functionality.

→ View detailed report

🔧 February 1-2, 2026 — Initial Setup

Complete system configuration with Tier 2 Anthropic account and full feature set.

→ View detailed report

This log will be updated as new work is performed on your system.

Common issues & fixes

Quick reference for known issues and their solutions.

📛

Model name format: use hyphens, not dots

When referencing AI models in configuration, always use hyphens between version numbers — not dots.

❌ Wrong: claude-opus-4.5
✅ Correct: claude-opus-4-5

Dots in model names cause API routing errors. This applies to all model references (e.g., claude-sonnet-4, claude-opus-4-5).
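If you ever need to set a model name yourself, a config entry would look something like this (the key name is made up for illustration; only the model id format matters):

// Hypothetical config entry: the key name is illustrative, the model id format is the point.
const modelConfig = {
  primaryModel: "claude-opus-4-5",    // ✅ hyphens between version numbers
  // primaryModel: "claude-opus-4.5", // ❌ dots cause API routing errors
};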

🔧

ClawdHub + Node v25: undici fix

If you're on Node.js v25 (or later), ClawdHub may fail with a fetch-related error. The fix is to install the undici package in the ClawdHub directory:

cd /path/to/clawdhub && npm i undici

This resolves a compatibility issue where Node v25 changed its built-in fetch implementation.

🌐

Browser: use the managed browser, not Chrome relay

When Vishnu needs to browse the web, it should use the managed browser (Clawd's built-in browser) rather than the Chrome extension relay. This is already configured, but if you see browser-related errors:

❌ Chrome relay (requires extension): profile="chrome"
✅ Managed browser (automatic): profile="clawd"

The managed browser runs headlessly on the Mac Mini — no Chrome window or extension needed. It's more reliable for automated tasks.

Frequently asked questions

Quick answers to common questions.

Is Vishnu always running?
Yes! Vishnu runs 24/7 on your Mac Mini. As long as your Mac Mini is powered on and connected to the internet, Vishnu is ready to chat. If you ever notice it's not responding, reach out to us at OpenClaw Install.
Can other people talk to Vishnu?
In DMs, only you can talk to Vishnu. In group chats, anyone in the group can interact with Vishnu. This is great for collaboration — add team members to project groups and everyone can work with Vishnu together.
Does Vishnu remember previous conversations?
Yes! Vishnu has a built-in memory system. It writes important context to memory files that persist across sessions. Long-term memory survives restarts. If you use /reset, it only clears the current session — long-term memory stays intact.
What if Vishnu gives a wrong answer or gets confused?
Just tell it! Say "that's not right" or describe specifically what's wrong. Vishnu is great at course-correcting. If it seems stuck in a loop, use /reset to start fresh — your long-term project context is preserved.
Can Vishnu work on multiple projects at once?
Absolutely. The best approach is to create separate Telegram groups for each project. Each group gets its own isolated memory, so contexts won't bleed between projects.
How do I start a new project?
Just describe what you want in plain English — either in DMs or in a dedicated group chat. Vishnu will ask clarifying questions, build a requirements doc and plan, then start building. It follows a structured 8-step workflow to ensure quality.
Is my data private?
Vishnu runs on your Mac Mini — your hardware, in your space. Conversations go through Telegram's servers (encrypted) and AI processing happens via Anthropic's API. No data is shared with other users or stored on third-party servers beyond what's needed for AI processing.
Can I send Vishnu images or files?
Yes! You can send images (great for screenshots and visual reference), documents, voice messages, and other files through Telegram. Vishnu can analyze images, read documents, and process what you send.
What if I need help or something isn't working?
Contact us at OpenClaw Install (openclawinstall.net). We manage Vishnu's infrastructure and can help with any issues, upgrades, or new capabilities you'd like added.