Frequently Asked Questions
The stuff people actually ask us before signing up.
~ Can Deployment Bro break my server?+
No. Bro doesn't run arbitrary shell commands — only validated operations. Anything dangerous (restart, deploy) asks you to confirm first. You see exactly what will happen before it happens. Even with AI, you're the one pressing the button.
~ Is this just ChatGPT for servers?+
No. ChatGPT gives generic advice. Bro has real-time access to YOUR server — logs, metrics, processes, services. When it says "94% RAM, node.js using 3.2GB", that's your actual server right now, not a guess based on training data.
~ What if AI hallucinates?+
It can't break things even if it tries. Bro doesn't generate shell commands from AI output. The AI picks WHAT to do (restart, check logs, diagnose); that intent then maps to pre-built, tested operations. Think intent recognition, not code generation.
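The distinction above can be sketched in a few lines. This is a hypothetical illustration, not mttrly's actual code: the operation names and the `execute_intent` function are assumptions, but the shape is the point — the AI only picks a name from a fixed set, and anything outside that set is rejected.

```python
# Hypothetical sketch of intent recognition vs. code generation.
# Operation names and dispatch logic are illustrative, not mttrly's real API.

ALLOWED_OPERATIONS = {
    "restart_service",  # pre-built, tested implementation
    "show_logs",
    "check_status",
}

def execute_intent(ai_chosen_operation: str, params: dict) -> str:
    # The AI only picks an operation name; it never emits shell text.
    if ai_chosen_operation not in ALLOWED_OPERATIONS:
        raise ValueError(f"Unknown operation: {ai_chosen_operation!r}")
    # Dispatch to deterministic, pre-built code paths.
    if ai_chosen_operation == "restart_service":
        return f"would restart {params['service']} (after confirmation)"
    return f"would run read-only op {ai_chosen_operation}"

print(execute_intent("restart_service", {"service": "node-app"}))
# → would restart node-app (after confirmation)
```

Even if the model hallucinated `rm -rf /`, there is no code path that turns that string into a shell invocation — it simply fails the allow-list check.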
~ Why can't Bro edit my config files?+
On purpose. Bro can READ your nginx.conf and tell you what to change, but won't edit it directly. This prevents prompt injection (malicious log content tricking AI into editing configs) and accidental syntax errors. Your critical files, your hands.
~ What's the difference between Watchdog and Bro?+
Watchdog: you give commands (/restart, /logs), bot executes. Reliable, predictable, free. Bro: you describe problems in plain English ("why is it slow?"), AI diagnoses and suggests fixes. Think autopilot vs. copilot. Pick based on how you work.
~ Do I need Kubernetes?+
Nope. mttrly works with a $10 VPS, systemd, Docker Compose, PM2, journalctl. If you can SSH into a Linux server, you're good. No Prometheus, no Grafana, no 47-service observability stack.
~ What servers does mttrly support?+
Any Linux server you can SSH into. VPS (DigitalOcean, Linode, Vultr), dedicated servers, cloud instances (AWS EC2, GCP, Azure), even Railway or Render if they expose SSH. Systemd, Docker, PM2 — doesn't matter.
~ How much does it cost?+
Free: 1 server, watchdog mode, basic commands. Deployment Bro: $39/mo for 3 servers with AI and deploy pipeline. Need more servers? $15/mo each. No credit card to start.
~ Is mttrly secure?+
The agent connects outbound only — no ports to open, your firewall stays closed. All traffic is encrypted. Dangerous actions need your confirmation. We don't store your logs or sensitive data. Just minimal metadata (server names, action history) so you can see what happened.
~ What happens if mttrly goes down?+
Nothing scary. Your servers keep running — mttrly is just the remote control, not the engine. SSH still works. We run redundant infrastructure for 99.9% uptime, but even if we're down, your servers are fine.
~ How do I get started?+
Open Telegram, find @mttrly_bot, tap Start. You get a token. Run one command on your server. That's it — under 2 minutes. Start free, upgrade to Bro when you're ready.
~ Can my team use it?+
Yes. Deployment Crew ($99/mo) gives your whole team access to the same servers. Everyone sees the same alerts, everyone can act. All actions are logged so you know who did what.
~ Which messengers does mttrly support?+
Telegram, Slack, Discord, and WhatsApp (beta). Free and Bro plans — pick one. Deployment Crew lets you use all of them at once, so your team works from whichever app they prefer.
AI Safety & Limitations
Understanding how Deployment Bro works, what it can and can't do, and how we ensure safe operation
~ Can Deployment Bro break my server?+
No, by design. Three safety mechanisms prevent unwanted changes: (1) Approval required — every change operation (restart, deploy, cleanup) shows a preview and waits for your explicit confirmation. Read-only operations (logs, status) run immediately. (2) No arbitrary shell — Bro executes predefined operations with validation, not arbitrary commands. It can't run rm -rf / or any command outside its operation set. (3) Validated execution — operations are validated before running (e.g., nginx config test before reload, pre-deploy disk space check). Deploy operations create automatic snapshots for rollback.
~ Is it safe to give AI access to my production server?+
Deployment Bro doesn't have unrestricted server access. It works through a limited set of validated operations (healthcheck, log analysis, service restart, deploy), not arbitrary shell. All operations are executed by the mttrly agent (runs on your server), not by AI directly. AI's role: understand your natural language request → choose the right operation → extract parameters → analyze results. The operation execution itself is deterministic code with validation, not AI-generated commands. All actions are logged with full audit trail.
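"Deterministic code with validation" might look roughly like the sketch below. This is an assumption for illustration — the check (pre-deploy disk space, as mentioned above) and function names are not mttrly's real implementation.

```python
# Illustrative only: validation runs before execution, and a failed check
# means nothing touches the server. Names and thresholds are assumptions.
import shutil

MIN_FREE_BYTES = 2 * 1024**3  # assume deploys need 2 GiB free

def validate_deploy(path: str = "/") -> list[str]:
    """Run pre-deploy checks; return a list of failures (empty = OK)."""
    failures = []
    free = shutil.disk_usage(path).free
    if free < MIN_FREE_BYTES:
        failures.append(f"only {free} bytes free, need {MIN_FREE_BYTES}")
    return failures

def deploy(path: str = "/") -> str:
    failures = validate_deploy(path)
    if failures:
        return "aborted: " + "; ".join(failures)  # nothing ran on the server
    return "deploy executed (snapshot taken for rollback)"
```

The AI never calls `shutil` or any system API itself; it only selects `deploy`, and the deterministic code decides whether the preconditions hold.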
~ What if Bro doesn't understand my question or makes a mistake?+
Bro is transparent about its limitations. If it doesn't understand your request or can't map it to a supported operation, it will say: "I don't know how to do that. Here's what I can help with: [list of capabilities]." It won't guess or try something potentially dangerous. If AI analysis produces uncertain results, Bro shows the raw data and says "I see these logs but can't determine root cause. Want to investigate together?" The "show by default, change by request" principle means Bro never makes changes unless you explicitly approve.
~ How does the "show by default, change by request" principle work?+
Critical safety rule based on real usage patterns. When you ask read-only questions ("show logs", "check memory", "what's using CPU?"), Bro analyzes and reports — never makes changes even if it detects problems. If diagnostics find an issue, Bro recommends action but waits for confirmation. Example: you type "show logs" → Bro finds 23 TypeError exceptions → "Node.js has crashed. Restart the service?" → you approve → it executes. This prevents AI from making unwanted changes while maintaining natural language UX. Change operations are never assumed — they must be explicitly requested or approved.
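The principle reduces to one gate: read-only operations run immediately, change operations always stop for approval. A minimal sketch, assuming illustrative operation names (not mttrly's internal set):

```python
# Illustrative gate: read-only ops run at once, changes wait for approval.
READ_ONLY = {"show_logs", "check_memory", "check_cpu"}
CHANGES = {"restart_service", "deploy", "cleanup"}

def handle(operation: str, approved: bool = False) -> str:
    if operation in READ_ONLY:
        return f"running {operation} now"             # "show by default"
    if operation in CHANGES:
        if not approved:
            return f"{operation} needs confirmation"  # "change by request"
        return f"running {operation} after approval"
    return "I don't know how to do that."

print(handle("show_logs"))                        # runs immediately
print(handle("restart_service"))                  # waits for confirmation
print(handle("restart_service", approved=True))   # runs after you approve
```

Note that "show logs" finding 23 TypeErrors still goes down the read-only path; the restart only happens on a second, explicitly approved request.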
~ What data does Bro send to AI providers (OpenAI/Anthropic)?+
Only the minimum needed for analysis: your natural language request, relevant operation output (logs, status data), and conversation context. Sensitive data is filtered: credentials, API keys, environment variables with SECRET/PASSWORD/TOKEN in names are redacted before sending. With BYOK (Bring Your Own Key), data goes directly to your OpenAI/Anthropic account — mttrly never sees it. Without BYOK, requests are sent via mttrly's AI gateway with zero retention policy (data not stored after request completes). Full list of what's sent: user message text, selected log excerpts (time-limited, filtered), system status output (CPU/memory/disk metrics), service names and states. Never sent: full log files, database contents, application code, SSH keys, certificates.
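The redaction rule described here (filtering variables whose names contain SECRET, PASSWORD, or TOKEN) can be sketched as a simple pass over the environment before anything is sent. The function name is hypothetical; the name patterns mirror the text above.

```python
# Hypothetical redaction pass run before data leaves the server.
import re

SENSITIVE = re.compile(r"(SECRET|PASSWORD|TOKEN)", re.IGNORECASE)

def redact_env(env: dict[str, str]) -> dict[str, str]:
    """Replace values of sensitive-looking variables before sending to AI."""
    return {
        name: ("[REDACTED]" if SENSITIVE.search(name) else value)
        for name, value in env.items()
    }

env = {"PATH": "/usr/bin", "DB_PASSWORD": "hunter2", "API_TOKEN": "abc123"}
print(redact_env(env))
# → {'PATH': '/usr/bin', 'DB_PASSWORD': '[REDACTED]', 'API_TOKEN': '[REDACTED]'}
```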
~ Can I audit what Bro did on my server?+
Yes, comprehensive audit trail. Every operation is logged with: timestamp, user who initiated, natural language request, operation executed, parameters used, execution result, approval status. Logs are stored: (1) On your server (agent logs in /var/log/mttrly/), (2) In mttrly dashboard (30 days retention for Deployment Bro tier), (3) Optionally in your own logging system via webhook/syslog. You can query history: "what changed in the last 24 hours?" triggers the ChangeAudit recipe showing all actions + git commits + service restarts. For compliance: export audit logs as JSON or connect your SIEM.
~ What happens if Bro's AI analysis is wrong?+
Preview + approval prevents acting on wrong analysis. Example: Bro analyzes logs and concludes "Database is slow, restart postgres?" but you know it's actually a network issue. You click Cancel, then ask "check network latency" to investigate further. The diagnostic recipes are deterministic — they always run the same investigation steps. AI only analyzes aggregated results; it doesn't control which operations run. If the analysis seems wrong, you see the raw data (logs, metrics) alongside the AI's interpretation, so you can disagree and investigate manually.