Installation
RTF runs as a Docker container — you only need two files to get it running on your machine. No complex setup, no manual dependency installation, no configuration headaches.
Prerequisites
Before you start, make sure you have the following installed:
| Requirement | Version | Notes |
|---|---|---|
| Operating System | Linux (recommended) | Ubuntu, Debian, Kali, Fedora, Arch all work. macOS also supported. |
| Docker | 24+ | Install Docker → |
| Docker Compose | v2+ | Usually included with Docker Desktop; install separately on Linux |
RTF is built on Kali Linux under the hood. Running it on a Linux host gives you the best performance and compatibility. macOS works too, but Windows (WSL2) may have limitations.
Installing Docker on Linux (quick)
```shell
# Ubuntu / Debian / Kali
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
```
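Note that `usermod -aG` only affects new login sessions; `newgrp docker` covers the current shell. You can confirm the group is active for your session like this:

```shell
# List the groups of the current session; 'docker' should appear.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "docker group not active yet; log out and back in"
fi
```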
Verify your setup
```shell
docker --version         # Docker version 24.x.x
docker compose version   # Docker Compose version v2.x.x
```
The Two Files You Need
Download or copy these two files into a folder on your machine:
| File | Purpose |
|---|---|
| `docker-compose.yml` | Defines the services, ports, volumes, and environment variables |
| `setup.sh` | Management CLI — one-command setup, logs, shell access, updates |
```
your-folder/
├── docker-compose.yml
└── setup.sh
```
Step 1 — Configure Your Environment Variables
Open docker-compose.yml and fill in your credentials under the environment section of the rtf-server service.
The variables you must set before starting:
Database
```yaml
- DB_URL=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/rtf
```
RTF uses MongoDB Atlas (free tier works). Create a free cluster at mongodb.com and paste your connection string here.
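One common snag: if your Atlas password contains characters such as `@`, `:`, or `/`, they must be percent-encoded before being pasted into the connection string. A quick way to encode one (this assumes `python3` is available, as it is on most Linux distributions; `p@ss:w/rd` is a throwaway example value):

```shell
# Percent-encode a password for use inside a MongoDB connection string.
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss:w/rd'
# → p%40ss%3Aw%2Frd
```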
AI Features (OpenRouter)
```yaml
- OPENROUTER_API_KEY=sk-or-v1-...
- AI_MODEL=moonshotai/kimi-k2-thinking
- CONTROLLER_MODEL=deepseek/deepseek-v3.2
```
RTF uses OpenRouter to access AI models. Get a free API key at openrouter.ai. The default models (kimi-k2-thinking and deepseek-v3.2) power all AI features — attack planning, scope suggestions, and tool suggestions.
File Storage (Cloudflare R2)
```yaml
- R2_ACCOUNT_ID=...
- R2_ACCESS_KEY_ID=...
- R2_SECRET_ACCESS_KEY=...
- R2_BUCKET_NAME=rtf-files
- R2_PUBLIC_URL=https://pub-...r2.dev
```
RTF uses Cloudflare R2 to store finding screenshots and evidence. Create a free R2 bucket at dash.cloudflare.com and fill in the credentials. R2 has a generous free tier — 10GB free storage per month.
If you leave R2 unconfigured, everything in RTF works except screenshot/image attachments on findings. You can add R2 later.
Other Settings (leave as-is)
These are already set correctly in the file — no changes needed:
```yaml
- NODE_ENV=production
- PORT=4000
- AI_ENABLED=true
- CONTEXT_MEMORY_ENABLED=true
- OLLAMA_URL=http://ollama:11434
```
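Put together, the environment section of the `rtf-server` service ends up looking roughly like this (a sketch with the placeholder values from above; the exact list in your `docker-compose.yml` is authoritative):

```yaml
services:
  rtf-server:
    environment:
      - NODE_ENV=production
      - PORT=4000
      - DB_URL=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/rtf
      - OPENROUTER_API_KEY=sk-or-v1-...
      - AI_MODEL=moonshotai/kimi-k2-thinking
      - CONTROLLER_MODEL=deepseek/deepseek-v3.2
      - R2_ACCOUNT_ID=...
      - R2_ACCESS_KEY_ID=...
      - R2_SECRET_ACCESS_KEY=...
      - R2_BUCKET_NAME=rtf-files
      - R2_PUBLIC_URL=https://pub-...r2.dev
      - AI_ENABLED=true
      - CONTEXT_MEMORY_ENABLED=true
      - OLLAMA_URL=http://ollama:11434
```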
Step 2 — Make setup.sh Executable
```shell
chmod +x setup.sh
```
Step 3 — Start RTF (One Command)
```shell
./setup.sh -s
```
That's it. This single command automatically:
- Installs mkcert — if not already on your system (mkcert generates browser-trusted SSL certificates)
- Installs a local Certificate Authority — so your browser trusts the certificate
- Generates SSL certificates — for `localhost` and `127.0.0.1`
- Pulls the Docker images — downloads `rtf-server` and `ollama` if not already present
- Starts both containers — RTF Server and the local Ollama LLM service
When it's done, you'll see:
```
✓ RTF Server started successfully!
ℹ Access the server at:
    https://localhost:4000
```
Open https://localhost:4000 in your browser — no security warnings, no certificate errors.
The first run takes longer because Docker needs to pull the images (~2-4 GB depending on your connection). Subsequent starts are near-instant.
What's Running After Setup
After ./setup.sh -s, two containers are running:
| Container | Port | Purpose |
|---|---|---|
| `rtf-server` | 4000 | The RTF application (Kali Linux + Node.js) |
| `ollama` | 11434 | Local LLM service for offline AI inference |
And five persistent volumes are created to keep your data safe across restarts:
| Volume | What It Stores |
|---|---|
| `rtf-packages` | Tools you install via the package manager |
| `rtf-package-list` | Tracking list for installed packages |
| `rtf-data` | Application data |
| `rtf-configs` | Tool configs (nuclei templates, subfinder, etc.) |
| `ollama-models` | Downloaded AI models (can be 10 GB+ for large models) |
Managing the Server
The setup.sh script handles everything — you never need to type docker commands directly.
| Command | What It Does |
|---|---|
| `./setup.sh -s` | Start the server (smart auto-setup) |
| `./setup.sh -t` | Stop the server |
| `./setup.sh -r` | Restart the server |
| `./setup.sh -l` | View live logs |
| `./setup.sh -e` | Open a shell inside the container |
| `./setup.sh -st` | Check server status and health |
| `./setup.sh -u` | Update to the latest version |
| `./setup.sh -h` | Show all available commands |
Optional — Enable GPU for Ollama
If your machine has a GPU, you can dramatically speed up local AI model inference by enabling GPU support for the Ollama container.
Open docker-compose.yml and find the commented GPU section under the ollama service:
NVIDIA GPU
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```
Uncomment this block. Also install nvidia-docker2 on your host:
```shell
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```
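For orientation, here is roughly how the block sits in context once uncommented (a sketch only; keep whatever image tag and other keys your `docker-compose.yml` already has):

```yaml
services:
  ollama:
    image: ollama/ollama   # keep your file's existing image line
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```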
AMD GPU
```yaml
devices:
  - /dev/kfd
  - /dev/dri
group_add:
  - video
```
Uncomment this block and ensure ROCm drivers are installed on your host.
No GPU (Default)
No changes needed — Ollama runs on CPU by default. It's slower but works on any machine.
Optional — Download Local AI Models
Ollama lets you run AI models locally (offline, no API key needed). To download a model:
```shell
./setup.sh -md llama3.2:3b
```
Popular models:
| Model | Size | Best For |
|---|---|---|
| `llama3.2:1b` | ~1.3 GB | Fast responses, basic tasks |
| `phi3:mini` | ~2.3 GB | Balanced speed and quality |
| `mistral:7b` | ~4.1 GB | Better reasoning |
| `llama3.1:8b` | ~4.7 GB | High quality responses |
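Models land in the `ollama-models` volume, which on a default Linux install lives under `/var/lib/docker`, so it's worth checking free space on that filesystem before pulling a multi-gigabyte model:

```shell
# Show free space where Docker keeps its volumes (falls back to / if
# /var/lib/docker doesn't exist, e.g. on Docker Desktop setups).
df -h /var/lib/docker 2>/dev/null || df -h /
```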
List your downloaded models:
```shell
./setup.sh -ml
```
Troubleshooting
Browser shows certificate warning
Run the certificate setup manually:
```shell
./setup.sh -c
```
Then restart the server with ./setup.sh -s.
Container won't start
Check the logs to see what went wrong:
```shell
./setup.sh -l
```
Server is unhealthy
```shell
./setup.sh -st   # Check health status
./setup.sh -l    # View logs for errors
./setup.sh -r    # Restart the server
```
Complete reset (start fresh)
```shell
./setup.sh -t            # Stop the server
docker compose down -v   # Remove all volumes (deletes data)
./setup.sh -s            # Start fresh
```
Next Steps
You're up and running! Now:
- Your First Engagement → — create your first profile and start working
- Managing Profiles → — understand how engagements are organized