
Installation

RTF runs as a Docker container — you only need two files to get it running on your machine. No complex setup, no manual dependency installation, no configuration headaches.


Prerequisites

Before you start, make sure you have the following installed:

| Requirement | Version | Notes |
| --- | --- | --- |
| Operating System | Linux (recommended) | Ubuntu, Debian, Kali, Fedora, Arch all work. macOS also supported. |
| Docker | 24+ | Install Docker → |
| Docker Compose | v2+ | Usually included with Docker Desktop; install separately on Linux |
Linux is Recommended

RTF is built on Kali Linux under the hood. Running it on a Linux host gives you the best performance and compatibility. macOS works too, but Windows (WSL2) may have limitations.

Installing Docker on Linux (quick)

```shell
# Ubuntu / Debian / Kali
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
```

Verify your setup

```shell
docker --version         # Docker version 24.x.x
docker compose version   # Docker Compose version v2.x.x
```

The Two Files You Need

Download or copy these two files into a folder on your machine:

| File | Purpose |
| --- | --- |
| docker-compose.yml | Defines the services, ports, volumes, and environment variables |
| setup.sh | Management CLI: one-command setup, logs, shell access, updates |

```
your-folder/
├── docker-compose.yml
└── setup.sh
```

Step 1 — Configure Your Environment Variables

Open docker-compose.yml and fill in your credentials under the environment section of the rtf-server service.

The variables you must set before starting:

Database

```yaml
- DB_URL=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/rtf
```

RTF uses MongoDB Atlas (free tier works). Create a free cluster at mongodb.com and paste your connection string here.
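The Atlas SRV URI is just your user, password, and cluster host spliced into a fixed template. A quick sketch with hypothetical placeholder values (passwords containing characters such as `: / ? # @` must be percent-encoded before splicing):

```shell
# Hypothetical Atlas credentials -- substitute your own values.
DB_USER="rtf_user"
DB_PASS="s3cretPass"          # percent-encode this if it has special characters
DB_CLUSTER="cluster0.abcde"
DB_URL="mongodb+srv://${DB_USER}:${DB_PASS}@${DB_CLUSTER}.mongodb.net/rtf"
echo "$DB_URL"
```

The trailing `/rtf` selects the database name the application uses.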

AI Features (OpenRouter)

```yaml
- OPENROUTER_API_KEY=sk-or-v1-...
- AI_MODEL=moonshotai/kimi-k2-thinking
- CONTROLLER_MODEL=deepseek/deepseek-v3.2
```

RTF uses OpenRouter to access AI models. Get a free API key at openrouter.ai. The default models (kimi-k2-thinking and deepseek-v3.2) power all AI features — attack planning, scope suggestions, and tool suggestions.
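OpenRouter keys start with the `sk-or-` prefix, so a small format check catches copy-paste mistakes before you edit the compose file. This is a sketch with a hypothetical placeholder key; the commented `curl` line queries OpenRouter's public model-listing endpoint and needs network access:

```shell
# Hypothetical placeholder -- paste your real key when testing.
OPENROUTER_API_KEY="sk-or-v1-abc123"
case "$OPENROUTER_API_KEY" in
  sk-or-*) KEY_CHECK="looks like an OpenRouter key" ;;
  *)       KEY_CHECK="unexpected format, double-check the key" ;;
esac
echo "$KEY_CHECK"
# To confirm the key actually works (requires network):
# curl -s -H "Authorization: Bearer $OPENROUTER_API_KEY" https://openrouter.ai/api/v1/models
```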

File Storage (Cloudflare R2)

```yaml
- R2_ACCOUNT_ID=...
- R2_ACCESS_KEY_ID=...
- R2_SECRET_ACCESS_KEY=...
- R2_BUCKET_NAME=rtf-files
- R2_PUBLIC_URL=https://pub-...r2.dev
```

RTF uses Cloudflare R2 to store finding screenshots and evidence. Create a free R2 bucket at dash.cloudflare.com and fill in the credentials. R2 has a generous free tier — 10GB free storage per month.

What if I skip R2?

If you leave R2 unconfigured, everything in RTF works except screenshot/image attachments on findings. You can add R2 later.
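If you do configure R2, the S3 endpoint is derived from your account ID. A sketch with a hypothetical account ID; the commented `aws` line works because R2 speaks the S3 API, assuming you have the AWS CLI configured with your R2 access keys:

```shell
# Hypothetical account ID -- use the one shown in your Cloudflare dashboard.
R2_ACCOUNT_ID="0123456789abcdef"
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
echo "$R2_ENDPOINT"
# Verify the bucket is reachable (requires network and configured credentials):
# aws s3 ls "s3://rtf-files" --endpoint-url "$R2_ENDPOINT"
```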

Other Settings (leave as-is)

These are already set correctly in the file — no changes needed:

```yaml
- NODE_ENV=production
- PORT=4000
- AI_ENABLED=true
- CONTEXT_MEMORY_ENABLED=true
- OLLAMA_URL=http://ollama:11434
```

Step 2 — Make setup.sh Executable

```shell
chmod +x setup.sh
```

Step 3 — Start RTF (One Command)

```shell
./setup.sh -s
```

That's it. This single command automatically:

1. **Installs mkcert** if it isn't already on your system (mkcert generates browser-trusted SSL certificates)
2. **Installs a local Certificate Authority** so your browser trusts the certificate
3. **Generates SSL certificates** for localhost and 127.0.0.1
4. **Pulls the Docker images**, downloading rtf-server and ollama if not already present
5. **Starts both containers**: RTF Server and the local Ollama LLM service

When it's done, you'll see:

```
✓ RTF Server started successfully!

ℹ Access the server at:
  https://localhost:4000
```

Open https://localhost:4000 in your browser — no security warnings, no certificate errors.

First-time startup

The first run takes longer because Docker needs to pull the images (~2-4 GB depending on your connection). Subsequent starts are near-instant.
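Once startup finishes, you can confirm the server answers without opening a browser. A sketch that assumes the default port 4000 (`-k` skips certificate verification, which is handy if the mkcert CA isn't trusted in your shell environment yet):

```shell
# Probe the UI endpoint and report the HTTP status code.
code="$(curl -sk -o /dev/null -w '%{http_code}' https://localhost:4000)" || true
if [ -z "$code" ] || [ "$code" = "000" ]; then
  echo "server not reachable yet"
else
  echo "server answered with HTTP $code"
fi
```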


What's Running After Setup

After ./setup.sh -s, two containers are running:

| Container | Port | Purpose |
| --- | --- | --- |
| rtf-server | 4000 | The RTF application (Kali Linux + Node.js) |
| ollama | 11434 | Local LLM service for offline AI inference |

And five persistent volumes are created to keep your data safe across restarts:

| Volume | What It Stores |
| --- | --- |
| rtf-packages | Tools you install via the package manager |
| rtf-package-list | Tracking list for installed packages |
| rtf-data | Application data |
| rtf-configs | Tool configs (nuclei templates, subfinder, etc.) |
| ollama-models | Downloaded AI models (can be 10 GB+ for large models) |

Managing the Server

The setup.sh script handles everything — day to day, you rarely need to type docker commands directly.

| Command | What It Does |
| --- | --- |
| `./setup.sh -s` | Start the server (smart auto-setup) |
| `./setup.sh -t` | Stop the server |
| `./setup.sh -r` | Restart the server |
| `./setup.sh -l` | View live logs |
| `./setup.sh -e` | Open a shell inside the container |
| `./setup.sh -st` | Check server status and health |
| `./setup.sh -u` | Update to the latest version |
| `./setup.sh -h` | Show all available commands |

Optional — Enable GPU for Ollama

If your machine has a GPU, you can dramatically speed up local AI model inference by enabling GPU support for the Ollama container.

Open docker-compose.yml and find the commented GPU section under the ollama service:

NVIDIA GPU

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```

Uncomment this block. Also install nvidia-docker2 on your host:

```shell
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```

AMD GPU

```yaml
devices:
  - /dev/kfd
  - /dev/dri
group_add:
  - video
```

Uncomment this block and ensure ROCm drivers are installed on your host.

No GPU (Default)

No changes needed — Ollama runs on CPU by default. It's slower but works on any machine.


Optional — Download Local AI Models

Ollama lets you run AI models locally (offline, no API key needed). To download a model:

```shell
./setup.sh -md llama3.2:3b
```

Popular models:

| Model | Size | Best For |
| --- | --- | --- |
| llama3.2:1b | ~1.3 GB | Fast responses, basic tasks |
| phi3:mini | ~2.3 GB | Balanced speed and quality |
| mistral:7b | ~4.1 GB | Better reasoning |
| llama3.1:8b | ~4.7 GB | High quality responses |
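Since models land in the ollama-models volume, it is worth checking free space before a large pull. A sketch that assumes Docker's default data root of /var/lib/docker on Linux (check `docker info` for "Docker Root Dir" if yours differs):

```shell
# Free space on the filesystem holding Docker's data root; fall back to /
# when the default path is absent (e.g. on macOS, where Docker runs in a VM).
TARGET=/var/lib/docker
[ -d "$TARGET" ] || TARGET=/
df -h "$TARGET" | tail -n 1
```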

List your downloaded models:

```shell
./setup.sh -ml
```

Troubleshooting

Browser shows certificate warning

Run the certificate setup manually:

```shell
./setup.sh -c
```

Then restart the server with `./setup.sh -s`.

Container won't start

Check the logs to see what went wrong:

```shell
./setup.sh -l
```

Server is unhealthy

```shell
./setup.sh -st   # Check health status
./setup.sh -l    # View logs for errors
./setup.sh -r    # Restart the server
```

Complete reset (start fresh)

This deletes all your data

```shell
./setup.sh -t            # Stop the server
docker compose down -v   # Remove all volumes (deletes data)
./setup.sh -s            # Start fresh
```
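If you want to keep anything, archive the volumes before running the reset above. A sketch using the standard volume-to-tarball pattern, shown for the rtf-data volume from the table earlier (repeat per volume you care about):

```shell
# Mount the named volume read-side and the current directory write-side,
# then tar the volume's contents into ./rtf-data.tgz.
if command -v docker >/dev/null 2>&1; then
  if docker run --rm -v rtf-data:/data -v "$PWD":/backup alpine \
       tar czf /backup/rtf-data.tgz -C /data .; then
    MSG="wrote rtf-data.tgz"
  else
    MSG="backup failed (is the volume present?)"
  fi
else
  MSG="docker not available"
fi
echo "$MSG"
```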

Next Steps

You're up and running! Now: