Containing the Blast Radius: Hardening WSL2 for AI Coding Agents
I run AI coding agents locally — Claude Code, GitHub Copilot, and similar tools — directly on my development machine under WSL2 on Windows. This is convenient: the agents have direct access to my codebase, can run tests, execute build tools, and iterate quickly without container overhead.
The convenience comes with a cost. These agents pull in large dependency trees, execute tool calls, and by default run with the same filesystem and network access as my own user. A compromised dependency doesn't need to escalate privileges — it already has everything it needs.
The LiteLLM supply chain attack made this concrete. A malicious package inserted into a widely-used AI tooling library would have had access to source code, credentials, SSH keys, and an unobstructed path to the internet on every affected developer machine. The blast radius was large not because of any particular vulnerability, but because of how much access agent processes have by default.
This post documents the hardening setup I built in response: WSL isolation, a dedicated unprivileged user, scoped filesystem access, and outbound traffic restricted to a proxy whitelist. Each layer closes a specific attack vector. None of them require exotic tooling — just standard Linux primitives applied deliberately.
Linux users: Layer 1 is WSL-specific — skip it. Layers 2–4 apply to any Linux environment running AI agents locally.
The Threat Model
A supply chain attack lands malicious code in an agent dependency. From that point the compromised process inherits everything the agent had:
- Filesystem access — your entire home directory, SSH keys, `.env` files, git credentials
- Network access — exfiltration, C2 callbacks, lateral movement
- Windows interop — on WSL2 with default settings, the process can launch `cmd.exe`, `powershell.exe`, any Windows binary
The goal is not to make compromise impossible — that's the dependency audit problem. The goal is containment: a compromised agent process should not be able to reach anything outside its immediate task scope.
Four layers, each closing a specific attack vector.
Why Not jai, devcontainers, or bwrap?
These are the obvious alternatives — and they're worth understanding before building something custom.
jai (and bwrap/bubblewrap generally) provides filesystem isolation via overlayfs and UID separation in strict mode — conceptually similar to Layer 2 of this setup. But jai's own documentation is explicit: "Network access. No network restrictions." For the LiteLLM threat model — a supply chain attack whose goal is exfiltration — jai strict mode protects your SSH keys and dotfiles but cannot prevent the compromised process from sending your source code, API tokens, or anything in the granted directory to an attacker-controlled server. The filesystem boundary stops reads outside the project; it does nothing about writes to the network.
Claude Code devcontainers are first-party and well-documented. Anthropic's own documentation acknowledges the gap directly: "devcontainers don't prevent a malicious project from exfiltrating anything accessible in the devcontainer including Claude Code credentials." The outbound firewall is configurable — but not pre-configured. Layers 3 and 4 in this post are exactly that configuration, and they apply equally inside a devcontainer.
The common gap: all three tools address filesystem integrity. None address network exfiltration by default. For a supply chain attack, network is the point.
Layer 1: WSL Interop Lockdown
By default WSL2 allows any process to launch Windows executables via interop. A compromised agent can call powershell.exe, modify the Windows registry, or schedule a persistent task — all from inside the Linux environment.
Disable it in /etc/wsl.conf:
[automount]
enabled = false
mountFsTab = true
[interop]
enabled = false
appendWindowsPath = false
| Setting | Effect |
|---|---|
| `automount.enabled = false` | Removes `/mnt/c`, `/mnt/d` — Windows drives invisible inside WSL |
| `automount.mountFsTab = true` | Keeps the 9P server alive — `\\wsl.localhost\Debian` still works from Windows Explorer |
| `interop.enabled = false` | Blocks WSL → Windows binary execution entirely |
| `interop.appendWindowsPath = false` | Stops Windows PATH injection into WSL |
The mountFsTab = true combination is non-obvious: automount.enabled and the 9P server that backs \\wsl.localhost are controlled separately. Disabling automount alone breaks Windows Explorer access to WSL; keeping mountFsTab = true preserves it.
Restart the distro to apply:
wsl --shutdown
wsl -d Debian
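After the restart, a few quick checks confirm the lockdown took effect (these are my own sanity checks, not an official verification procedure — the `WSLInterop` binfmt entry is how WSL registers Windows-executable support):

```shell
# Each check should print "ok" on a locked-down distro.
[ ! -d /mnt/c ] && echo "ok: automount off" || echo "WARN: /mnt/c still mounted"
command -v cmd.exe >/dev/null 2>&1 && echo "WARN: Windows PATH still injected" || echo "ok: no Windows binaries on PATH"
grep -q enabled /proc/sys/fs/binfmt_misc/WSLInterop 2>/dev/null && echo "WARN: interop still enabled" || echo "ok: interop off"
```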
Layer 2: Dedicated Agent User with Scoped Bind Mount
Running the agent as your main user gives it access to your entire home directory. A dedicated unprivileged user with a bind-mounted project directory limits filesystem access to exactly the current task.
# one-time setup
sudo useradd -m -s /bin/bash claude
sudo chmod 700 /home/claude
sudo mkdir -p /home/claude/_
# make sure claude can't read your own home (home directories are often world-readable by default)
sudo chmod 750 /home/vadim
# give claude write access to your project root via ACL
sudo apt install acl
sudo setfacl -R -m u:claude:rwX /home/vadim/_/
sudo setfacl -R -d -m u:claude:rwX /home/vadim/_/
The -d flag sets the default ACL — new projects cloned under /home/vadim/_/ automatically get write access without any manual step.
The agent user gets zero sudo rights. Verify:
sudo -l -U claude
# User claude is not allowed to run sudo on this host.
Project mounting is an explicit privileged act performed by your main user before starting a session. Save this as /usr/local/bin/agent-session:
#!/bin/bash
set -euo pipefail

CMD="${1:-}"
PROJECT="${2:-}"

usage() {
  echo "Usage: agent-session start <project> | stop <project> | list"
  exit 1
}

case "$CMD" in
  start)
    [[ -z "$PROJECT" ]] && usage
    [[ "$PROJECT" =~ ^[a-zA-Z0-9_-]+$ ]] || { echo "Invalid project name"; exit 1; }
    SRC="/home/vadim/_/$PROJECT"
    DST="/home/claude/_/$PROJECT"
    [[ -d "$SRC" ]] || { echo "Project not found: $SRC"; exit 1; }
    # /home/claude is mode 700, so the mount point must be managed as root
    sudo mkdir -p "$DST"
    if sudo mountpoint -q "$DST"; then
      echo "Already mounted: $DST"
    else
      sudo mount --bind "$SRC" "$DST"
      echo "Mounted: $SRC → $DST"
    fi
    ;;
  stop)
    [[ -z "$PROJECT" ]] && usage
    DST="/home/claude/_/$PROJECT"
    if sudo mountpoint -q "$DST"; then
      sudo umount "$DST"
      sudo rmdir "$DST"
      echo "Unmounted: $DST"
    else
      echo "Not mounted: $DST"
    fi
    ;;
  list)
    mount | grep "/home/claude/_/" || echo "No active agent sessions"
    ;;
  *)
    usage
    ;;
esac
sudo chmod 755 /usr/local/bin/agent-session
Sudoers entry for the commands the script runs as root (nothing else is passwordless):
vadim ALL=(root) NOPASSWD: /usr/bin/mount, /usr/bin/umount, /usr/bin/mountpoint, /usr/bin/mkdir, /usr/bin/rmdir
Session lifecycle:
agent-session start recursive-modulith
# ... agent works ...
agent-session stop recursive-modulith
Only the current project is visible to the agent. Nothing else in /home/vadim is reachable. Git is the recovery boundary — commits and pushes remain manual.
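The project-name regex in the script is doing real work: without it, an argument like `../../etc` would turn the bind mount into a path-traversal primitive. A standalone sketch of the same guard (bash, matching the script's pattern):

```shell
# Same whitelist pattern as agent-session: letters, digits, underscore, hyphen only.
valid_project() {
  [[ "$1" =~ ^[a-zA-Z0-9_-]+$ ]]
}

valid_project "recursive-modulith" && echo "accepted"   # accepted
valid_project "../../etc" || echo "rejected"            # rejected
valid_project 'proj;rm -rf ~' || echo "rejected"        # rejected
```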
Layer 3: VS Code Remote SSH as Agent User
This is the non-obvious piece. The standard Remote WSL extension connects as your main user — user isolation doesn't help if the VS Code server process owns the file I/O. The VS Code server is what actually performs filesystem operations, so that's what needs to run as the restricted user.
The solution: connect via Remote SSH directly as claude. VS Code deploys its server into claude's home and all file operations run under claude's permissions.
Prerequisites
VS Code requires one setting to work with interop disabled. Without it, VS Code's remote startup script calls Windows interop internally to detect the WSL environment, and fails before the connection is established:
{
"remote.WSL.experimental.scriptLessStartup": true
}
SSH Setup
# enable SSH server
sudo systemctl enable --now ssh
# set up key auth for claude
sudo mkdir -p /home/claude/.ssh
cat ~/.ssh/authorized_keys | sudo tee /home/claude/.ssh/authorized_keys
sudo chown -R claude:claude /home/claude/.ssh
sudo chmod 700 /home/claude/.ssh
sudo chmod 600 /home/claude/.ssh/authorized_keys
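While setting up sshd, I also disable password authentication so the claude account is reachable by key only — a standard hardening step. Shown as a drop-in; the file name is my choice, adjust to your setup:

```
# /etc/ssh/sshd_config.d/local-hardening.conf
PasswordAuthentication no
PermitRootLogin no
```

Apply with `sudo systemctl restart ssh`.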
Windows SSH config (C:\Users\vadim\.ssh\config):
Host wsl-agent
HostName <WSL2-IP>
User claude
IdentityFile C:\Users\vadim\.ssh\id_rsa
Get the WSL2 IP (it changes after every wsl --shutdown, so update the config after a restart):
ip addr show eth0 | grep 'inet '
Connect via Remote Explorer → SSH → wsl-agent. VS Code deploys its server as claude and opens the project scope directly.
Validation
# in VS Code terminal
whoami # claude
ls /home/vadim # Permission denied
ls /home/claude/_ # only the mounted project
Note that Docker is intentionally unavailable to claude: no docker group membership, no sudo. An agent with Docker access can trivially escape any filesystem isolation by mounting host paths into a privileged container.
Layer 4: Outbound Traffic Restriction
Filesystem isolation means nothing if the agent can exfiltrate over the network. The architecture: nftables restricts outbound traffic by Unix user ID at the kernel level, forcing all agent traffic through a local Squid proxy. Squid enforces a domain whitelist and logs everything.
The environment variables http_proxy and https_proxy in claude's .bashrc are convenience only — they tell well-behaved tools where the proxy is. The actual enforcement is nftables: all outbound from claude that isn't directed at 127.0.0.1:3128 is dropped at the kernel level, regardless of what the process does with its environment.
nftables
Save as /etc/nftables.d/claude-restrict.conf:
table inet claude_restrict {
chain output {
type filter hook output priority filter; policy accept;
# only restrict claude user
meta skuid != claude accept
# allow established connections
ct state established,related accept
# allow loopback (VS Code server internal communication, Squid proxy)
ip daddr 127.0.0.1 accept
ip6 daddr ::1 accept
# drop everything else from claude
meta skuid claude drop
}
}
Debian's nftables service reads only /etc/nftables.conf by default, so make sure that file includes the drop-in directory (add include "/etc/nftables.d/*.conf" if it isn't there), then:
sudo systemctl enable --now nftables
DNS is intentionally not whitelisted for claude. Raw DNS queries are a DNS tunneling vector — data encoded as subdomains forwarded through your resolver to an attacker-controlled nameserver. Squid performs its own DNS resolution as the system user, so claude never needs to make raw DNS queries.
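With the table loaded, the enforcement is easy to verify — and, importantly, unsetting the proxy variables changes nothing. These commands assume curl is installed and the Squid instance from the next section is running:

```
# Direct outbound as claude, proxy vars explicitly unset: packets are
# dropped in the output hook, so this times out
sudo -u claude env -u http_proxy -u https_proxy curl --max-time 5 https://example.com

# Via the local proxy: loopback is whitelisted, Squid decides per-domain
sudo -u claude curl -x http://127.0.0.1:3128 -sS -o /dev/null -w '%{http_code}\n' https://api.anthropic.com
```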
Squid
sudo apt install squid
Replace /etc/squid/squid.conf:
http_port 127.0.0.1:3128
acl agent_allowed dstdomain .anthropic.com
acl agent_allowed dstdomain .claude.ai
acl agent_allowed dstdomain .claude.com
acl agent_allowed dstdomain .github.com
acl agent_allowed dstdomain .githubcopilot.com
acl agent_allowed dstdomain .githubusercontent.com
acl agent_allowed dstdomain .npmjs.org
acl agent_allowed dstdomain .nodejs.org
acl agent_allowed dstdomain .maven.org
acl agent_allowed dstdomain .repo1.maven.org
acl agent_allowed dstdomain .repo.maven.apache.org
acl agent_allowed dstdomain .plugins.gradle.org
acl agent_allowed dstdomain .services.gradle.org
acl agent_allowed dstdomain .downloads.gradle.org
acl agent_allowed dstdomain .static.rust-lang.org
acl agent_allowed dstdomain .crates.io
acl agent_allowed dstdomain .pypi.org
acl agent_allowed dstdomain .files.pythonhosted.org
acl agent_allowed dstdomain .letsencrypt.org
acl agent_allowed dstdomain .digicert.com
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
http_access allow agent_allowed
http_access deny all
access_log /var/log/squid/agent-access.log
sudo systemctl enable --now squid
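A detail worth internalizing about dstdomain: a leading dot matches the domain itself and any subdomain, but only at a label boundary, so lookalike domains don't slip through. A small sketch of that matching rule (my simplification for illustration, not Squid's actual code):

```shell
# Approximates Squid dstdomain matching for a leading-dot ACL entry.
dstdomain_match() {
  local host="$1" acl="$2"   # acl like ".anthropic.com"
  [[ "$host" == "${acl#.}" || "$host" == *"$acl" ]]
}

dstdomain_match api.anthropic.com  .anthropic.com && echo allow   # allow
dstdomain_match anthropic.com      .anthropic.com && echo allow   # allow
dstdomain_match evil-anthropic.com .anthropic.com || echo deny    # deny
```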
Proxy environment in /home/claude/.bashrc:
export http_proxy=http://127.0.0.1:3128
export https_proxy=http://127.0.0.1:3128
export no_proxy=localhost,127.0.0.1
The Squid access log is your audit trail. Any attempt to reach outside the whitelist appears as TCP_DENIED/403 — a compromised agent trying to phone home is immediately visible.
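For live monitoring I keep a grep on the log. The sample line below is fabricated for illustration, but follows Squid's default native log format:

```shell
# Live view of blocked attempts (log path from the squid.conf above):
#   sudo tail -f /var/log/squid/agent-access.log | grep --line-buffered TCP_DENIED
# The filter itself, against a fabricated sample line:
sample='1700000000.123 45 127.0.0.1 TCP_DENIED/403 3815 CONNECT evil.example:443 - HIER_NONE/- -'
echo "$sample" | grep -o 'TCP_DENIED/403'
```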
What This Closes
| Attack vector | Mitigation |
|---|---|
| Pivot to Windows host | interop disabled |
| Access Windows drives | automount disabled |
| Read outside project scope | bind mount + claude user |
| Write outside project scope | bind mount + claude user |
| Escape via Docker | claude has no docker group membership |
| Exfiltrate over network | nftables uid-based restriction |
| Arbitrary outbound connections | Squid domain whitelist |
| Bypass proxy via env vars | nftables enforces at kernel level |
| DNS tunneling | raw DNS blocked, Squid resolves independently |
What it doesn't close: a compromised agent can still corrupt the current project. That's a recovery problem, not an isolation problem — git handles it.
Conclusion
None of these layers are exotic. WSL config, a Unix user, a bind mount, nftables, Squid — standard Linux tooling applied deliberately to a threat model most developers haven't thought through yet.
The LiteLLM incident is unlikely to be the last supply chain attack targeting AI tooling. The dependency trees are large, the ecosystem is moving fast, and agents have broad access by default.
The blast radius is larger than it looks. Now it's smaller on my machine.
Vadim Ferderer is a senior software engineer at adesso SE, specializing in performance optimization and software architecture.