Running on Windows Using WSL

You can run the Decentralized LLM Standard subnet on Windows using WSL or Docker. In this guide, we will show you how to set up a subnet validator node on WSL (Windows Subsystem for Linux).

Tutorial

  1. This tutorial works on Windows 10 and 11 with NVIDIA GPUs and driver version >= 495 (this requirement is usually met on fresh installations).

    If you have an AMD GPU, please proceed until you encounter errors, then switch to the patched version of the Subnet mentioned in the AMD GPU tutorial.

  2. Launch Windows PowerShell as an Administrator, and install WSL 2:

    wsl.exe --install

    If you previously had WSL 1, please upgrade as explained here.
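
    If you prefer to upgrade from the command line, the following PowerShell commands (run as Administrator) are one way to do it; the distribution name "Ubuntu" below is only an example, so substitute the name shown by wsl.exe --list:

    # Make WSL 2 the default for new distributions
    wsl.exe --set-default-version 2
    # Convert an existing WSL 1 distribution (assumed here to be named "Ubuntu") to WSL 2
    wsl.exe --set-version Ubuntu 2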

  3. Update WSL, if needed:

    wsl.exe --update
  4. Open WSL, and check that GPUs are available:

    nvidia-smi
  5. In WSL, install the basic Python packages:

    sudo apt update
    sudo apt install python3-pip python-is-python3
  6. Then, clone the Subnet repository:

    git clone https://github.com/hypertensor-blockchain/dsn.git
  7. In the repository directory, check that PyTorch is installed and can see your GPU (the command should print True):

    python3 -c "import torch; print(torch.cuda.is_available())"
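
    If the command prints False or fails to import torch, the Subnet's dependencies are probably not installed yet. Follow the repository's own README for the exact steps; a minimal sketch, assuming the repository ships a requirements.txt, looks like this:

    cd dsn
    # Install the Subnet's Python dependencies (assumes a requirements.txt is provided)
    pip install -r requirements.txt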

Got an error? Check out the "Troubleshooting" page. Most errors are covered there and are easy to fix, including:

  • hivemind.dht.protocol.ValidationError: local time must be within 3 seconds of others

  • Killed

  • torch.cuda.OutOfMemoryError: CUDA out of memory

If your error is not covered there, let us know in Discord and we will help!

Want to share multiple GPUs? You need to run a separate Subnet validator server for each GPU. Open a separate WSL console for each GPU, then run this in the first console:

CUDA_VISIBLE_DEVICES=0 python -m subnet.cli.run_server Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

Do the same for each console, replacing CUDA_VISIBLE_DEVICES=0 with CUDA_VISIBLE_DEVICES=1, CUDA_VISIBLE_DEVICES=2, etc.
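
If you only have one GPU, the same command works without the CUDA_VISIBLE_DEVICES prefix (the model name is simply the one used throughout this guide):

python -m subnet.cli.run_server Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2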

Once all blocks are loaded, check https://dashboard.hypertensor.org to confirm that your server is available. If your server is listed as available through a "Relay", please read the section below.

Making the server directly available

If you are behind a NAT or a firewall, the DSN (Decentralized Subnet) standard uses relays for NAT/firewall traversal by default, which negatively impacts performance. If your computer has a public IP address, we strongly recommend setting up port forwarding to make the server directly available.

We explain how to do it below:

  1. Create the .wslconfig file in your user's home directory in Windows with the following contents:

    [wsl2]
    localhostforwarding=true
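
    For the new setting to take effect, restart WSL from PowerShell (this shuts down all running distributions, so save your work first):

    wsl.exe --shutdown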
  2. In WSL, find out the IP address of your WSL container (172.X.X.X):

    sudo apt install net-tools
    ifconfig
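
    If you prefer not to install net-tools, the preinstalled ip tool shows the same information; the interface is usually named eth0 in WSL, but check the output if it differs:

    ip addr show eth0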
  3. In Windows (PowerShell), allow traffic to be routed into the WSL container (replace 172.X.X.X with the IP address (inet) you found in step 2):

    netsh interface portproxy add v4tov4 listenport=31330 listenaddress=0.0.0.0 connectport=31330 connectaddress=172.X.X.X
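
    You can verify that the forwarding rule was added with:

    netsh interface portproxy show v4tov4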
  4. Set up your firewall (e.g., Windows Defender Firewall) to allow traffic from the outside world to port 31330/tcp.

    1. You can also add an inbound rule for the port in the Windows Defender Firewall with Advanced Security settings.
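
    If you prefer PowerShell, one way to create such an inbound rule (the rule name is arbitrary) is:

    New-NetFirewallRule -DisplayName "Subnet port 31330" -Direction Inbound -Protocol TCP -LocalPort 31330 -Action Allow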

  5. If you have a router, set it up to allow connections from the outside world (port 31330/tcp) to your computer (port 31330/tcp).

Relay servers in the Decentralized LLM Standard subnet receive lower rewards by default.
