Introduction
Getting Started
It's recommended to use a permanent public IP. Hosting providers such as AWS offer a feature called Elastic IPs that attaches a permanent IP to your server.
Run `git clone https://github.com/hypertensor-blockchain/subnet-llm.git`
Create an `.env` file in the root directory and add your mnemonic phrase to the `PHRASE` variable. If you don't have an account, create one. An example is shown in `.env.example` in the root directory.
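A minimal `.env` might look like the following sketch; `PHRASE` is the variable named above, and the value shown is only a placeholder, not a real mnemonic:

```bash
# .env — replace the placeholder with your account's mnemonic phrase
PHRASE="your twelve or twenty-four word mnemonic phrase goes here"
```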
Ensure the `INITIAL_PEERS` in `src/petals_tensor/constants.py` are currently active peers, if needed.
Update `MODEL` in `src/petals_tensor/health/config.py` to the model you're hosting, if needed.
Update `DEV_URL` in `src/petals_tensor/substrate/config.py` to a live validator IP and port, if needed. Example: `DEV_URL = "ws://127.0.0.1:9945"`
See Run Server for more details.
Install the repository with `python -m pip install .`

Note: After completing all of the steps, you may need to run `python -m pip install .` again.
See Steps
Steps
Steps to connect to Hypertensor and begin generating incentives
Note: These steps will eventually be combined into one step as development continues. In total, the following steps require three separate CLI sessions.
This starts your AI model node.
Use a terminal multiplexer (such as tmux or screen) so the node continues running after you exit the CLI, as in the example below.
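For example, with tmux (one common terminal multiplexer):

```bash
# start a named session and launch the node inside it
tmux new -s subnet-node
# ... run the server command from the Run Server page here ...

# detach with Ctrl+b then d; reattach later with:
tmux attach -t subnet-node
```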
By this point, you must have completed Getting Started. Running this command stores your data, which is used for future commands and consensus.
This will wait until the subnet is successfully voted in, if it hasn't been already.
If the subnet is voted in:
This stakes to the blockchain and must be run to be included in consensus.
Helpful Tips
NVIDIA CUDA Installation Guides
This is useful if you're using a remote server and installing drivers from scratch. It contains all of the information for installing and configuring the driver.
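Once the driver is installed, a quick sanity check is:

```bash
# prints the driver version, CUDA version, and detected GPUs if the driver is working
nvidia-smi
```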
Install PyTorch
Be sure to use a combination of library versions that work well together (for example, a PyTorch build that matches your CUDA version).
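The exact install command depends on your CUDA version; the selector on pytorch.org generates commands like the following, where `cu121` is only an example:

```bash
# example: PyTorch wheels built against CUDA 12.1 — pick the index URL matching your CUDA version
python -m pip install torch --index-url https://download.pytorch.org/whl/cu121
```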
Ensure your GPU is available
Before running your subnet node, make sure the following outputs `True`:
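If the intended check is PyTorch's CUDA availability (an assumption based on the PyTorch step above), it looks like this:

```python
import torch

# prints True when PyTorch can see your GPU and its CUDA libraries
print(torch.cuda.is_available())
```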
Debugging
ConnectionRefusedError: [Errno 111] Connection refused
This error is likely due to the substrate configuration URL, which cannot connect to the blockchain. Go to `substrate/config.py` and make sure the `SubstrateConfig` class URL is correct.
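To verify the endpoint independently of the node, a quick probe with py-substrate-interface (assuming it is available in your environment) might look like this; the URL shown is just the example value from above:

```python
from substrateinterface import SubstrateInterface

# opens a websocket connection; raises ConnectionRefusedError if the endpoint is unreachable
substrate = SubstrateInterface(url="ws://127.0.0.1:9945")
print(substrate.chain)  # prints the chain name on success
```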
Killed
This happens since Windows doesn't allocate much RAM to WSL by default, so the server gets OOM-killed. To increase the memory limit, go to `C:/Users/Username` and create the `.wslconfig` file with this content:
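A typical `.wslconfig` that raises the memory limit looks like this; the 16GB value is only an example, so set it to what your machine can spare:

```ini
[wsl2]
memory=16GB
```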
To create a `.wslconfig` file, open your File Explorer, type and enter `%UserProfile%` to go to your profile directory in Windows, and follow the directions above. The file must not be a `.txt` file; ensure you create it with Nano or your preferred IDE or text editor.
This happens due to WSL clock precision errors. To resync the system clock, run `sudo ntpdate pool.ntp.org`
Privacy
Running peer-to-peer servers allows others to identify your IP address. If this is a concern, it is recommended to use a VPN, proxy, or other means to hide your IP address.
Swarm
The current implementation of Petals Tensor is meant to be used as one repository per server and is not meant for a swarm. To run multiple servers, you will need a separate copy of the repository in its own directory for each server to successfully interface with Hypertensor, as in the example below.
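For example, to run two servers on one machine you might keep two separate clones, each with its own `.env` and configuration (the directory names here are arbitrary):

```bash
# one clone per server, each installed and configured independently
git clone https://github.com/hypertensor-blockchain/subnet-llm.git subnet-llm-node1
git clone https://github.com/hypertensor-blockchain/subnet-llm.git subnet-llm-node2
```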
Health Monitor