Run Server
Getting Started
Ensure the .env file in the root directory has the correct RPC URL set for DEV_RPC.
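As a minimal sketch, the relevant .env entry might look like the following; the URL shown is a placeholder, not a real endpoint:

```shell
# .env (project root)
# DEV_RPC must point at your RPC endpoint; the URL below is a placeholder.
DEV_RPC=wss://your-rpc-endpoint.example:9944
```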
Server
Running a Server on Linux
Running a Server on Linux with an AMD GPU
https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU
Running a Server on Windows (WSL)
https://github.com/bigscience-workshop/petals/wiki/Run-Petals-server-on-Windows
Note
This will not run on Windows natively. Be sure to use WSL if you are on Windows.
A relay server receives fewer rewards than a direct server, currently 33% fewer. Relay servers are slower and negatively impact performance.
Relay servers may be unresponsive at times and can break consensus.
Testnet v1.0 will test the peer consensus mechanism with relay servers. Relay servers may be removed from the peer consensus mechanism, and thus become unable to receive rewards, during or after live testing.
Run Server
Run the server with the following arguments:
model_path: The Hugging Face model path.
--public_ip: The public IP of the server for other peers to connect to.
--port: The port of the server for other peers to connect to (open port before running the command).
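As a sketch of how these arguments fit together, an invocation might look like the following. The module path petals_tensor.cli.run_server, the model name, and the IP and port values are assumptions for illustration; substitute your own values:

```shell
# Hypothetical example; the entry point, model, IP, and port are placeholders.
# Open the chosen port in your firewall before running this command.
python -m petals_tensor.cli.run_server bigscience/bloom-560m \
  --public_ip 203.0.113.10 \
  --port 31330
```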
Specify --initial_peers
If the initial peers found in the petals_tensor/constants.py file are not online, you can specify initial peers with the --initial_peers argument.
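As a sketch, adding --initial_peers might look like the following; the multiaddress format follows upstream Petals conventions, and the peer address and other values shown are placeholders:

```shell
# Hypothetical example; the peer multiaddress below is a placeholder,
# not a real online peer. Replace it with a known-good peer address.
python -m petals_tensor.cli.run_server bigscience/bloom-560m \
  --public_ip 203.0.113.10 \
  --port 31330 \
  --initial_peers /ip4/198.51.100.5/tcp/31330/p2p/QmPeerIdPlaceholder
```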
Note
This should begin loading blocks (not to be confused with the blockchain blocks) within 5-10 minutes or less. If it doesn't output logs such as Loaded bigscience/bloom-560m block 1
within 5-10 minutes, try restarting your server or computer. Once these logs begin appearing, it can take up to 30 minutes to complete.