GPT

You can chat with the decentrally hosted model at chat.hypertensor.org, which hosts an uncensored version of Llama 3.1 8B. This model is validated in a subnet using the Decentralized Subnet Standard.

This is a beta version of the chat GPT, so expect bugs.

Features

  • Text generation

  • Voice to text

  • Text to voice

Performance

With good GPUs, a fast internet connection, and favorable geography, expect between 8 and 20 tokens/s.

Model          GPU            Tokens Per Second
Llama 3.1 8B   NVIDIA 3070    6-15
Llama 3.1 8B   NVIDIA T4      4-8.5
Llama 3.1 8B   NVIDIA 4090    7.1-20

Tokens per second can drop when many clients are using the subnet, or when there are no nodes in your region, because of the added latency.

As this is a testnet and unincentivized, most of the nodes in the subnet won't be using high-performance hardware.
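
If you want to check throughput yourself, tokens per second is simply the number of generated tokens divided by the wall-clock time of the response. Below is a minimal sketch of that measurement; it is not tied to any Hypertensor API, and the `fake_stream` generator is purely illustrative, standing in for whatever iterable of streamed tokens or text chunks your client gives you.

```python
import time


def tokens_per_second(stream):
    """Rough throughput of a token stream.

    `stream` is any iterable that yields one generated token (or text chunk)
    at a time -- for example, the streamed chunks returned by a chat client.
    """
    start = time.monotonic()
    count = 0
    for _ in stream:
        count += 1
    elapsed = time.monotonic() - start
    return count / elapsed if elapsed > 0 else 0.0


# Illustrative stand-in: emits 100 "tokens" at roughly 10 tokens/s.
def fake_stream(n=100, delay=0.1):
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"


print(f"{tokens_per_second(fake_stream()):.1f} tokens/s")
```

Note that counting streamed chunks only approximates the model's true token count, since a chunk may contain more or less than one tokenizer token.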
