Average response time of all replies in the last few minutes.
Depends heavily on current server load and on the Context Size and Response Length you use.
On average, people are sending prompts of ~-- tokens.
OUTPUT TOKEN LIMITS
Maximum accepted value for Response Length when using this model.
Extremely high values can stall the machines for minutes at a time.
The average output length for this model is ~-- tokens.