RE: LeoThread 2025-08-13 21:50
#summarystats – July 15, 2025 to August 13, 2025
- Videos summarized: 26,965
- Total Output Tokens posted to chain: 33,368,643
Account | Videos Summarized | Output Tokens Produced |
---|---|---|
@ben.haase | 20,366 | 25,648,936 |
@taskmaster4450le | 5,846 | 6,882,689 |
@mes | 260 | 228,550 |
@drax.leo | 126 | 174,950 |
@winanda | 118 | 129,666 |
@monkmasters | 77 | 88,692 |
@anderssinho | 47 | 67,895 |
@vimukthi | 31 | 50,261 |
@lordshah | 48 | 44,334 |
@onealfa | 12 | 15,432 |
@moretea | 15 | 13,671 |
@d-zero | 7 | 10,142 |
@khantaimur | 4 | 4,263 |
@tokenizedsociety | 3 | 3,013 |
@tbnfl4sun | 2 | 2,734 |
@leostrategy | 2 | 2,123 |
@ahmadmanga | 1 | 1,292 |
Historical Data
- Total Videos summarized: 103,630
- Total Output (all-time): 102,808,944
What are output tokens?
In the context of Large Language Models (LLMs), output tokens are simply the individual units of text that the model generates as its response.
Think of them like building blocks of text. Each token is a single unit of text, such as a word, part of a word (a subword), or a punctuation mark.
When an LLM generates text, it predicts one token at a time. The model outputs a sequence of these tokens to form a complete response, such as a sentence or paragraph.
For example, if an LLM is asked to generate a response to the prompt "What is your name?", the output tokens might be:
- "My"
- "name"
- "is"
- "LLaMA"

The model has generated a sequence of 4 output tokens to form the complete response "My name is LLaMA".
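To make the idea concrete, here is a toy sketch in Python. Real LLM tokenizers (e.g. BPE-based ones) split text into subword units, so token counts usually differ from word counts; a plain whitespace split is used here only to illustrate counting output tokens. The function name `toy_tokenize` is hypothetical.

```python
# Toy illustration only: real LLM tokenizers use subword schemes (BPE etc.),
# so actual token counts differ from simple word counts.
def toy_tokenize(text: str) -> list[str]:
    """Split text into 'tokens' on whitespace (a deliberate simplification)."""
    return text.split()

response = "My name is LLaMA"
tokens = toy_tokenize(response)
print(tokens)       # ['My', 'name', 'is', 'LLaMA']
print(len(tokens))  # 4 tokens for this response
```

Billing and usage stats like the ones above are counted in these tokens, which is why long summaries produce far more output tokens than short replies.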
Ooh, I thought they received such tokens haha. That could be a way to monetize: people buy output tokens and send them to the bot, and the bot then generates the words and "consumes the tokens".
!vote
✅ Voted thread successfully!
Vote weight: 6.47%
!BBH !PIZZA
We crossed 100 million tokens output.
Damn, Ben has been busy lol