RE: LeoThread 2026-01-29 16-26


The 27M model that managed to beat ChatGPT the other month was a very specialized model built to solve one pattern-recognition benchmark that text-generation models struggle with. I don't think that counts as truly beating the massive models, because those models can do text generation and have massive knowledge. The 27M model can't solve any problem that requires knowledge, only patterns.

For scale, ChatGPT 4 is estimated at approximately 1.8T parameters, which would make it 1,800,000M compared to this small 27M model.
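A quick back-of-the-envelope check of that ratio, assuming the rumored 1.8T figure (not an official number):

```python
# Rough comparison of parameter counts.
gpt4_params = 1.8e12       # ~1.8 trillion parameters (rumored estimate)
small_model_params = 27e6  # 27 million parameters

ratio = gpt4_params / small_model_params
print(f"ChatGPT 4 would be roughly {ratio:,.0f}x larger than the 27M model")
# -> roughly 66,667x larger
```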

Still, it's a step in the right direction. #technology

https://inleo.io/threads/view/vimukthi/re-leothreads-qraqcket?referral=vimukthi
