#llms
Articles with this tag
I wanted to share some information about comparing open local models. I started with public benchmarks and leaderboards, both open and closed...
As of now, the best-performing large models require GPUs both for training and for running (inference, generation, etc.). Training, in general, takes more resources...
Humans need to read through the full length of LLM responses to verify them. Can we eliminate that? Ever? People make mistakes. Similarly, LLMs...