Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
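The headline numbers check out, as a quick sanity calculation. This is a minimal sketch using only the figures quoted above (17,000 vs 233 tokens/sec, one-tenth the power); the absolute power draw of either chip is not given, so only ratios are computed.

```python
# Sanity-check the claimed speedup and efficiency gain from the quoted figures.
hc1_tps = 17_000   # Taalas HC1, Llama 3.1 8B tokens/sec (as quoted)
h200_tps = 233     # Nvidia H200 tokens/sec (as quoted)
power_ratio = 1 / 10  # HC1 claimed to use one-tenth the power

speedup = hc1_tps / h200_tps
# Tokens per unit energy scales as throughput / power, so the
# efficiency gain is the speedup divided by the relative power draw.
efficiency_gain = speedup / power_ratio

print(f"speedup: {speedup:.0f}x")          # ~73x, matching the claim
print(f"tokens/joule gain: {efficiency_gain:.0f}x")  # ~730x
```

So if both quoted figures hold, the perf-per-watt gap would be roughly 730x, not just 73x.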
AI really needs dedicated hardware. I feel like if there were more chip manufacturing in the West, we might have more diverse chips.
Frankly, I’m really confused as to why this LLM demand for RAM isn’t encouraging new companies to manufacture RAM. If this is a bubble, then we all just wait it out; if it’s not a bubble, then someone else will swoop in and take up the market.