Inference-Time Scaling Boosts LM Reasoning via Extended CoT
- Inference-time scaling enhances language model reasoning by extending chain-of-thought (CoT) reasoning at generation time
- Most existing approaches rely on a single policy model, leaving room for open-source innovation in local LLM deployment
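One common form of inference-time scaling is self-consistency: sample multiple CoT traces at a nonzero temperature and take a majority vote over their final answers. The sketch below illustrates the idea in plain Python; `generate_cot` is a hypothetical stand-in for a local LLM call, simulated here with a noisy arithmetic solver so the example is self-contained.

```python
from collections import Counter
import random

def generate_cot(prompt, temperature=0.8, seed=None):
    # Hypothetical stand-in for a sampled LLM call returning (reasoning, answer).
    # Simulates a solver for "17 * 3" that is right ~70% of the time.
    rng = random.Random(seed)
    answer = 51 if rng.random() < 0.7 else rng.choice([41, 54])
    reasoning = f"Step: 17 * 3 = {answer}"
    return reasoning, answer

def scale_inference(prompt, n_samples=16):
    """Self-consistency: sample n CoT traces, return the majority answer."""
    answers = [generate_cot(prompt, seed=i)[1] for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(scale_inference("What is 17 * 3?"))
```

Spending more compute per query (larger `n_samples`) raises the chance the majority vote recovers the correct answer, which is the core trade-off inference-time scaling exploits.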
