While large language models (LLMs) are gaining the world's attention for their astonishing capabilities, they are not always the right tool for every task. Working with LLMs via APIs introduces many challenges, such as application complexity, cloud network latency, and data privacy concerns. Sometimes we only need a Swiss army knife to cut a piece of string, and a high-precision industrial laser cutter would be overkill.

Small Language Models (SLMs) are receiving increasing attention for simple use cases such as small-scale Q&A, basic classification, and automation tasks on edge and mobile devices. Microsoft's Phi family is a great example of how SLMs trained on high-quality data can deliver outsized performance and be invaluable for the right tasks.