Mar 04 2025
Artificial Intelligence

Small Language Models (SLMs): A Cost-Effective, Sustainable Option for Higher Education

Small language models offer efficient computing that requires fewer resources than their larger counterparts.

Small language models, known as SLMs, create intriguing possibilities for higher education leaders looking to take advantage of artificial intelligence and machine learning. 

SLMs are miniaturized versions of the large language models (LLMs) that spawned ChatGPT and other flavors of generative AI. For example, compare a smartwatch to a desktop workstation (monitor, keyboard, CPU and mouse): The watch has a sliver of the computing muscle of the PC, but you wouldn’t strap a PC to your wrist to monitor your heart rate while jogging.

SLMs can potentially reduce costs and complexity while delivering identifiable benefits — a welcome advance for institutions grappling with the implications of AI and ML. SLMs also allow creative use cases for network edge devices such as cameras, phones and Internet of Things (IoT) sensors.

Just as a smartwatch applies basic computing to specific demands, SLMs apply learning automation in smaller doses where it can do plenty of good. To mine these opportunities, higher education leaders need to school themselves on the basics: what SLMs do and how they deliver value across campus.

What Are Small Language Models?

SLMs are spinoffs of LLMs, which have garnered massive attention since the introduction of ChatGPT in late 2022. Drawing on the power of LLMs, ChatGPT depends on specially designed microchips called graphics processing units (GPUs) to mimic human communication. The models ingest immense volumes of text, sounds or visual data and train themselves to learn from hundreds of billions or even trillions of variables, called parameters, according to IBM.

SLMs, by contrast, use substantially fewer parameters — from a few million to a billion. They can’t do everything an LLM can do, but their small size pays off in specific scenarios.
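To put that parameter gap in concrete terms, a rough back-of-the-envelope calculation shows how much memory a model's weights alone require. The parameter counts and 16-bit precision below are illustrative assumptions, not figures from the article:

```python
# Rough memory footprint of model weights at 16-bit (2-byte) precision.
# Parameter counts are illustrative, not tied to any specific product.
def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Estimate the gigabytes needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

llm_params = 175_000_000_000   # an LLM with hundreds of billions of parameters
slm_params = 500_000_000       # an SLM in the hundreds of millions

print(f"LLM weights: ~{weight_memory_gb(llm_params):.0f} GB")   # ~350 GB
print(f"SLM weights: ~{weight_memory_gb(slm_params):.1f} GB")   # ~1.0 GB
```

The SLM in this sketch fits comfortably in the memory of a laptop or a high-end phone, while the LLM requires a rack of specialized GPU servers — which is the core of the cost argument.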

“Most universities are like small cities,” says Sidney Fernandes, CIO and vice president of digital experiences at the University of South Florida. Institutions oversee operational domains far beyond their core educational responsibilities: parking, transportation, housing, healthcare, buildings, law enforcement, athletics and more. “All of those operationally focused areas present places where these models can be specifically targeted to domains,” Fernandes adds.  

SLMs vs. LLMs: Key Differences and Advantages

Smaller models cost less to operate, as the world noticed with the arrival of DeepSeek, a small, open AI model from China. DeepSeek’s potential to reduce AI costs triggered a temporary sell-off in global financial markets, as investors feared it might challenge the dominance of NVIDIA, the global leader in GPU chips.

Lower AI costs will be welcomed in higher education, according to Jenay Robert, senior researcher with EDUCAUSE and co-author of the 2025 AI Landscape Study. Robert noted that a scant 2% of respondents to an EDUCAUSE survey said they had new funding sources for AI-related costs.

“Institutions are likely trying to fund AI-related expenses by diverting or reallocating existing budgets, so SLMs can provide an opportunity to decrease those costs and minimize budgetary impacts,” she says.

SLMs can also help with the data governance issues that LLMs create. Colleges and universities worry about data protection, privacy breaches, compliance demands and potential copyright or intellectual property infractions with LLMs, Robert says.

“Institutions are more likely to be able to run small language models on-premises, reducing risks related to data protection,” Robert adds. EDUCAUSE’s surveys noted that many higher education leaders prefer on-premises AI implementations in “walled gardens” that answer data governance challenges.

Leaders looking to reduce the carbon footprint of AI applications in education may see more advantages from SLMs. “Because small language models are more efficient than large language models, they might be helpful in mitigating the impacts of generative AI on the environment,” she says.

How Small Language Models Can Transform Higher Education

USF’s Fernandes suggests the greatest benefits of SLMs may happen at the network edge in devices such as smartphones, cameras, sensors and laptops. Manufacturers are already adding AI chips to devices to help with inference (running a trained model to generate responses to users’ requests). More devices will add inference capability in years to come, he says.

Edge devices can also be safer from a privacy perspective because data stays on the device rather than traveling to remote servers that attackers might target. “If you install it locally, you can potentially have more sensitive data that is domain-specific,” Fernandes says. That could help a campus police department or healthcare pros staffing a campus clinic, for instance.

Domain-specific SLMs can be installed on campus and target individual academic departments, and they can be trained for specific pedagogic tasks to help students grasp basic concepts. In operations, SLMs could be trained on systems to provide predictive maintenance, helping managers replace aging components or machinery to avert far more costly breakdowns. 
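As a sketch of the predictive-maintenance idea, a simple statistical check can flag sensor drift before a component fails outright. This is not an SLM itself, just the kind of domain-specific signal such a model could be trained on; the sensor values, window size and threshold are hypothetical:

```python
import statistics

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that drift beyond `threshold` standard deviations
    of the preceding window -- candidates for inspection before a
    far more costly breakdown occurs."""
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(readings[i] - mean) > threshold * stdev:
            flags.append(i)
    return flags

# Steady vibration levels, then a sudden spike as a bearing wears out.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 4.5]
print(flag_anomalies(vibration))  # [10] -- the spike at the end
```

A campus facilities team could route flagged readings to a maintenance queue, replacing aging components on a schedule instead of after a failure.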

For all of their benefits, SLMs still require solid data governance to ensure high-quality results. Smaller models must be carefully fine-tuned and monitored to reduce the risks of hallucinations and biased or offensive outputs. “Understanding the benefits as well as the shortcomings of those models is going to be very, very critical,” Fernandes says.

The Future of Model Compression and Distillation in AI

SLMs are typically produced through processes called model compression and model distillation, in which a larger “teacher” model trains a smaller “student” model.
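A minimal sketch of the distillation objective, assuming the standard soft-target formulation (a temperature-scaled softmax over the teacher's outputs, with the student penalized by KL divergence for diverging from it). The logits and temperature are made up for illustration:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature that softens the output distribution."""
    z = logits / temperature
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's.

    Training the student to minimize this teaches it to reproduce the
    teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.5, 0.2])   # large model's raw scores
student = np.array([3.0, 1.0, 0.5])   # small model's raw scores
loss = distillation_loss(teacher, student)
```

The loss falls toward zero as the student's distribution approaches the teacher's, which is what lets a model with a fraction of the parameters inherit much of the larger model's behavior on a target domain.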

Because they can be trained on specific domains, SLMs open more opportunities for autonomous agents that operate in the background, doing everyday chores for people on campus. “As you have more model compression, you're going to have IoT devices with SLMs built into them that could act as agents themselves,” Fernandes says. Eventually, the agents could become smart enough that they might talk to each other, saving even more human labor.

While the models get smaller, they’ll retain considerable computing power. “The SLMs of tomorrow will probably do what the LLMs of today are doing,” Fernandes says. These much smaller, more efficient models can be installed directly on edge devices.

“They'll come with inbuilt capabilities, and then there will be vendors taking advantage of those,” Fernandes adds. “Which means, how you manage your edge devices is going to be even more critical than it was before.”
