
When should I use LLMs vs SLMs?

When it comes to the world of language models and Gen AI, a key question for companies looking to adopt these innovations is which model(s) to use.

As if the plethora of foundation models out there weren't complicated enough, the introduction of SLMs, or Small and Specialized Language Models, has made the choice even more daunting for decision makers trying to pick the right option for their organization.

In a previous article, we discussed what SLMs are (give it a read if you haven't already done so!), so we will not dive further into this topic. Today’s article talks about when it is a good idea to use LLMs (Large Language Models), and when it makes sense to use SLMs.

As you know, business and technology decisions involve trade-offs, and every case should be assessed individually. In this article, we hope to provide you with a high-level framework to help you decide between LLMs and SLMs.

When does it make sense to use LLMs?

Let’s first take a look at some reasons to use LLMs:

You want a general purpose model

One of the key features of LLMs is that they are trained on a vast corpus of data spanning many domains. Although this can lead to unintended consequences like hallucinations, it also gives the model far greater breadth in what it can do. So if you are looking for a general purpose model that can do a bit of everything, LLMs could be suitable for you.

You have unconstrained resources and budget

Because LLMs are trained on an enormous amount of data, building your own model on top of an LLM can be costly. But if cost and resource constraints are not a concern for you, using LLMs still makes sense.

You want something off the shelf

Perhaps all you are looking for is some lightweight fine-tuning of a base model—something off the shelf. In this case, using LLMs is a good option.
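
To make this concrete, here is a minimal sketch of what "off the shelf" can look like in practice. It assumes a hosted, OpenAI-style chat completions API via the openai Python SDK; the model name and prompts are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: calling an off-the-shelf hosted LLM.
# Assumes the `openai` Python SDK and an OPENAI_API_KEY set in the environment;
# the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any general-purpose hosted model
    messages=[
        {"role": "system", "content": "You are a helpful assistant for our support team."},
        {"role": "user", "content": "Draft a polite reply to a customer asking about a delivery delay."},
    ],
)
print(response.choices[0].message.content)
```

There is no training pipeline and no data preparation here: the trade-off is that the model only knows what you put in the prompt, not your proprietary data.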

When does it make sense to use SLMs?

If the above do not resonate with you, below are some scenarios where using SLMs is more fitting:

You have a very specific use case and specialized corpus of data

If you are looking for something that is neither off the shelf nor general purpose, but rather focused on a specific use case and trained on your proprietary data, LLMs might not be the best fit. Instead, SLMs are a more fit-for-purpose and efficient way to train your specialized models.
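
As a rough illustration of what this kind of specialized training can look like, here is a minimal sketch of fine-tuning a small open base model on a proprietary text corpus with Hugging Face Transformers. The base model name, file path, and hyperparameters are placeholder assumptions, not Arcee's actual pipeline.

```python
# Minimal sketch: fine-tuning a small open model on a proprietary corpus.
# Assumes `transformers` and `datasets` are installed; model name, data path,
# and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "microsoft/phi-2"  # hypothetical small base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Proprietary domain text, one document per line (path is a placeholder).
dataset = load_dataset("text", data_files={"train": "proprietary_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-checkpoints",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

Because the corpus is small and domain-specific, a run like this fits on far more modest hardware than anything involving an LLM-scale dataset, which is exactly the efficiency argument above.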

You want fast time to value in training and deploying your model

Related to the point above, if you are looking to train and deploy a specialized language model quickly, SLMs are a better choice than LLMs. A smaller dataset means less complexity and shorter training time, which leads to faster time to value.

You do not want to spend extra money training your model with irrelevant data

Similarly, training your model on a smaller, specialized corpus of data costs a fraction of what training an LLM does. If this sounds like an enticing proposition, SLMs may make more sense for you.

You want full ownership of your language model living in your own cloud

At Arcee, SLM doesn't just stand for "small and specialized". In our book, SLMs are also secure language models, because they can be trained and deployed 100% in your own cloud. So if your organization holds a mountain of proprietary data that you want to stay 100% yours, building SLMs with Arcee would be an excellent fit.

~~~~~

In sum, the general rule of thumb is this: if you have specialized use cases that can only be realized with your own proprietary data, SLMs probably make more sense than LLMs. However, if you are looking for a general purpose model that can do a range of things, and you are not resource constrained (in terms of both cash and GPUs), then LLMs could still be a good option.

If you have decided that SLMs are a better fit for your use case, get in touch with Arcee. Our unique end-to-end SLM Adaptation System can help you build your own SLMs, using your own data, in your own cloud. Talk to our team to learn more!