DistillKit v0.1 by Arcee AI: The Technical Paper Read the DistillKit v0.1 technical paper: our new open-source tool that's set to change how we create and distribute Small Language Models (SLMs).
Arcee AI Releases Two Open Datasets Today, we have made two important datasets publicly available. 1. Agent Data: This dataset was instrumental in training Arcee-Agent. It contains Salesforce-xlam, agent-flan, and a custom version of Glaive-FC2 with 20k extended samples that require the model to perform tool calls sequentially within a single response, along with Magpie-Pro
Introducing Arcee-Nova What a week here at Arcee AI. On the heels of Arcee-Scribe yesterday, today we bring you Arcee-Nova – our highest-performing open-source model yet. Evaluated on the same stack as the OpenLLM Leaderboard 2.0, it is the top-performing open-source model tested on that stack. Its performance approaches that of
Introducing Arcee-Scribe: Your Creative Writing Partner Need a guide or just some inspiration for your writing tasks – especially those that require a dose of creativity? Get your artistic juices flowing with the latest model by Arcee AI.
Optimizing LLM Training with Spectrum Here at Arcee AI, we're the pioneers of training performant and efficient LLMs with Model Merging... And now we bring you *yet another* cutting-edge technique that also dramatically optimizes your training and improves your models.
Introducing Arcee Agent: A Specialized 7B Language Model for Function Calling and Tool Use Arcee Agent is yet another Arcee model punching above its weight: at just 7B parameters (initialized from Qwen2-7B), it outperforms much larger models. Try it out for function calling and tool use!
Arcee Spark: A Compact & Efficient 7B Parameter Language Model Looking for proof that Small is the new Big when it comes to language models? Look no further than the model we've just dropped here at Arcee AI: you get top-notch results with just 7B parameters.