Meet Arcee-SuperNova: Our Flagship 70B Model, an Alternative to OpenAI Meet Arcee-SuperNova: a groundbreaking model with state-of-the-art instruction-following abilities and strong alignment with human preferences.
Arcee-SuperNova: Training Pipeline and Model Composition We trained Arcee SuperNova-70B and Arcee SuperNova-8B to be generally intelligent Llama-3.1-405B derivatives using intelligent distillation, novel post-training, and model merging techniques.
Arcee Swarm: Unlocking AI Expertise Through Specialization Get ready for a game-changer in AI for complex problem-solving and decision-making, with the release of Arcee AI's Mixture of Agents architecture: Arcee Swarm. Rather than relying on one LLM to handle all tasks, Arcee Swarm routes your query to a collection of smaller expert models.
Do Direct Preference Optimization (DPO) with Arcee AI's training platform Direct Preference Optimization (DPO) is one of the top methods for fine-tuning LLMs... It's available on our model training platform - and today, we bring you support for DPO on our training APIs.
Arcee-Spark Gets an Upgrade: Introducing Llama-Spark! Coming on the heels of Arcee-Spark – our incredibly performant 7B model – we now bring you Llama-Spark. Built on Llama-3.1-8B, Llama-Spark is a conversational AI that you'd never suspect is just an 8B-parameter model.
Distilling LLMs with Compact, Powerful Models for Everyone: Introducing DistillKit by Arcee AI First, Arcee AI revolutionized Small Language Models (SLMs) with Model Merging and the open-source repo MergeKit. Today we bring you another leap forward in the creation and distribution of SLMs with an open-source tool we're calling DistillKit.
DistillKit v0.1 by Arcee Labs: The Technical Paper Read the DistillKit v0.1 by Arcee AI Technical Paper: our new open-source tool that's set to change how we create and distribute Small Language Models (SLMs).