Taking the Stage: Arcee Orchestra
Learn how we're leveraging small language models (SLMs) to power agentic AI workflows in our new end-to-end, easy-to-use platform called Arcee Orchestra.

Arcee AI's Role in the INTELLECT-1 Open Source Revolution
The Arcee AI research team is honored to be among the contributors to the world's first fully decentralized training of a large language model (LLM). Read about the game-changing project led by Prime Intellect, and how we brought our expertise to the post-training phase.

Meet Arcee-SuperNova: Our Flagship 70B Model, Alternative to OpenAI
Meet Arcee-SuperNova: a groundbreaking model with state-of-the-art instruction-following abilities and strong alignment with human preferences.

Arcee-SuperNova: Training Pipeline and Model Composition
We trained Arcee SuperNova-70B and Arcee SuperNova-8B to be generally intelligent Llama-3.1-405B derivatives, using intelligent distillation, novel post-training, and model merging techniques.

Distilling LLMs with Compact, Powerful Models for Everyone: Introducing DistillKit by Arcee AI
First, Arcee AI revolutionized Small Language Models (SLMs) with Model Merging and the open-source repo MergeKit. Today we bring you another leap forward in the creation and distribution of SLMs with an open-source tool we're calling DistillKit.

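To give a flavor of the technique behind tools like DistillKit, here is a minimal sketch of the classic logit-distillation objective: a KL divergence between temperature-softened teacher and student distributions. This is a generic PyTorch illustration under standard assumptions, not DistillKit's actual API; the function name, tensor shapes, and temperature value are all hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    token distributions -- the classic logit-distillation objective.
    (Illustrative sketch; not DistillKit's actual interface.)"""
    t = temperature
    # Soften both distributions: log-probs for the student (KL input),
    # plain probs for the teacher (KL target).
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # "batchmean" matches the mathematical definition of KL divergence;
    # the t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)

# Toy usage: a batch of 4 token positions over a 32,000-entry vocabulary.
student_logits = torch.randn(4, 32000)
teacher_logits = torch.randn(4, 32000)
print(distillation_loss(student_logits, teacher_logits))
```

A higher temperature spreads probability mass over more of the vocabulary, letting the student learn from the teacher's full output distribution rather than only its top prediction.
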
DistillKit v0.1 by Arcee Labs: The Technical Paper
Read the technical paper for DistillKit v0.1 by Arcee AI: our new open-source tool that's set to change how we create and distribute Small Language Models (SLMs).

Arcee Spark: A Compact & Efficient 7B Parameter Language Model
Looking for proof that Small is the new Big when it comes to language models? Look no further than the model we've just dropped here at Arcee AI: you get top-notch results with just 7B parameters.