Meet Arcee-SuperNova: Our Flagship 70B Model, Alternative to OpenAI Meet Arcee-SuperNova: a groundbreaking model with state-of-the-art abilities in instruction-following and strong alignment with human preferences.
Arcee-SuperNova: Training Pipeline and Model Composition We trained Arcee-SuperNova-70B and Arcee-SuperNova-8B to be generally intelligent Llama-3.1-405B derivatives using intelligent distillation, novel post-training, and model merging techniques.
The Power of Non-English LLMs: Meet our Arabic model, Meraj (معراج) We've taken our groundbreaking general-purpose model, Arcee Nova, and enhanced it for Arabic – leading to an Arabic-language LLM that's enterprise-ready, with unprecedented text-generation and comprehension capabilities.
How to Choose Between Open Source and Closed Source LLMs: A 2024 Guide Companies are becoming increasingly aware of the potential business value of open source large language models, which are quickly approaching the performance of their closed source counterparts.
Arcee Swarm: Unlocking AI Expertise Through Specialization Get ready for a game-changer in AI for complex problem-solving and decision-making: Arcee Swarm, the Mixture of Agents architecture release from Arcee AI. Rather than relying on one LLM to handle all tasks, Arcee Swarm routes your query to a collection of smaller expert models.
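Arcee AI hasn't published Swarm's routing internals in this teaser, but the core idea of query routing can be sketched in a toy form. The expert names and keyword sets below are purely illustrative assumptions, not Arcee Swarm's actual configuration; a production router would use a learned classifier or embedding similarity rather than keyword overlap.

```python
# Toy Mixture-of-Agents router: picks the expert model whose keyword
# profile best overlaps the query. Expert names/keywords are hypothetical.
EXPERTS = {
    "legal-expert": {"contract", "liability", "compliance", "clause"},
    "code-expert": {"python", "bug", "function", "compile"},
    "finance-expert": {"revenue", "forecast", "margin", "invoice"},
}

def route(query: str) -> str:
    """Return the name of the expert model that should answer `query`."""
    words = set(query.lower().split())
    scores = {name: len(words & kws) for name, kws in EXPERTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a generalist model when no specialist matches.
    return best if scores[best] > 0 else "generalist"
```

The design point is that each query pays only for the expert it needs, instead of invoking one large generalist model for everything.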
Do Direct Preference Optimization (DPO) with Arcee AI's training platform Direct Preference Optimization (DPO) is one of the top methods for fine-tuning LLMs... Today, we bring you support for DPO on our model training platform and its training APIs.
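The teaser doesn't spell out the DPO objective, so here is a minimal sketch of the per-pair loss from Rafailov et al. (2023). This is a generic illustration of the method, not Arcee AI's implementation; the inputs are the summed log-probabilities that the policy and a frozen reference model assign to the chosen and rejected completions of a preference pair.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - ref margin))."""
    margin = (policy_chosen_logp - policy_rejected_logp) \
           - (ref_chosen_logp - ref_rejected_logp)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy prefers the chosen completion more strongly than the reference does, the margin grows and the loss shrinks, which is exactly the preference alignment DPO optimizes without a separate reward model.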
Arcee-Spark Gets an Upgrade: Introducing Llama-Spark! Coming on the heels of Arcee-Spark – our incredibly performant 7B model – we now bring you Llama-Spark. Built on Llama-3.1-8B, Llama-Spark is a conversational AI that you'd never suspect is just an 8B parameter model.
Understanding Large Language Models: Open Source vs. Closed Source LLMs How much do you know about Large Language Models (LLMs), the tech behind AI-powered assistants? We give you the basics on both open source and closed source LLMs.
Distilling LLMs with Compact, Powerful Models for Everyone: Introducing DistillKit by Arcee AI First, Arcee AI revolutionized Small Language Models (SLMs) with Model Merging and the open-source repo MergeKit. Today we bring you another leap forward in the creation and distribution of SLMs with an open-source tool we're calling DistillKit.
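To make the distillation idea concrete, here is a minimal sketch of a classic logit-distillation objective (temperature-softened KL divergence, in the style of Hinton et al., 2015). This illustrates the general technique, not DistillKit's specific loss; the helper functions below are written for clarity, not speed.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions.

    Scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The loss is zero when the student's distribution matches the teacher's and grows as they diverge, which is what pushes the small model to mimic the large one.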
DistillKit v0.1 by Arcee Labs: The Technical Paper Read the DistillKit v0.1 by Arcee AI Technical Paper: our new open-source tool that's set to change how we create and distribute Small Language Models (SLMs).
Train, Merge, & Domain-Adapt Llama-3.1 with Arcee AI Get Llama-3.1 but better – customize the open-source model for all your needs, using Arcee AI's training, merging, and adaptation techniques and tools. Our team created this guide to get you started.
Partner Spotlight: Arcee AI 🤝 MongoDB Joint customers use MongoDB & Arcee AI to take data from JSON files to world-class custom language models in just a few clicks.
Arcee-AI Releases Two Open Datasets Today, we have made two important datasets publicly available: 1. Agent Data: This dataset was instrumental in training Arcee-Agent. It contains Salesforce-xlam, agent-flan, and a custom version of Glaive-FC2 with 20k extended samples that require the model to use tools sequentially within a single response, along with Magpie-Pro
Introducing Arcee-Nova What a week here at Arcee AI. On the heels of Arcee-Scribe yesterday, today we bring you Arcee-Nova – our highest-performing open source model... Evaluated on the same stack as the OpenLLM Leaderboard 2.0, it is the top-performing open source model tested on that stack. Its performance approaches that of