Eclaire v0.4.0: Native Apple Silicon Support with MLX

TL;DR: Eclaire now runs AI models natively on Apple Silicon with MLX framework integration. New support for MLX-LM (text), MLX-VLM (vision), and LM Studio.

What is MLX?

MLX is Apple’s machine learning framework designed specifically for Apple Silicon. Built for efficient array operations on unified memory, MLX provides a NumPy-like interface with automatic differentiation and GPU acceleration through Metal.

Eclaire now leverages MLX to run AI models natively on Apple Silicon, taking full advantage of unified memory and Metal-based GPU acceleration.
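
If you want to experiment with these pieces outside of Eclaire, the MLX packages install via pip on Apple Silicon Macs. A minimal sketch (mlx, mlx-lm, and mlx-vlm are the official PyPI package names):

# Install the MLX core library plus the text and vision model packages
# (requires an Apple Silicon Mac)
pip install mlx mlx-lm mlx-vlm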

Choose Your MLX LLM Backend

Eclaire now supports three LLM provider backends optimized for Apple Silicon. Choose the one that best fits your needs:

MLX-LM

Provides text generation capabilities using MLX-optimized models. Perfect for the assistant, chat, and content management.
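
As a standalone sanity check, MLX-LM ships command-line entry points for one-off generation and for serving an OpenAI-compatible endpoint. A sketch, assuming the mlx-lm package is installed; the model name is just an example from the mlx-community catalog:

# One-off text generation with an MLX-optimized model
mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
  --prompt "Summarize unified memory in one sentence." --max-tokens 128

# Serve the same model behind a local OpenAI-compatible API
mlx_lm.server --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --port 8080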

MLX-VLM

Brings vision-language model support with multimodal capabilities. Ideal for photo analysis, OCR, document processing, and visual question answering. Handles both text and image inputs.
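
MLX-VLM has a similar command-line generator that accepts an image alongside the prompt. A sketch based on the MLX-VLM README; exact flags and the example model may vary between versions:

# Ask a vision-language model about a local image
python -m mlx_vlm.generate --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --prompt "Describe this image." --image photo.jpg --max-tokens 100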

LM Studio

Offers an intuitive GUI and powerful CLI tools for model management. Browse, download, and run models with a user-friendly interface. Supports both MLX-optimized and GGUF format models, providing flexibility in model selection.
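
The lms CLI covers the same model-management tasks as the GUI, and LM Studio's local server speaks the OpenAI API (1234 is its default port). A sketch; <model-key> stands in for whatever identifier lms ls reports for your model:

lms ls               # list models you have downloaded
lms load <model-key> # load a model into memory
lms server start     # start the local OpenAI-compatible server

# Verify the server is up and see which models it exposes
curl http://localhost:1234/v1/models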

Backend Selection Notes

Eclaire has a backend service (for AI assistant functionality) and a workers service (for data processing: extraction, tagging, OCR, and so on). Each service can use its own model.

  • MLX-LM can be selected as the Eclaire backend
  • MLX-VLM and LM Studio can be selected for either the Eclaire backend or the workers
  • Workers require vision support for processing images and documents, which both MLX-VLM and LM Studio provide (a quick way to check this is sketched below)
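
One way to confirm a backend can cover the workers' vision requirement is to send it an OpenAI-style chat completion that includes an image part. A hypothetical smoke test, assuming an OpenAI-compatible endpoint such as LM Studio's default local server; the model name is a placeholder and the base64 payload is elided:

# Send a text + image request to a local OpenAI-compatible endpoint
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-vision-model",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
      ]
    }]
  }'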

Detailed guides for each backend are coming soon.

Enhanced Model Import Workflow

The Eclaire model CLI allows you to import model definitions directly from Hugging Face with support for identifying MLX models and their capabilities (vision, text, etc.):

./tools/model-cli/run.sh import https://huggingface.co/mlx-community/gemma-3-4b-it-qat-4bit
# Interactive prompt will guide you through backend selection

[Screenshot: Imported Models]

This makes it much easier to import and configure models correctly on the first try.

Get Started

To try the new MLX support, you'll need:

  • A Mac with Apple Silicon
  • Eclaire v0.4.0 or later
  • At least one of the backends above installed (MLX-LM, MLX-VLM, or LM Studio)

Resources:

Happy building with local AI on Apple Silicon!