Run AI Locally.
No Setup Required.
The easiest way to run LLMs on your Mac. Open source, beginner-friendly, and powered by Apple's MLX framework.
Blazing-Fast Token Generation
Built directly on Apple's MLX framework to deliver industry-leading inference speeds on M-series chips.
Native Feel
Built with SwiftUI for a truly Mac-like experience. Smooth animations, frosted glass, and seamless integration.
Open Source
Transparency is key. Inspect the code, contribute features, and run models with no hidden binaries.
Benchmarks. Off the charts.
Leveraging Apple silicon's GPU and unified memory to deliver token generation speeds that leave others in the dust.
Llama-3-8B-Instruct on MacBook Pro M3 Max
Run the best models.
Get Started in Seconds
Download Generative Feedback and start running AI models locally. No account required.
Requires macOS 14 or later and Apple silicon (M1/M2/M3/M4)