In the current landscape of generative AI, the ‘scaling laws’ have often dictated that more parameters equal more intelligence. However, Liquid AI is challenging this convention with the release of LFM2.5-350M. The model is effectively a technical case study in intelligence density, built with extended pre-training (scaled from 10T to 28T tokens) and large-scale reinforcement learning.

The significance of LFM2.5-350M lies in its architecture and training efficiency. While most AI companies have focused on frontier models, Liquid AI is targeting the ‘edge’ (devices with limited memory and compute) by proving that a 350-million-parameter model can outperform models more than twice its size on several evaluated benchmarks.

https://www.liquid.ai/weblog/lfm2-5-350m-no-size-left-behind

Architecture: The Hybrid LIV Backbone

The core technical differentiator of LFM2.5-350M is its departure from the pure Transformer architecture. It uses a hybrid structure built on Linear Input-Varying systems (LIVs).

Traditional Transformers rely entirely on self-attention mechanisms, which suffer from quadratic scaling issues: as the context window grows, the memory and computational requirements of the Key-Value (KV) cache increase. Liquid AI addresses this with a hybrid backbone consisting of:

  • 10 double-gated LIV convolution blocks: These handle the majority of the sequence processing. LIVs function similarly to advanced recurrent neural networks (RNNs) but are designed to be more parallelizable and stable during training. They maintain a constant-size state memory, reducing I/O overhead.
  • 6 Grouped Query Attention (GQA) blocks: By integrating a small number of attention blocks, the model retains high-precision retrieval and long-range context handling without the full memory overhead of a standard Transformer.

This hybrid approach allows LFM2.5-350M to support a 32k context window (32,768 tokens) while maintaining an extremely lean memory footprint.
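To make the double-gated convolution idea concrete, here is a toy NumPy sketch. The exact gating layout, kernel size, and normalization of the real LFM2.5 block are not specified in the article, so the shapes and structure below are illustrative assumptions only; the point is that a short causal convolution needs no growing KV cache.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def double_gated_conv_block(x, w_in, w_conv, w_out):
    """Toy double-gated short-convolution block (illustrative shapes).

    x:      (seq_len, d_model) input activations
    w_in:   (d_model, 3*d_model) projection producing value + two gates
    w_conv: (kernel, d_model) depthwise causal convolution weights
    w_out:  (d_model, d_model) output projection
    """
    seq_len, d = x.shape
    proj = x @ w_in
    v, g_pre, g_post = np.split(proj, 3, axis=-1)

    # Input-varying gate applied *before* the convolution.
    v = v * sigmoid(g_pre)

    # Depthwise causal convolution: position i only sees the previous
    # `kernel` steps, so the recurrent state is constant-size.
    k = w_conv.shape[0]
    padded = np.vstack([np.zeros((k - 1, d)), v])
    conv = np.stack([
        sum(w_conv[j] * padded[i + j] for j in range(k))
        for i in range(seq_len)
    ])

    # Second gate applied *after* the convolution, then project out.
    return (conv * sigmoid(g_post)) @ w_out
```

Because each output position depends only on a fixed-length window of gated inputs, memory stays constant in sequence length, unlike an attention layer whose KV cache grows with every generated token.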

Performance and Intelligence Density

LFM2.5-350M was pre-trained on 28 trillion tokens, an extremely high training-to-parameter ratio. This ensures that the model’s limited parameter count is used to its full potential, resulting in high ‘intelligence density.’
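The training-to-parameter ratio is simply total pre-training tokens divided by parameter count:

```python
params = 350e6   # 350M parameters
tokens = 28e12   # 28T pre-training tokens

ratio = tokens / params
print(f"{ratio:,.0f} tokens per parameter")  # 80,000 tokens per parameter
```

For context, the widely cited Chinchilla-optimal regime is on the order of ~20 tokens per parameter, so this is training thousands of times past that point to pack capability into a small model.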

Benchmarks and Use Cases

LFM2.5-350M is a specialist model designed for high-speed, agentic tasks rather than general-purpose reasoning.

Benchmark                        Score
IFEval (instruction following)   76.96
GPQA Diamond                     30.64
MMLU-Pro                         20.01

The high IFEval score indicates the model is efficient at following complex, structured instructions, making it suitable for tool use, function calling, and structured data extraction (e.g., JSON). However, the documentation explicitly states that LFM2.5-350M is not recommended for mathematics, complex coding, or creative writing. For those tasks, the reasoning capabilities of larger parameter counts remain necessary.
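As a sketch of the function-calling workflow such a model targets, the snippet below extracts and validates a JSON tool call from raw model output. The tool name, arguments, and output format are hypothetical examples, not the model’s documented API:

```python
import json

def parse_tool_call(model_output: str):
    """Extract the first JSON object from raw model output and
    validate it against a minimal tool-call shape."""
    start = model_output.find("{")
    end = model_output.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    call = json.loads(model_output[start:end + 1])
    if "name" not in call or "arguments" not in call:
        raise ValueError("missing 'name' or 'arguments' field")
    return call["name"], call["arguments"]

# Hypothetical raw completion from the model:
raw = 'Calling the tool now: {"name": "get_weather", "arguments": {"city": "Boston"}}'
name, args = parse_tool_call(raw)
print(name, args)  # get_weather {'city': 'Boston'}
```

A model that scores well on IFEval is more likely to emit output that passes this kind of strict structural validation on the first try, which is what makes small instruction-followers practical as agent components.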


Hardware Optimization and Inference Efficiency

A major hurdle for AI developers is the ‘memory wall’: the bottleneck created by moving data between the processor and memory. Because LFM2.5-350M uses LIVs and GQA, it drastically reduces KV cache size, boosting throughput. On a single NVIDIA H100 GPU, the model can reach a throughput of 40.4K output tokens per second at high concurrency.
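The KV-cache savings can be quantified with back-of-the-envelope arithmetic. The head counts and dimensions below are illustrative assumptions (the article does not publish them); the structural point is that only 6 of the hybrid model’s blocks keep a KV cache at all, versus all layers in a pure-attention model:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per=2):
    # Factor of 2 covers both the key and the value tensors.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per

# Illustrative assumptions: head_dim=64, 4 KV heads (GQA),
# 32,768-token context, fp16 (2-byte) cache entries.
seq, hd, kvh = 32768, 64, 4

full   = kv_cache_bytes(layers=16, kv_heads=kvh, head_dim=hd, seq_len=seq)
hybrid = kv_cache_bytes(layers=6,  kv_heads=kvh, head_dim=hd, seq_len=seq)

print(f"full attention: {full / 2**20:.0f} MiB")
print(f"hybrid (6 GQA): {hybrid / 2**20:.0f} MiB ({hybrid / full:.0%} of full)")
```

Under these assumptions the hybrid cache is 6/16 the size of an equivalent all-attention stack, and the LIV convolution blocks contribute only a small constant-size state on top of that.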

The Liquid AI team reports device-specific low-memory inference results that make local deployment viable:

  • Snapdragon 8 Elite NPU: 169MB peak memory using RunAnywhere Q4.
  • Snapdragon GPU: 81MB peak memory using RunAnywhere Q4.
  • Raspberry Pi 5: 300MB using Cactus Engine int8.

Key Takeaways

  • High intelligence density: By training a 350M-parameter model on 28 trillion tokens, the Liquid AI team achieved an extremely high 80,000:1 token-to-parameter ratio, allowing it to outperform models more than twice its size on several benchmarks.
  • Hybrid LIV architecture: The model departs from pure Transformers by combining Linear Input-Varying systems (LIVs) with a small number of Grouped Query Attention (GQA) blocks, significantly reducing the memory overhead of the KV cache.
  • Edge-first efficiency: It is designed for local deployment with a 32k context window and a remarkably low memory footprint, reaching as little as 81MB on mobile GPUs and 169MB on NPUs via specialized inference engines.
  • Specialized agentic capability: The model is highly optimized for instruction following (IFEval: 76.96) and tool use, though it is explicitly not recommended for complex coding, mathematics, or creative writing.
  • Massive throughput: The architectural efficiency enables high-speed serving, processing up to 40.4K output tokens per second on a single H100, making it ideal for high-volume data extraction and real-time classification.

Check out the technical details and model weights.

