LFM2-350M is Liquid AI's smallest text model, designed for edge devices with strict memory and compute constraints. It delivers surprisingly strong performance for its size, making it well suited to low-latency applications.

Specifications

Property        Value
Parameters      350M
Context Length  32K tokens
Architecture    LFM2 (Dense)

Ultra-Light: Minimal memory and compute footprint

Low Latency: Fastest inference in the LFM family

Edge-Ready: Runs on IoT and embedded devices
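The weight memory of a dense model can be estimated from its parameter count and numeric precision. A rough back-of-the-envelope sketch (weights only, ignoring activations and KV cache; the quantization widths are illustrative, not official release formats):

```python
def model_memory_gb(n_params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

N = 350_000_000  # LFM2-350M parameter count

# Typical storage widths per parameter.
for label, width in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label:>9}: ~{model_memory_gb(N, width):.2f} GB")
```

At 16-bit precision the weights alone fit in well under 1 GB, which is why a model this size is practical on embedded hardware.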

Quick Start

Install:
pip install transformers torch accelerate
Download & Run:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub.
# device_map="auto" places weights on GPU if one is available, else CPU.
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-350M")

# Format the conversation with the model's chat template and move the
# resulting token IDs to the model's device.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is machine learning?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate up to 256 new tokens and decode the full sequence.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
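The prompt and the generated tokens share the model's context window, so `max_new_tokens` should be sized against the prompt length. A minimal helper (my own illustrative names, assuming 32K means 32,768 tokens):

```python
CONTEXT_LENGTH = 32_768  # LFM2-350M context window, assuming 32K = 32,768 tokens

def max_new_tokens_for(prompt_tokens: int, context_length: int = CONTEXT_LENGTH) -> int:
    """Largest generation budget that keeps prompt + output within the window."""
    return max(context_length - prompt_tokens, 0)

# Example: a 30,000-token prompt leaves room for 2,768 new tokens.
print(max_new_tokens_for(30_000))
```

In practice you would measure the prompt length with `input_ids.shape[-1]` from the snippet above before choosing the generation budget.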