LFM2.5-1.2B-Thinking

LFM2.5-1.2B-Thinking is optimized for reasoning tasks, delivering strong performance on math, logic, and multi-step problem solving. It builds on the LFM2.5 architecture with specialized training for chain-of-thought reasoning.

Specifications

Property        Value
Parameters      1.2B
Context Length  32K tokens
Architecture    LFM2.5 (Dense)

Key Features

Math & Logic: strong arithmetic and logical reasoning
Chain-of-Thought: step-by-step problem decomposition
Fine-tunable: TRL-compatible (SFT, DPO, GRPO)
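Since the model is TRL-compatible, supervised fine-tuning usually starts by converting your data into the chat-message format that `SFTTrainer` expects. The sketch below shows one way to do that; the dataset, field names, and config values are illustrative assumptions, not values prescribed by this model card.

```python
# Sketch of supervised fine-tuning (SFT) with TRL.
# Dataset choice, field names, and hyperparameters are illustrative assumptions.

def to_messages(example):
    """Convert a question/answer pair into the chat format SFTTrainer expects."""
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

def train():
    # Heavy step: downloads the model and dataset. Call explicitly to launch.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("openai/gsm8k", "main", split="train").map(to_messages)
    trainer = SFTTrainer(
        model="LiquidAI/LFM2.5-1.2B-Thinking",
        train_dataset=dataset,
        args=SFTConfig(output_dir="lfm25-thinking-sft"),
    )
    trainer.train()
```

The same message-formatted dataset can be reused for preference tuning (DPO, GRPO) by swapping in the corresponding TRL trainer.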

Quick Start

Install:
pip install transformers torch
Download & Run:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; device_map="auto" places weights on GPU if available.
model = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2.5-1.2B-Thinking", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Thinking")

# Format the prompt with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 15% of 240?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode the response.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
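Reasoning-tuned models typically emit their chain of thought before the final answer. Assuming the output wraps the reasoning trace in `<think>…</think>` markers (an assumption — inspect a decoded response to confirm the exact format), a small helper can separate the trace from the answer:

```python
import re

def split_reasoning(text):
    """Split model output into (reasoning, answer).

    Assumes the reasoning trace is wrapped in <think>...</think> tags;
    if no tags are found, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Example with a hypothetical decoded output:
reasoning, answer = split_reasoning(
    "<think>15% of 240 = 0.15 * 240 = 36</think>The answer is 36."
)
```

Keeping only the `answer` part is useful when chaining the model into a pipeline where the intermediate reasoning should not be shown to users.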