transformer_lens.model_bridge.supported_architectures.olmo2 module
OLMo 2 architecture adapter.
- class transformer_lens.model_bridge.supported_architectures.olmo2.Olmo2ArchitectureAdapter(cfg: Any)
Bases: ArchitectureAdapter
Architecture adapter for OLMo 2 models.
OLMo 2 uses a post-norm architecture with RMSNorm, Q/K normalization in attention, rotary position embeddings (RoPE), and a gated MLP (SwiGLU). Key differences from pre-norm models like Llama (a short sketch follows this list):
Post-norm: RMSNorm is applied AFTER attention and AFTER MLP, not before. ln1 maps to post_attention_layernorm, ln2 maps to post_feedforward_layernorm.
Q/K normalization: Per-head RMSNorm applied to queries and keys after projection.
No biases on any projections.
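A minimal sketch of the post-norm ordering described above, for contrast with a pre-norm block (where x = x + attn(ln1(x))). The module names (attn, mlp, ln1, ln2) mirror the bridge's component names, but the class itself is purely illustrative and not part of this adapter:

```python
import torch
import torch.nn as nn

class PostNormBlockSketch(nn.Module):
    """Illustrative OLMo 2-style block: RMSNorm is applied AFTER each sublayer."""

    def __init__(self, attn: nn.Module, mlp: nn.Module, d_model: int):
        super().__init__()
        self.attn = attn                 # attention (with per-head Q/K RMSNorm inside)
        self.mlp = mlp                   # gated (SwiGLU) MLP
        self.ln1 = nn.RMSNorm(d_model)   # post_attention_layernorm (requires PyTorch >= 2.4)
        self.ln2 = nn.RMSNorm(d_model)   # post_feedforward_layernorm

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Post-norm: the residual stream receives the *normalized* sublayer output.
        x = x + self.ln1(self.attn(x))
        x = x + self.ln2(self.mlp(x))
        return x
```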
Optional Parameters (may not exist in state_dict; a handling sketch follows this list):
blocks.{i}.attn.b_Q - No bias on query projection
blocks.{i}.attn.b_K - No bias on key projection
blocks.{i}.attn.b_V - No bias on value projection
blocks.{i}.attn.b_O - No bias on output projection
blocks.{i}.mlp.b_in - No bias on MLP up_proj
blocks.{i}.mlp.b_gate - No bias on MLP gate_proj
blocks.{i}.mlp.b_out - No bias on MLP down_proj
blocks.{i}.ln1.b - RMSNorm has no bias
blocks.{i}.ln2.b - RMSNorm has no bias
ln_final.b - RMSNorm has no bias
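Because these keys may be absent, code that reads an OLMo 2 state_dict should not assume they exist. A minimal sketch, assuming a hypothetical helper (not part of this module's API) that falls back to zeros when a bias key is missing:

```python
import torch

def get_optional_bias(state_dict: dict, key: str, shape: tuple, dtype=torch.float32) -> torch.Tensor:
    """Return the bias tensor if present, otherwise zeros of the expected shape.

    Hypothetical helper for illustration only.
    """
    if key in state_dict:
        return state_dict[key]
    return torch.zeros(shape, dtype=dtype)

# Example: OLMo 2 defines no attention biases, so this returns zeros.
# b_q = get_optional_bias(sd, "blocks.0.attn.b_Q", (n_heads, d_head))
```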
- __init__(cfg: Any) → None
Initialize the OLMo 2 architecture adapter.
- setup_component_testing(hf_model: Any, bridge_model: Any = None) → None
Set up rotary embedding references for OLMo 2 component testing.
OLMo 2 uses RoPE (Rotary Position Embeddings). We set the rotary_emb reference on all attention bridge instances for component testing.
We also force the HF model to use “eager” attention to match the bridge’s implementation. The bridge uses “eager” to support output_attentions for hooks.
- Parameters:
hf_model – The HuggingFace OLMo 2 model instance
bridge_model – The TransformerBridge model (if available)
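As a rough illustration of the setup described above (a sketch only, not the adapter's actual implementation; the attribute paths config._attn_implementation, model.model.rotary_emb, and blocks[i].attn.rotary_emb are assumptions that should be checked against the installed transformers and TransformerLens versions):

```python
def setup_component_testing_sketch(hf_model, bridge_model=None):
    # Force eager attention so output_attentions (needed by the bridge's hooks) is supported.
    hf_model.config._attn_implementation = "eager"

    if bridge_model is not None:
        # Share the HF model's rotary embedding module with each attention bridge
        # so component tests apply identical RoPE frequencies.
        rotary_emb = hf_model.model.rotary_emb
        for block in bridge_model.blocks:
            block.attn.rotary_emb = rotary_emb
```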