transformer_lens.components.bert_block#
Hooked Transformer BERT Block Component.
This module contains the BertBlock component.
- class transformer_lens.components.bert_block.BertBlock(cfg: HookedTransformerConfig)#
Bases: Module

BERT Block. Similar to the TransformerBlock, except that the LayerNorms are applied after the attention and MLP, rather than before.
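The snippet below is a minimal usage sketch, not part of the generated documentation: it instantiates a BertBlock from a toy HookedTransformerConfig and runs one forward pass. The config values are illustrative, and attention_dir="bidirectional" is assumed here for BERT-style (non-causal) attention.

```python
import torch
from transformer_lens import HookedTransformerConfig
from transformer_lens.components import BertBlock

# Toy config; values are illustrative (BERT-base would use
# d_model=768, n_heads=12, n_layers=12, act_fn="gelu").
cfg = HookedTransformerConfig(
    n_layers=1,
    d_model=64,
    n_ctx=16,
    d_head=16,
    n_heads=4,
    act_fn="gelu",
    attention_dir="bidirectional",  # BERT attends in both directions
)

block = BertBlock(cfg)
resid_pre = torch.randn(2, 16, cfg.d_model)  # [batch, pos, d_model]
resid_post = block(resid_pre)  # call the instance so hooks run (see Note below)
print(resid_post.shape)  # torch.Size([2, 16, 64])
```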
- forward(resid_pre: Float[Tensor, 'batch pos d_model'], additive_attention_mask: Float[Tensor, 'batch 1 1 pos'] | None = None) → Float[Tensor, 'batch pos d_model']#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
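For the optional additive_attention_mask argument, the sketch below illustrates one common convention (an assumption based on the signature, not taken from the TransformerLens source): zeros at positions attention may use and a large negative value at padding positions, broadcastable to [batch, 1, 1, pos].

```python
import torch

# Hypothetical helper (not part of TransformerLens) building a mask from a
# [batch, pos] padding indicator: 1 = real token, 0 = padding. Padded
# positions receive -1e9 so their post-softmax attention weights are ~0.
def make_additive_mask(padding_mask: torch.Tensor) -> torch.Tensor:
    mask = (1.0 - padding_mask.float()) * -1e9  # 0.0 where attend, -1e9 where pad
    return mask[:, None, None, :]  # broadcastable to [batch, 1, 1, pos]

padding_mask = torch.tensor([[1, 1, 1, 0]])
additive_mask = make_additive_mask(padding_mask)

# As above, call the module instance rather than .forward():
# resid_post = block(resid_pre, additive_attention_mask=additive_mask)
```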