transformer_lens.utils#
Utils.
This module contains varied utility functions used throughout the library.
- class transformer_lens.utils.LocallyOverridenDefaults(model, **overrides)#
Bases:
object
Context manager that allows temporary overriding of default values within a model. Once the context is exited, the default values are restored.
WARNING: This context manager must be used for any function/method that directly accesses default values which may be overridden by the user via the function/method’s arguments, e.g., model.cfg.default_prepend_bos and model.tokenizer.padding_side, which can be overridden by the prepend_bos and padding_side arguments, respectively, in to_tokens.
- __init__(model, **overrides)#
Initializes the context manager.
- Parameters:
model (HookedTransformer) – The model whose default values will be overridden.
overrides (dict) – Key-value pairs of properties to override and their new values.
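A minimal usage sketch, assuming prepend_bos is one of the supported overrides (as the warning above suggests) and that model is a HookedTransformer:
from transformer_lens import HookedTransformer
from transformer_lens.utils import LocallyOverridenDefaults

model = HookedTransformer.from_pretrained("gpt2")

with LocallyOverridenDefaults(model, prepend_bos=False):
    # Inside the context, to_tokens uses the overridden default and does not prepend BOS
    tokens = model.to_tokens("Hello world")
# On exit, model.cfg.default_prepend_bos is restored to its original value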
- class transformer_lens.utils.Slice(input_slice: Optional[Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int], List[int], Tensor, ndarray]] = None)#
Bases:
object
An object that represents a slice input. It can be a tuple of integers or a slice object.
We use a custom slice syntax because Python/Torch’s don’t let us reduce the number of dimensions:
Note that slicing with input_slice=None means do nothing, NOT add an extra dimension (use unsqueeze for that)
There are several modes:
- int: just index with that integer (decreases the number of dimensions)
- slice: input is a tuple converted to a slice ((k,) means :k, (k, m) means k:m, (k, m, n) means k:m:n)
- array: input is a list, tensor, or numpy array, converted to a numpy array, and we take the stack of values at those indices
- identity: input is None; leave the tensor unchanged.
Examples for dim=0:
- input_slice=0: tensor -> tensor[0]
- input_slice=(1, 5): tensor -> tensor[1:5]
- input_slice=(1, 5, 2): tensor -> tensor[1:5:2] (i.e. indexing with [1, 3])
- input_slice=[1, 4, 5]: tensor -> tensor[[1, 4, 5]] (i.e. the first axis now has length 3, taking the indices 1, 4, 5)
- input_slice is a Tensor: same as a list; the Tensor is assumed to be a 1D list of indices.
- __init__(input_slice: Optional[Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int], List[int], Tensor, ndarray]] = None)#
Modular component for slicing tensors. Can be used to slice a tensor along a given dimension, or to index into a tensor along a given dimension.
- Parameters:
input_slice (SliceInput) – The slice to apply. Can be an int, a tuple, a list, a torch.Tensor, or None. If None, do nothing.
- Raises:
ValueError – If the input_slice is not one of the above types.
- apply(tensor: Tensor, dim: int = 0) Tensor #
Applies this slice to the given dimension of the input tensor (supports positive and negative dimension syntax) and returns the sliced tensor.
- Parameters:
tensor (torch.Tensor) – The tensor to slice.
dim (int, optional) – The dimension to slice along. Supports positive and negative dimension syntax.
- Returns:
The sliced tensor.
- Return type:
torch.Tensor
- indices(max_ctx: Optional[int] = None) Union[ndarray, int32, int64] #
Returns the indices selected when this slice is applied to an axis of size max_ctx, as a numpy array (for integer slicing, e.g. array([4])).
- Parameters:
max_ctx (int, optional) – The size of the axis to slice. Only used if the slice is not an integer.
- Returns:
The indices that this slice will select.
- Return type:
np.ndarray
- Raises:
ValueError – If the slice is not an integer and max_ctx is not specified.
- slice: Union[int, slice, ndarray]#
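A short sketch of the documented modes (illustrative values only; the shapes noted in comments follow from the definitions above):
import torch
from transformer_lens.utils import Slice

x = torch.arange(12).reshape(3, 4)

Slice(1).apply(x, dim=0)          # int mode: x[1], shape [4]
Slice((0, 2)).apply(x, dim=0)     # slice mode: x[0:2], shape [2, 4]
Slice([0, 2, 3]).apply(x, dim=1)  # array mode: x[:, [0, 2, 3]], shape [3, 3]
Slice(None).apply(x, dim=0)       # identity mode: x unchanged, shape [3, 4]

Slice((0, 2)).indices(max_ctx=3)  # array([0, 1])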
- transformer_lens.utils.SliceInput#
An optional type alias for a slice input, as accepted by the Slice class above and used in the ActivationCache module.
- A SliceInput can be one of the following types:
int: an integer representing a single position
Tuple[int]: a one-element tuple (k,), representing the slice :k
Tuple[int, int]: a tuple of two integers representing a range of positions
Tuple[int, int, int]: a tuple of three integers representing a range of positions with a step size
List[int]: a list of integers representing multiple positions
torch.Tensor: a tensor containing a boolean mask or a list of indices to be selected from the input tensor
np.ndarray: a numpy array of indices, treated like a tensor
None: select the whole axis unchanged
SliceInput is used in the apply_ln_to_stack method in the ActivationCache module.
alias of Optional[Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int], List[int], Tensor, ndarray]]
- transformer_lens.utils.calc_fan_in_and_fan_out(tensor)#
Calculate the fan in and fan out of a tensor. We define it ourselves because Torch uses a different convention for weights (e.g. for an MLP they use d_out x d_in, and we use d_in x d_out, for attention they do (n_head d_head) x d_model, we do n_head x d_model x d_head).
- transformer_lens.utils.composition_scores(left: FactoredMatrix, right: FactoredMatrix, broadcast_dims=True) Union[Float[Tensor, '*leading_dims'], Float[Tensor, '*leading_dims_left_and_right']] #
See HookedTransformer.all_composition_scores for documentation.
- transformer_lens.utils.download_file_from_hf(repo_name, file_name, subfolder='.', cache_dir='/home/runner/.cache/huggingface/hub', force_is_torch=False, **kwargs)#
Helper function to download files from the HuggingFace Hub, from subfolder/file_name in repo_name, saving locally to cache_dir and returning the loaded file (if a json or Torch object) and the file path otherwise.
If it’s a Torch file without the “.pth” extension, set force_is_torch=True to load it as a Torch object.
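A hypothetical usage sketch; the repository and file names below are placeholders, not real artifacts:
from transformer_lens.utils import download_file_from_hf

# JSON files are parsed and returned as Python objects
config = download_file_from_hf("some-user/some-repo", "config.json")

# A torch checkpoint without a ".pth" extension needs force_is_torch=True to be loaded with torch.load
state_dict = download_file_from_hf("some-user/some-repo", "model_final.pt", force_is_torch=True)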
- transformer_lens.utils.gelu_fast(input: Float[Tensor, 'batch pos d_mlp']) Float[Tensor, 'batch pos d_mlp'] #
- transformer_lens.utils.gelu_new(input: Float[Tensor, 'batch pos d_mlp']) Float[Tensor, 'batch pos d_mlp'] #
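These are GPT-2-style tanh approximations to GELU, used by some pretrained models. For reference, a sketch of the standard approximation (an assumption about the exact form used here, not a verbatim copy of the library code):
import math
import torch

def gelu_new_sketch(x: torch.Tensor) -> torch.Tensor:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3)))
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))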
- transformer_lens.utils.get_act_name(name: str, layer: Optional[Union[int, str]] = None, layer_type: Optional[str] = None)#
Helper function to convert shorthand to an activation name. Pretty hacky, intended to be useful for short feedback loop hacking stuff together, more so than writing good, readable code. But it is deterministic!
Returns a name corresponding to an activation point in a TransformerLens model.
- Parameters:
name (str) – Takes in the name of the activation. This can be used to specify any activation name by itself: the code assumes the first sequence of digits in the name is the layer number, and anything after that is the layer type. Given only a word and a number, it leaves layer_type as is; given only a word, it leaves layer and layer_type as is. Examples: get_act_name('embed') = get_act_name('embed', None, None); get_act_name('k6') = get_act_name('k', 6, None); get_act_name('scale4ln1') = get_act_name('scale', 4, 'ln1').
layer (int, optional) – Takes in the layer number. Used for activations that appear in every block.
layer_type (string, optional) – Used to distinguish between activations that appear multiple times in one block.
Full Examples:
get_act_name('k', 6, 'a') == 'blocks.6.attn.hook_k'
get_act_name('pre', 2) == 'blocks.2.mlp.hook_pre'
get_act_name('embed') == 'hook_embed'
get_act_name('normalized', 27, 'ln2') == 'blocks.27.ln2.hook_normalized'
get_act_name('k6') == 'blocks.6.attn.hook_k'
get_act_name('scale4ln1') == 'blocks.4.ln1.hook_scale'
get_act_name('pre5') == 'blocks.5.mlp.hook_pre'
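In practice these names are typically used to index an activation cache; a short sketch, assuming model is a HookedTransformer:
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")
_, cache = model.run_with_cache("Hello world")

# Look up the layer-6 attention key activations by shorthand name
k6 = cache[utils.get_act_name("k", 6)]  # shape [batch, pos, n_heads, d_head]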
- transformer_lens.utils.get_attention_mask(tokenizer, tokens: Tensor, prepend_bos: bool) Tensor #
Computes the attention mask for the tokenized input. NOTE: Only the leftmost leading pads (when padding_side == left) or rightmost trailing pads (when padding_side == right) are considered as real pad tokens that should not be attended.
- Parameters:
tokenizer – The tokenizer used for tokenization.
tokens (torch.Tensor) – The tokenized input.
prepend_bos (bool) – If True, a BOS token is prepended to the input.
- Returns:
The attention mask for the input.
- Return type:
torch.Tensor
- transformer_lens.utils.get_corner(tensor, n=3)#
- transformer_lens.utils.get_cumsum_along_dim(tensor, dim, reverse=False)#
Returns the cumulative sum of a tensor along a given dimension.
- transformer_lens.utils.get_dataset(dataset_name: str, **kwargs) Dataset #
Returns a small HuggingFace dataset, for easy testing and exploration. Accesses several convenience datasets with 10,000 elements (dealing with the enormous 100GB - 2TB datasets is a lot of effort!). Note that it returns a dataset (ie a dictionary containing all the data), not a DataLoader (iterator over the data + some fancy features). But you can easily convert it to a DataLoader.
Each dataset has a ‘text’ field containing the relevant text; some also have several metadata fields.
Kwargs will be passed to the huggingface dataset loading function, e.g. “data_dir”
Possible inputs:
- openwebtext (approx the GPT-2 training data https://huggingface.co/datasets/openwebtext)
- pile (The Pile, a big mess of tons of diverse data https://pile.eleuther.ai/)
- c4 (Colossal, Cleaned, Common Crawl - basically openwebtext but bigger https://huggingface.co/datasets/c4)
- code (Codeparrot Clean, a Python code dataset https://huggingface.co/datasets/codeparrot/codeparrot-clean)
- c4_code (c4 + code - the 20K data points from c4-10k and code-10k. This is the mix of datasets used to train my interpretability-friendly models, though note that they are not in the correct ratio! There’s 10K texts for each, but about 22M tokens of code and 5M tokens of C4)
- wiki (Wikipedia, generated from the 20220301.en split of https://huggingface.co/datasets/wikipedia)
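A short usage sketch (note that this downloads the dataset on first use):
from transformer_lens.utils import get_dataset

dataset = get_dataset("openwebtext")   # a HuggingFace Dataset with a 'text' column
print(dataset[0]["text"][:200])        # first 200 characters of the first document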
- transformer_lens.utils.get_device()#
- transformer_lens.utils.get_input_with_manually_prepended_bos(tokenizer, input)#
Manually prepends the bos token to the input.
- Parameters:
tokenizer (AutoTokenizer) – The tokenizer to use for prepending the bos token.
input (Union[str, List[str]]) – The input to prepend the bos token to.
- Returns:
The input with the bos token manually prepended.
- Return type:
Union[str, List[str]]
- transformer_lens.utils.get_nested_attr(obj, attr_str)#
Retrieves a nested attribute from an object based on a dot-separated string.
For example, if attr_str is “a.b.c”, this function will return obj.a.b.c.
- Parameters:
obj (Any) – The object from which to retrieve the attribute.
attr_str (str) – A dot-separated string representing the attribute hierarchy.
- Returns:
The value of the nested attribute.
- Return type:
Any
- transformer_lens.utils.get_offset_position_ids(past_kv_pos_offset: int, attention_mask: Int[Tensor, 'batch offset_pos']) Int[Tensor, 'batch pos'] #
Returns the indices of non-padded tokens, offset by the position of the first attended token.
- transformer_lens.utils.get_tokenizer_with_bos(tokenizer)#
Returns the tokenizer initialized with add_bos_token=True. Such a tokenizer should be set as the default tokenizer because the tokenization of some tokenizers (e.g. LlamaTokenizer) differs depending on whether the BOS token is automatically or manually prepended.
- Parameters:
tokenizer (AutoTokenizer) – The tokenizer to initialize with add_bos_token=True.
- Returns:
The tokenizer initialized with add_bos_token=True.
- Return type:
AutoTokenizer
- transformer_lens.utils.get_tokens_with_bos_removed(tokenizer, tokens)#
Removes the bos token from the beginning of each sequence in tokens. The last dimension of tokens must be the sequence length.
- Parameters:
tokenizer (AutoTokenizer) – The tokenizer used to tokenize the input.
tokens (torch.Tensor) – The tokenized input.
- Returns:
The tokenized input with the bos token removed.
- Return type:
torch.Tensor
- transformer_lens.utils.init_kaiming_normal_(param, a=0, nonlinearity='relu', gain=1.0, mode='fan_in')#
Initializes the input tensor using the Kaiming initialization method.
Starting from a std 1 normal distribution, we scale the weights by c / sqrt(fan_in), where c = sqrt(2) if the params were immediately preceded by a relu and 1 for everything else.
As with torch, a is a hyperparameter for nonlinearity, if it takes one.
- transformer_lens.utils.init_kaiming_uniform_(param, a=0, nonlinearity='relu', gain=1.0, mode='fan_in')#
Initializes the input tensor using the Kaiming initialization method.
Starting from a std 1 uniform distribution, we scale the weights by c / sqrt(fan_in), where c = sqrt(2) if the params were immediately preceded by a relu and 1 for everything else.
As with torch, a is a hyperparameter for nonlinearity, if it takes one.
- transformer_lens.utils.init_xavier_normal_(param, gain=1.0)#
Initializes the input tensor using the Xavier initialization method.
- transformer_lens.utils.init_xavier_uniform_(param, gain=1.0)#
Initializes the input tensor using the Xavier initialization method.
- transformer_lens.utils.is_lower_triangular(x: Tensor) bool #
Checks if x is a lower triangular matrix.
- transformer_lens.utils.is_square(x: Tensor) bool #
Checks if x is a square matrix.
- transformer_lens.utils.keep_single_column(dataset: Dataset, col_name: str)#
Acts on a HuggingFace dataset to delete all columns apart from a single column name - useful when we want to tokenize and mix together different strings
- transformer_lens.utils.lm_accuracy(logits: Float[Tensor, 'batch pos d_vocab'], tokens: Int[Tensor, 'batch pos'], per_token: bool = False) Union[Float[Tensor, ''], Float[Tensor, 'batch pos']] #
Top-1 accuracy for language modelling. We measure the accuracy of the logits at predicting the NEXT token.
If per_token is True, returns the boolean for top 1 accuracy for each token in the batch. Note that this has size [batch, seq_len-1], as we cannot predict the first token.
- transformer_lens.utils.lm_cross_entropy_loss(logits: Float[Tensor, 'batch pos d_vocab'], tokens: Int[Tensor, 'batch pos'], attention_mask: Optional[Int[Tensor, 'batch pos']] = None, per_token: bool = False) Union[Float[Tensor, ''], Float[Tensor, 'batch pos']] #
Cross entropy loss for the language model, gives the loss for predicting the NEXT token.
- Parameters:
logits (torch.Tensor) – Logits. Shape [batch, pos, d_vocab]
tokens (torch.Tensor[int64]) – Input tokens. Shape [batch, pos]
attention_mask (torch.Tensor[int64], optional) – Attention mask. Shape [batch, pos]. Used to mask out padding tokens. Defaults to None.
per_token (bool, optional) – Whether to return the log probs predicted for the correct token, or the loss (ie mean of the predicted log probs). Note that the returned array has shape [batch, seq-1] as we cannot predict the first token (alternately, we ignore the final logit). Defaults to False.
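A sketch of computing next-token loss (and, with lm_accuracy above, accuracy) from a model’s logits, assuming model is a HookedTransformer:
import transformer_lens.utils as utils
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
logits = model(tokens)  # shape [batch, pos, d_vocab]

loss = utils.lm_cross_entropy_loss(logits, tokens)                             # scalar mean loss
loss_per_token = utils.lm_cross_entropy_loss(logits, tokens, per_token=True)   # shape [batch, pos-1]
accuracy = utils.lm_accuracy(logits, tokens)                                   # scalar top-1 accuracy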
- transformer_lens.utils.override_or_use_default_value(default_flag: Any, override: Optional[Any] = None) Any #
Determines which flag to return based on whether an overriding flag is provided. If a not-None overriding flag is provided, it is returned. Otherwise, the global flag is returned.
- transformer_lens.utils.print_gpu_mem(step_name='')#
- transformer_lens.utils.remove_batch_dim(tensor: Float[Tensor, '1 ...']) Float[Tensor, '...'] #
Removes the first dimension of a tensor if it is size 1, otherwise returns the tensor unchanged
- transformer_lens.utils.repeat_along_head_dimension(tensor: Float[Tensor, 'batch pos d_model'], n_heads: int, clone_tensor=True)#
- transformer_lens.utils.sample_logits(final_logits: Float[Tensor, 'batch d_vocab'], top_k: Optional[int] = None, top_p: Optional[float] = None, temperature: float = 1.0, freq_penalty: float = 0.0, tokens: Optional[Int[Tensor, 'batch pos']] = None) Int[Tensor, 'batch'] #
Sample from the logits, in order to generate text
final_logits has shape [batch, vocab_size]. We divide the logits by temperature before softmaxing and sampling: high temperature means more uniform, low means more argmax-like, and temperature = 0.0 is greedy sampling. We apply top_k and top_p filtering to the logits to encourage diversity: top_k = 10 means we only sample from the 10 most likely tokens; top_p = 0.9 means we only sample from the top 90% of probability mass and then renormalise the distribution. top_k and top_p are mutually exclusive. By default we apply neither and just sample from the full distribution.
The frequency penalty is a penalty on the probability of a token, proportional to the number of times it has been generated so far. This encourages the model to generate new tokens rather than repeating itself. It is a hyperparameter and should be tuned. It is applied to the logits before sampling. If it is non-zero, the tokens argument must be provided.
#! TODO: Finish testing all the edge cases here. Useful testing code: logits = torch.randn(4) print(logits) np.unique(np.array([sample_logits(logits, top_k=2).item() for i in range(1000)]), return_counts=True)
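A sketch of sampling a next token from the final-position logits, assuming model is a HookedTransformer:
import transformer_lens.utils as utils
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The capital of France is")
logits = model(tokens)           # shape [batch, pos, d_vocab]
final_logits = logits[:, -1, :]  # shape [batch, d_vocab]

next_token = utils.sample_logits(final_logits, top_k=10, temperature=0.7)  # shape [batch]
print(model.tokenizer.decode(next_token))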
- transformer_lens.utils.set_nested_attr(obj, attr_str, value)#
Sets a nested attribute of an object based on a dot-separated string.
For example, if attr_str is “a.b.c”, this function will set the value of obj.a.b.c to value.
- Parameters:
obj (Any) – The object on which to set the attribute.
attr_str (str) – A dot-separated string representing the attribute hierarchy.
value (Any) – The value to set for the nested attribute.
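A minimal sketch of set_nested_attr together with get_nested_attr above, using a throwaway SimpleNamespace object:
from types import SimpleNamespace
from transformer_lens.utils import get_nested_attr, set_nested_attr

obj = SimpleNamespace(a=SimpleNamespace(b=SimpleNamespace(c=1)))

get_nested_attr(obj, "a.b.c")      # 1
set_nested_attr(obj, "a.b.c", 42)  # obj.a.b.c is now 42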
- transformer_lens.utils.solu(input: Float[Tensor, 'batch pos d_mlp']) Float[Tensor, 'batch pos d_mlp'] #
SoLU activation function as described by https://transformer-circuits.pub/2022/solu/index.html.
The associated LayerNorm is implemented by the MLP class, not by this function.
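For reference, a minimal sketch of the SoLU nonlinearity from the linked paper, x * softmax(x) over the d_mlp axis (the in-library version may differ in details):
import torch

def solu_sketch(x: torch.Tensor) -> torch.Tensor:
    # Elementwise product of the input with its softmax along the last (d_mlp) dimension
    return x * torch.softmax(x, dim=-1)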
- transformer_lens.utils.test_prompt(prompt: str, answer: Union[str, list[str]], model, prepend_space_to_answer: bool = True, print_details: bool = True, prepend_bos: Optional[bool] = None, top_k: int = 10) None #
Test if the Model Can Give the Correct Answer to a Prompt.
Intended for exploratory analysis. Prints out the performance on the answer (rank, logit, prob), as well as the top k tokens. Works for multi-token prompts and multi-token answers.
Warning:
This will print the results (it does not return them).
Examples:
>>> from transformer_lens import HookedTransformer, utils
>>> model = HookedTransformer.from_pretrained("tiny-stories-1M")
Loaded pretrained model tiny-stories-1M into HookedTransformer
>>> prompt = "Why did the elephant cross the"
>>> answer = "road"
>>> utils.test_prompt(prompt, answer, model)
Tokenized prompt: ['<|endoftext|>', 'Why', ' did', ' the', ' elephant', ' cross', ' the']
Tokenized answer: [' road']
Performance on answer token: Rank: 2 Logit: 14.24 Prob: 3.51% Token: | road|
Top 0th token. Logit: 14.51 Prob: 4.59% Token: | ground|
Top 1th token. Logit: 14.41 Prob: 4.18% Token: | tree|
Top 2th token. Logit: 14.24 Prob: 3.51% Token: | road|
Top 3th token. Logit: 14.22 Prob: 3.45% Token: | car|
Top 4th token. Logit: 13.92 Prob: 2.55% Token: | river|
Top 5th token. Logit: 13.79 Prob: 2.25% Token: | street|
Top 6th token. Logit: 13.77 Prob: 2.21% Token: | k|
Top 7th token. Logit: 13.75 Prob: 2.16% Token: | hill|
Top 8th token. Logit: 13.64 Prob: 1.92% Token: | swing|
Top 9th token. Logit: 13.46 Prob: 1.61% Token: | park|
Ranks of the answer tokens: [(' road', 2)]
- Parameters:
prompt – The prompt string, e.g. “Why did the elephant cross the”.
answer – The answer, e.g. “road”. Note that if you set prepend_space_to_answer to False, you need to think about whether the answer already includes a leading space (e.g. in this example the answer may really be “ road” if the prompt ends without a trailing space). If this is a list of strings, then we only look at the next-token completion, and we compare them all as possible model answers.
model – The model.
prepend_space_to_answer – Whether or not to prepend a space to the answer. Note this will only ever prepend a space if the answer doesn’t already start with one.
print_details – Print the prompt (as a string but broken up by token), answer and top k tokens (all with logit, rank and probability).
prepend_bos – Overrides self.cfg.default_prepend_bos if set. Whether to prepend the BOS token to the input (applicable when input is a string). Models generally learn to use the BOS token as a resting place for attention heads (i.e. a way for them to be “turned off”). This therefore often improves performance slightly.
top_k – Top k tokens to print details of (when print_details is set to True).
- Returns:
None (just prints the results directly).
- transformer_lens.utils.to_numpy(tensor)#
Helper function to convert a tensor to a numpy array. Also works on lists, tuples, and numpy arrays.
- transformer_lens.utils.tokenize_and_concatenate(dataset: Dataset, tokenizer: AutoTokenizer, streaming: bool = False, max_length: int = 1024, column_name: str = 'text', add_bos_token: bool = True, num_proc: int = 10) Dataset #
Helper function to tokenize and concatenate a dataset of text. This converts the text to tokens, concatenates them (separated by EOS tokens) and then reshapes them into a 2D array of shape (____, sequence_length), dropping the last batch. Tokenizers are much faster if parallelised, so we chop the text into 20 chunks, feed them into the tokenizer in parallel with padding, then remove the padding at the end.
This tokenization is useful for training language models, as it allows us to efficiently train on a large corpus of text of varying lengths (without, eg, a lot of truncation or padding). Further, for models with absolute positional encodings, this avoids privileging early tokens (eg, news articles often begin with CNN, and models may learn to use early positional encodings to predict these)
- Parameters:
dataset (Dataset) – The dataset to tokenize, assumed to be a HuggingFace text dataset.
tokenizer (AutoTokenizer) – The tokenizer. Assumed to have a bos_token_id and an eos_token_id.
streaming (bool, optional) – Whether the dataset is being streamed. If True, avoids using parallelism. Defaults to False.
max_length (int, optional) – The length of the context window of the sequence. Defaults to 1024.
column_name (str, optional) – The name of the text column in the dataset. Defaults to ‘text’.
add_bos_token (bool, optional) – Whether to prepend a BOS token to the start of each sequence. Defaults to True.
num_proc (int, optional) – Number of processes to use for tokenization. Defaults to 10.
- Returns:
Returns the tokenized dataset, as a dataset of tensors, with a single column called “tokens”
- Return type:
Dataset
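A sketch of turning one of the convenience text datasets into fixed-length token sequences, assuming a GPT-2 tokenizer (this downloads data on first use):
from transformer_lens import HookedTransformer
from transformer_lens.utils import get_dataset, tokenize_and_concatenate

model = HookedTransformer.from_pretrained("gpt2")
raw_dataset = get_dataset("openwebtext")
tokenized = tokenize_and_concatenate(raw_dataset, model.tokenizer, max_length=128)

row = tokenized[0]["tokens"]  # one fixed-length sequence of token ids from the "tokens" column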
- transformer_lens.utils.transpose(tensor: Float[Tensor, '... a b']) Float[Tensor, '... b a'] #
Utility to swap the last two dimensions of a tensor, regardless of the number of leading dimensions