
Attention Manipulation (AtMan)

AtMan is our method for manipulating the attention paid to parts of an input sequence (a single token, a word, or a whole sentence) in order to steer the model's prediction in a different contextual direction. With AtMan, you can manipulate attention in both directions, either suppressing or amplifying an input span. If you would like to know more about the technical details of AtMan, please refer to the paper we published.

Suppressing

Attention manipulation can suppress the attention given to a token (or a set of tokens) in an input, which opens up many possibilities for prompt design. Without any attention manipulation, the completion for the following prompt looks like this:

Hello, my name is Lucas. I like soccer and basketball. Today I will play soccer.

With AtMan, you can suppress any part of the text in your prompt to obtain a different completion. In this example, we will suppress "soccer":

Hello, my name is Lucas. I like soccer and basketball. Today I will play basketball with my friends.

We can see that suppressing "soccer" led to a different completion.
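To make this concrete, here is a minimal sketch of how such a suppression could be expressed with the aleph_alpha_client Python SDK, which attaches TextControl objects to a prompt item; a factor below 1 reduces the attention paid to the covered span. The split between prompt and completion, the factor value, and the model name are illustrative assumptions, not fixed parts of the method.

```python
# Minimal sketch using the aleph_alpha_client Python SDK (assumes the
# client is installed and an API token is available).
from aleph_alpha_client import Client, CompletionRequest, Prompt, Text, TextControl

client = Client(token="AA_TOKEN")  # placeholder token

text = "Hello, my name is Lucas. I like soccer and basketball. Today I will play"

# Suppress the span "soccer": a factor below 1 reduces its attention,
# and a factor of 0.0 removes its influence almost entirely.
suppress_soccer = TextControl(
    start=text.index("soccer"),
    length=len("soccer"),
    factor=0.0,
)

request = CompletionRequest(
    prompt=Prompt([Text(text, controls=[suppress_soccer])]),
    maximum_tokens=16,
)
# "luminous-base" is an illustrative model name.
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```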

Amplifying

AtMan also allows you to amplify the attention given to a token. The completion for the following prompt without any attention manipulation looks like this:

I bought a game and a party hat. Tonight I will be wearing the party hat while playing the game.

Let's say that we really want to play the game tonight. In this case, we can amplify the attention paid to "game":

I bought a game and a party hat. Tonight I will be playing games with my friends.

Again, the attention manipulation led to a different completion.
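Amplification works through the same mechanism, just with a factor above 1. A minimal sketch under the same assumptions as above:

```python
from aleph_alpha_client import Client, CompletionRequest, Prompt, Text, TextControl

client = Client(token="AA_TOKEN")  # placeholder token

text = "I bought a game and a party hat. Tonight I will be"

# Amplify the span "game": a factor above 1 increases its attention.
# The exact factor is an illustrative choice; tune it for your use case.
amplify_game = TextControl(
    start=text.index("game"),
    length=len("game"),
    factor=1.5,
)

request = CompletionRequest(
    prompt=Prompt([Text(text, controls=[amplify_game])]),
    maximum_tokens=16,
)
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```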

AtMan for Embeddings, Evaluation, and Multimodal Input

AtMan can be used not only for text completions, but also for multimodal completions, (semantic) embeddings, and evaluation calls. If you would like to see what attention manipulation looks like for these different endpoints, please refer to our Tasks section.
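As a brief illustration, the same controls can be attached to the prompts of other request types. The sketch below (again assuming the aleph_alpha_client SDK; the representation choice, factor, and model name are illustrative) applies a suppression control inside a semantic embedding request:

```python
from aleph_alpha_client import (
    Client,
    Prompt,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
    Text,
    TextControl,
)

client = Client(token="AA_TOKEN")  # placeholder token

text = "Hello, my name is Lucas. I like soccer and basketball."
# Down-weight "soccer" so it contributes less to the embedding.
controls = [TextControl(start=text.index("soccer"), length=len("soccer"), factor=0.1)]

request = SemanticEmbeddingRequest(
    prompt=Prompt([Text(text, controls=controls)]),
    representation=SemanticRepresentation.Symmetric,
)
response = client.semantic_embed(request, model="luminous-base")
print(len(response.embedding))  # the embedding vector, with "soccer" suppressed
```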