Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers

Hila Chefer, Shir Gur, Lior Wolf

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review
