ViTO: Vision Transformer-Operator

Oded Ovadia, Adar Kahana, Panos Stinis, Eli Turkel, Dan Givoli, George Em Karniadakis

Research output: Contribution to journal › Article › peer-review

Abstract

We combine vision transformers with operator learning to solve diverse inverse problems described by partial differential equations (PDEs). Our approach, named Vision Transformer-Operator (ViTO), couples a U-Net-based architecture with a vision transformer. We apply ViTO to inverse PDE problems of increasing complexity, including the wave equation, the Navier–Stokes equations, and the Darcy equation. We focus on the more challenging super-resolution setting, in which the input data for the inverse problem are at a significantly coarser resolution than the output. In accuracy, the results are comparable to or exceed the leading operator-network benchmarks. Furthermore, ViTO's architecture has a small number of trainable parameters (less than 10% of the leading competitor's), resulting in a speed-up of over 5 times when averaged over the various test cases.
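To make the architectural idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a U-Net-style encoder/decoder with a vision-transformer bottleneck and a final upsampling step to mimic the super-resolution setting. It is not the authors' ViTO implementation: the module names (ToyViTO, ViTBottleneck), layer sizes, attention settings, and the 2x output factor are illustrative assumptions only, meant to show how convolutional down/up-sampling and global self-attention can be combined for a coarse-to-fine inverse mapping.

# A toy, hypothetical sketch (not the authors' code): a small U-Net-style
# encoder/decoder with a vision-transformer bottleneck, mapping a coarsely
# sampled field to a finer-resolution output. All names and hyperparameters
# below are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )


class ViTBottleneck(nn.Module):
    # Treats each spatial location of the coarse feature map as a token and
    # applies standard self-attention, giving the network global context.
    def __init__(self, channels, num_heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ToyViTO(nn.Module):
    # Convolutional encoder -> transformer bottleneck -> convolutional decoder,
    # with a skip connection and a final 2x upsampling so the output is finer
    # than the input (the super-resolution setting described in the abstract).
    def __init__(self, in_ch=1, out_ch=1, width=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, width)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(width, 2 * width)
        self.vit = ViTBottleneck(2 * width)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(3 * width, width)  # skip connection from enc1
        self.head = nn.Conv2d(width, out_ch, 1)

    def forward(self, x):
        s1 = self.enc1(x)                         # (B, width, H, W)
        z = self.enc2(self.pool(s1))              # (B, 2*width, H/2, W/2)
        z = self.vit(z)                           # global attention at the bottleneck
        z = self.up(z)                            # back to the input resolution
        z = self.dec(torch.cat([z, s1], dim=1))
        return self.head(self.up(z))              # 2x finer than the input


if __name__ == "__main__":
    model = ToyViTO()
    coarse_obs = torch.randn(4, 1, 32, 32)        # e.g. coarsely sampled solution data
    print(model(coarse_obs).shape)                # torch.Size([4, 1, 64, 64])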

Original language: English
Article number: 117109
Journal: Computer Methods in Applied Mechanics and Engineering
Volume: 428
DOIs
State: Published - 1 Aug 2024

Keywords

  • Deep learning
  • Inverse problems
  • Scientific machine learning
  • Super-resolution
  • Vision Transformers

All Science Journal Classification (ASJC) codes

  • Computational Mechanics
  • Mechanics of Materials
  • Mechanical Engineering
  • General Physics and Astronomy
  • Computer Science Applications
