Variational Monte Carlo with Large Patched Transformers
Date
Friday, November 29, 2024, 1:30 pm - 2:30 pm
Location
STI A
Speaker
Prof. Stefanie Czischek
University of Ottawa
Abstract
Large language models, like transformers, have recently demonstrated immense power in text and image generation. This success is driven by the ability to capture long-range correlations between elements in a sequence. The same feature makes the transformer a powerful wavefunction ansatz that addresses the challenge of describing correlations in simulations of qubit systems. In this talk, I consider two-dimensional Rydberg atom arrays to demonstrate that transformers reach higher accuracies than conventional recurrent neural networks for variational ground state searches. I further introduce large, patched transformer models, which operate on sequences of large atom patches, and show that this architecture significantly accelerates the simulations.
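The patching idea mentioned in the abstract groups neighbouring atoms into blocks, so that each transformer token represents a patch of sites rather than a single qubit and the sequence the model processes becomes much shorter. The snippet below is a minimal illustrative sketch of that patching step for a square Rydberg array, written in plain NumPy; the function name `patch_configuration` and the `patch_size` parameter are placeholders for illustration, not the speaker's implementation.

```python
import numpy as np

def patch_configuration(sigma, patch_size):
    """Split a 2D occupation configuration (L x L array of 0/1 Rydberg
    occupations) into a sequence of flattened patches.

    Each patch of patch_size x patch_size sites becomes one token, so a
    transformer sees (L*L) / patch_size**2 tokens instead of L*L sites.
    """
    L = sigma.shape[0]
    assert L % patch_size == 0, "lattice size must be divisible by patch size"
    n = L // patch_size
    # Reshape to (row-block, row-in-block, col-block, col-in-block), bring the
    # two patch-grid axes to the front, then flatten each patch into a token.
    patches = (sigma.reshape(n, patch_size, n, patch_size)
                    .transpose(0, 2, 1, 3)
                    .reshape(n * n, patch_size * patch_size))
    return patches

# Example: a 4x4 array of atoms split into 2x2 patches -> 4 tokens of length 4.
rng = np.random.default_rng(0)
sigma = rng.integers(0, 2, size=(4, 4))
tokens = patch_configuration(sigma, patch_size=2)
print(tokens.shape)  # (4, 4)
```

In an autoregressive ansatz of this kind, the transformer models the probability of each patch conditioned on the previously generated patches, and the (positive) ground-state amplitudes are recovered as the square root of that product of conditionals; this sketch covers only the data layout, not the model or the variational optimization.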
Timbits, coffee, and tea will be served in STI A before the colloquium.
Upcoming Events
Friday, Oct 31 - Departmental Colloquium - Kristin Poduska
Friday, Nov 07 - Departmental Colloquium - The Search for Dark Matter and Resolving the DAMA Conundrum
Friday, Nov 14 - Departmental Colloquium - Dominique Segura-Cox