# Publications by Type: Journal Article

We present a deep neural network (DNN)-based model, HubbardNet, that variationally finds the ground-state and excited-state wavefunctions of the one-dimensional and two-dimensional Bose-Hubbard model. Using this model for a square lattice with M sites, we obtain the energy spectrum as an analytical function of the on-site Coulomb repulsion, U, and the total number of particles, N, from a single training, bypassing the need to solve a new Hamiltonian for each set of values (U, N). We show that the DNN-parametrized solutions are in excellent agreement with results from exact diagonalization of the Hamiltonian, and that they outperform exact diagonalization in computational scaling. These advantages suggest that our model is promising for efficient and accurate computation of exact phase diagrams of many-body lattice Hamiltonians.
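The abstract benchmarks HubbardNet against exact diagonalization. For reference, here is a minimal exact-diagonalization sketch for the 1D Bose-Hubbard Hamiltonian on an open chain; the function name and parameter conventions (hopping `t`, on-site repulsion `U`) are our own, and this brute-force construction is only feasible for small M and N:

```python
import itertools
import math
import numpy as np

def bose_hubbard_eigs(M, N, t=1.0, U=1.0):
    """Exact spectrum of the 1D open-chain Bose-Hubbard model: M sites, N bosons."""
    # Occupation-number basis: every way to distribute N bosons over M sites
    basis = [s for s in itertools.product(range(N + 1), repeat=M) if sum(s) == N]
    index = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, s in enumerate(basis):
        # On-site repulsion: (U/2) * sum_j n_j (n_j - 1)
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)
        # Nearest-neighbour hopping: -t * (b_j^dag b_{j+1} + h.c.)
        for j in range(M - 1):
            for src, dst in ((j, j + 1), (j + 1, j)):
                if s[src] > 0:
                    target = list(s)
                    target[src] -= 1
                    target[dst] += 1
                    amp = math.sqrt(s[src] * (s[dst] + 1))  # bosonic matrix element
                    H[index[tuple(target)], i] += -t * amp
    return np.linalg.eigvalsh(H)  # eigenvalues sorted ascending

spectrum = bose_hubbard_eigs(M=4, N=3, t=1.0, U=4.0)
```

The Hilbert-space dimension grows combinatorially, as (M+N-1 choose N), which is exactly the scaling bottleneck that a single-training DNN parametrization is designed to sidestep.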

Quantum control is a ubiquitous research field that has enabled physicists to delve into the dynamics and features of quantum systems. In addition to steering systems, quantum control has delivered powerful applications for various atomic, optical, mechanical, and solid-state systems. In recent years, traditional control techniques based on optimization processes have been translated into efficient artificial intelligence algorithms. Here, we introduce a computational method for optimal quantum control problems via physics-informed neural networks (PINNs). We apply our methodology to open quantum systems by efficiently solving the state-to-state transfer problem with high probability, short evolution times, and minimal control power. Furthermore, we illustrate the flexibility of PINNs in solving the same problem under changes in parameters and initial conditions, showing advantages over standard control techniques.
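The PINN machinery itself is beyond an abstract-sized snippet, but the underlying state-to-state transfer objective can be illustrated on a closed two-level system. The sketch below (our own construction, not the paper's method) evolves a qubit under a constant drive H = Ω σ_x / 2 and scans the amplitude for maximal transfer probability; the optimum recovers the familiar π-pulse condition Ω T = π:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-x operator

def transfer_probability(omega, T=1.0, steps=200):
    """P(|0> -> |1>) after time T under H = omega * sigma_x / 2."""
    dt = T / steps
    # Exact one-step propagator: exp(-i * omega * sigma_x * dt / 2)
    U = np.cos(omega * dt / 2) * np.eye(2) - 1j * np.sin(omega * dt / 2) * SX
    psi = np.array([1.0, 0.0], dtype=complex)   # start in |0>
    for _ in range(steps):
        psi = U @ psi
    return abs(psi[1]) ** 2                      # population of |1>

# Crude stand-in for the optimizer: scan constant drive amplitudes
amplitudes = np.linspace(0.1, 6.0, 600)
best_omega = max(amplitudes, key=transfer_probability)
```

A PINN replaces this brute-force scan with a network that represents the time-dependent control and state, penalized by the dynamical equations and the transfer objective.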

There is a wave of interest in using physics-informed neural networks for solving differential equations. Most existing methods are based on feed-forward networks, while recurrent neural network solvers have not been extensively explored. We introduce a reservoir computing (RC) architecture, an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs). We suggest an approach to calculate time derivatives of recurrent neural network outputs without using back-propagation. The internal weights of an RC are fixed, while only a linear output layer is trained, yielding efficient training. However, RC performance strongly depends on finding the optimal hyper-parameters, which is a computationally expensive process. We use Bayesian optimization to discover optimal sets in a high-dimensional hyper-parameter space efficiently and numerically show that one set is robust and can be transferred to solve an ODE for different initial conditions and time ranges. A closed-form formula for the optimal output weights is derived to solve first-order linear equations in a one-shot, backpropagation-free learning process. We extend the RC approach to nonlinear systems of ODEs using a hybrid optimization method consisting of gradient descent and Bayesian optimization. Evaluation on linear and nonlinear systems of equations demonstrates the efficiency of the RC ODE solver.
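The closed-form output-weight idea can be seen in miniature. The paper derives it for a true echo-state reservoir with analytically computed time derivatives; the sketch below substitutes non-recurrent random tanh features (so derivatives follow from the chain rule) and solves y' = -λy, y(0) = 1 in a single least-squares solve over the ODE residual, with no gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, lam = 200, 1.0                        # solve y' = -lam * y, y(0) = 1
a = rng.normal(scale=2.0, size=n_feat)        # random feature slopes
b = rng.normal(scale=1.0, size=n_feat)        # random feature offsets

def feats(t):
    return np.tanh(np.outer(t, a) + b)        # h_i(t) = tanh(a_i * t + b_i)

def dfeats(t):
    return a * (1.0 - feats(t) ** 2)          # h_i'(t) via the chain rule

t = np.linspace(0.0, 2.0, 100)
# ODE residual y' + lam*y = (h' + lam*h) @ w should vanish at collocation points
A = np.vstack([dfeats(t) + lam * feats(t),
               10.0 * feats(np.array([0.0]))])   # weighted initial-condition row
rhs = np.append(np.zeros(len(t)), 10.0)          # residual targets, then IC target
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)      # one-shot linear solve
y = feats(t) @ w
max_err = np.max(np.abs(y - np.exp(-lam * t)))   # compare against exp(-lam*t)
```

Because the unknown solution is linear in the output weights, the whole physics-informed loss for a linear first-order ODE reduces to a linear least-squares problem, which is what makes the backpropagation-free training possible.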

There has been a wave of interest in applying machine learning to study dynamical systems. We present a Hamiltonian neural network that solves the differential equations governing dynamical systems. This is an equation-driven machine learning method in which the optimization process of the network depends solely on the predicted functions, without using any ground-truth data. The model learns solutions that satisfy, up to an arbitrarily small error, Hamilton's equations and therefore conserve the Hamiltonian invariants. The choice of an appropriate activation function drastically improves the predictability of the network. Moreover, an error analysis shows that the numerical errors depend on the overall network performance. The Hamiltonian network is then employed to solve the equations for the nonlinear oscillator and the chaotic Hénon-Heiles dynamical system. In both systems, a symplectic Euler integrator requires two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same order of numerical error in the predicted phase-space trajectories.
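For context on the symplectic Euler baseline mentioned above, here is that integrator for a quartic ("nonlinear") oscillator with H = p²/2 + q²/2 + q⁴/4 (our choice of potential, not necessarily the paper's); its hallmark is that the energy error stays bounded rather than drifting:

```python
import numpy as np

def symplectic_euler(q0, p0, dt, steps):
    """Integrate H(q, p) = p^2/2 + q^2/2 + q^4/4 with the symplectic Euler scheme."""
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * (q + q**3)   # p-update uses the current q (force = -dH/dq)
        q = q + dt * p            # q-update uses the *new* p (dH/dp)
        traj.append((q, p))
    return np.array(traj)

def energy(q, p):
    return 0.5 * p**2 + 0.5 * q**2 + 0.25 * q**4

traj = symplectic_euler(1.0, 0.0, dt=1e-3, steps=20000)
drift = np.max(np.abs(energy(traj[:, 0], traj[:, 1]) - energy(1.0, 0.0)))
```

The bounded energy oscillation (of order dt) is what makes symplectic Euler a meaningful yardstick: the Hamiltonian network is compared against an integrator that already respects the conserved quantities.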

Population-wide vaccination is critical for containing the SARS-CoV-2 (Covid-19) pandemic when combined with restrictive and preventive measures. In this study we introduce SAIVR, a mathematical model able to forecast the Covid-19 epidemic evolution during the vaccination campaign. SAIVR extends the widely used Susceptible-Infectious-Removed (SIR) model by adding the Asymptomatic (A) and Vaccinated (V) compartments. The model contains several parameters and initial conditions that are estimated by employing a semi-supervised machine learning procedure. After training an unsupervised neural network to solve the SAIVR differential equations, a supervised framework then estimates the optimal conditions and parameters that best fit recent infectious curves of 27 countries. Instructed by these results, we performed an extensive study on the temporal evolution of the pandemic under varying values of daily roll-out rates, vaccine efficacy, and a broad range of societal vaccine hesitancy/denial levels. The concept of herd immunity is questioned by studying future scenarios involving different vaccination efforts and more infectious Covid-19 variants.
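The abstract does not reproduce the SAIVR equations, so the compartment structure below is our own plausible reading, with purely illustrative rates (not the paper's fitted values): susceptibles are infected at rate β by both I and A, a fraction 1-f of new infections stays asymptomatic, and vaccination removes susceptibles at per-capita rate ν. A forward-Euler step preserves the total population by construction:

```python
def saivr_step(state, dt, beta=0.3, f=0.6, gamma=0.1, gamma_a=0.15, nu=0.005):
    """One forward-Euler step of an illustrative SAIVR system (fractions of population)."""
    S, A, I, V, R = state
    new_inf = beta * S * (I + A)               # force of infection from I and A
    dS = -new_inf - nu * S                     # infected or vaccinated
    dA = (1 - f) * new_inf - gamma_a * A       # fraction (1-f) stays asymptomatic
    dI = f * new_inf - gamma * I
    dV = nu * S                                # constant per-capita roll-out rate
    dR = gamma * I + gamma_a * A               # removed from both I and A
    return tuple(x + dt * d for x, d in zip(state, (dS, dA, dI, dV, dR)))

state = (0.98, 0.01, 0.01, 0.0, 0.0)           # S, A, I, V, R as fractions
for _ in range(2000):
    state = saivr_step(state, dt=0.1)
```

Since the right-hand sides sum to zero, S + A + I + V + R is conserved exactly at every step, which is a convenient sanity check whether the equations are solved by a neural network or by a classical integrator.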

Reservoir computers (RCs) are among the fastest neural networks to train, especially when compared with other recurrent neural networks, and they retain this advantage while handling sequential data exceptionally well. However, RC adoption has lagged behind other neural network models because of the model's sensitivity to its hyper-parameters (HPs). A modern, unified software package that automatically tunes these parameters is missing from the literature. Manually tuning these numbers is very difficult, and the cost of traditional grid-search methods grows exponentially with the number of HPs considered, discouraging the use of RCs and limiting the complexity of the RC models that can be devised. We address these problems by introducing RcTorch, a PyTorch-based RC neural network package with automated HP tuning. Herein, we demonstrate the utility of RcTorch by using it to predict the complex dynamics of a driven pendulum acted upon by varying forces. This work includes coding examples. Example Python Jupyter notebooks can be found on our GitHub repository https://github.com/blindedjoy/RcTorch and documentation can be found at https://rctorch.readthedocs.io/.
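Rather than guess RcTorch's actual API, here is a NumPy sketch of the core problem the package automates: an echo-state network whose spectral radius and leak rate must be tuned (RcTorch uses Bayesian optimization; plain random search stands in here), trained to predict the next value of a sinusoidal stand-in for the pendulum signal:

```python
import numpy as np

def esn_test_rmse(spectral_radius, leak, n_res=100, train=400, test=100, seed=1):
    """One-step-ahead test RMSE of an echo-state network for the given HPs."""
    rng = np.random.default_rng(seed)        # fixed seed: same reservoir per trial
    t = np.arange(train + test + 1) * 0.05
    u = np.sin(t)                            # stand-in signal for the pendulum
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale W
    x = np.zeros(n_res)
    states = []
    for u_n in u[:-1]:                       # leaky reservoir update
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_n)
        states.append(x)
    states = np.array(states)
    # Ridge-regression readout: predict the next signal value from the state
    X, y = states[:train], u[1:train + 1]
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    pred = states[train:] @ w
    return np.sqrt(np.mean((pred - u[train + 1:]) ** 2))

# Random search over the two most sensitive HPs
hp_rng = np.random.default_rng(0)
trials = [(hp_rng.uniform(0.1, 1.4), hp_rng.uniform(0.1, 1.0)) for _ in range(20)]
best_rmse = min(esn_test_rmse(sr, lk) for sr, lk in trials)
```

Each trial retrains only the linear readout, which is why HP search dominates the cost of using an RC and why automating it, as RcTorch does, matters in practice.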

Imaging of moiré superlattices in heterostructure devices is a critically needed diagnostic tool for studying the electronic and optical phenomena induced by the periodic variation of atomic structure in these complex systems. Conventional imaging methods are destructive and insensitive to buried device geometries, preventing practical inspection. Here we report a versatile scanning probe microscopy technique employing infrared light for imaging moiré superlattices of twisted bilayer graphene encapsulated by hexagonal boron nitride. We map the pattern using the scattering dynamics of phonon polaritons launched in the hexagonal boron nitride capping layer via their interaction with the buried moiré superlattices. We explore the origin of the double-line features imaged and show the mechanism underlying the effective phase change of the phonon-polariton reflectance at domain walls. The nano-imaging tool developed here provides a non-destructive analytical approach for elucidating the complex physics of moiré-engineered heterostructures.

We demonstrate that even a small number of observers greatly improves the data-driven (model-free) long-term forecasting capability of LSTM networks, and we provide a framework for a consistent comparison between the RC and LSTM methods. We find that RC requires smaller training datasets than OLSTMs, but the latter require fewer observers. Both methods are benchmarked against feed-forward neural networks (FNNs), also trained to make predictions with observers (OFNNs).