Publications by Type: Conference Paper

Submitted
Shaan Desai, Marios Mattheakis, Hayden Joy, Pavlos Protopapas, and Stephen Roberts. Submitted. “One-Shot Transfer Learning of Physics-Informed Neural Networks.”
Solving differential equations efficiently and accurately sits at the heart of progress in many areas of scientific research, from classical dynamical systems to quantum mechanics. There is a surge of interest in using Physics-Informed Neural Networks (PINNs) to tackle such problems, as they provide numerous benefits over traditional numerical approaches. Despite this potential, transfer learning of PINNs has been underexplored. In this study, we present a general framework for transfer learning of PINNs that results in one-shot inference for linear systems of both ordinary and partial differential equations. This means that highly accurate solutions to many unknown differential equations can be obtained instantaneously without retraining an entire network. We demonstrate the efficacy of the proposed deep learning approach by solving several real-world problems, such as first- and second-order linear ordinary differential equations, the Poisson equation, and the time-dependent Schrödinger equation, a complex-valued partial differential equation.
2110.11286.pdf
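
The one-shot step hinges on the residual of a linear differential operator being linear in the final-layer weights, so those weights admit a closed-form least-squares solution instead of retraining. Below is a minimal, hypothetical Python sketch of that reasoning, with fixed random tanh features standing in for the pretrained PINN body (the paper's actual architecture and training procedure differ):

import numpy as np

# Stand-in for a pretrained PINN body: frozen nonlinear features H(t).
rng = np.random.default_rng(0)
m, n = 64, 200                               # number of features, collocation points
t = np.linspace(0.0, 5.0, n)[:, None]
W_in = rng.normal(size=(1, m))               # frozen weights of the feature map
b_in = rng.normal(size=(1, m))
H = np.tanh(t @ W_in + b_in)                 # H(t), shape (n, m)
dH = (1.0 - H**2) * W_in                     # dH/dt, computed analytically

# New linear ODE: u'(t) + a*u(t) = cos(t), u(0) = u0, with u(t) = H(t) @ w.
a, u0, lam = 2.0, 1.0, 10.0                  # lam weights the initial-condition row
A = np.vstack([dH + a * H, lam * H[:1]])     # physics-residual rows + IC row
rhs = np.vstack([np.cos(t), [[lam * u0]]])
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # "one-shot" closed-form readout
u = H @ w                                    # approximate solution on the grid

Swapping in a different linear operator or forcing term only changes A and rhs, which is what makes the inference one-shot.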
Marios Mattheakis, Hayden Joy, and Pavlos Protopapas. Submitted. “Unsupervised Reservoir Computing for Solving Ordinary Differential Equations.”

There is a wave of interest in using unsupervised neural networks for solving differential equations. The existing methods are based on feed-forward networks, while recurrent neural network differential equation solvers have not yet been reported. We introduce unsupervised reservoir computing (RC), an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs). We suggest an approach to calculate time derivatives of recurrent neural network outputs without using backpropagation. The internal weights of an RC are fixed, while only a linear output layer is trained, yielding efficient training. However, RC performance strongly depends on finding the optimal hyper-parameters, which is a computationally expensive process. We use Bayesian optimization to efficiently discover optimal sets in a high-dimensional hyper-parameter space and numerically show that one set is robust and can be used to solve an ODE for different initial conditions and time ranges. A closed-form formula for the optimal output weights is derived to solve first-order linear equations in a backpropagation-free learning process. We extend the RC approach to solving nonlinear systems of ODEs using a hybrid optimization method consisting of gradient descent and Bayesian optimization. Evaluation of linear and nonlinear systems of equations demonstrates the efficiency of the RC ODE solver.

2108.11417.pdf
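
A rough, hypothetical Python illustration of the reservoir idea: the recurrent weights stay fixed and only a linear readout is fitted, here by least squares on the residual of u'(t) = -u(t) with u(0) = 1. Time derivatives are approximated by finite differences in this sketch; the paper derives them without backpropagation by a different route.

import numpy as np

rng = np.random.default_rng(1)
N, n = 100, 400                                  # reservoir size, time steps
t = np.linspace(0.0, 5.0, n)
dt = t[1] - t[0]

W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1 (echo state)
W_in = rng.normal(size=N)
b_res = rng.normal(size=N)

r = np.zeros((n, N))                             # reservoir states driven by time
state = np.zeros(N)
for k in range(n):
    state = np.tanh(W @ state + W_in * t[k] + b_res)
    r[k] = state
dr = np.gradient(r, dt, axis=0)                  # finite-difference time derivative

lam, u0 = 10.0, 1.0
A = np.vstack([dr + r, lam * r[:1]])             # residual of u' + u = 0, plus IC row
b = np.concatenate([np.zeros(n), [lam * u0]])
w, *_ = np.linalg.lstsq(A, b, rcond=None)        # backpropagation-free linear readout
u = r @ w                                        # approximate solution of u' = -u

The hyper-parameters hidden in this sketch (spectral radius, input scaling, reservoir size) are exactly the kind of quantities the paper tunes with Bayesian optimization.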
2022
Anwesh Bhattacharya, Marios Mattheakis, and Pavlos Protopapas. 2022. “Encoding Involutory Invariance in Neural Networks.” In IJCNN at IEEE World Congress on Computational Intelligence.

In certain situations, Neural Networks (NN) are trained upon data that obey underlying physical symmetries. However, it is not guaranteed that NNs will obey the underlying symmetry unless it is embedded in the network structure. In this work, we explore a special kind of symmetry where functions are invariant with respect to involutory linear/affine transformations up to parity p = ±1. We develop mathematical theorems and propose NN architectures that ensure invariance and universal approximation properties. Numerical experiments indicate that the proposed models outperform baseline networks while respecting the imposed symmetry. An adaptation of our technique to convolutional NN classification tasks for datasets with inherent horizontal/vertical reflection symmetry is also proposed.

2106.12891.pdf
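
As a concrete (and deliberately simplified) example of building such an invariance into the architecture: if S is involutory, S·S = I, then symmetrizing any base network g as f(x) = g(x) + p·g(Sx) yields f(Sx) = p·f(x) exactly. The PyTorch sketch below is hypothetical and not the paper's exact construction:

import torch
import torch.nn as nn

class InvolutionSymmetricNet(nn.Module):
    def __init__(self, base: nn.Module, S: torch.Tensor, parity: int = 1):
        super().__init__()
        self.base = base
        self.parity = parity
        self.register_buffer("S", S)          # involutory matrix: S @ S = I

    def forward(self, x):
        # f(x) = g(x) + p * g(S x)  =>  f(S x) = p * f(x), since S @ S = I and p**2 = 1
        return self.base(x) + self.parity * self.base(x @ self.S.T)

# Usage: reflection of the first coordinate in 2D, even parity (p = +1).
S = torch.tensor([[-1.0, 0.0], [0.0, 1.0]])
g = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
f = InvolutionSymmetricNet(g, S, parity=+1)

x = torch.randn(5, 2)
assert torch.allclose(f(x), f(x @ S.T))       # invariance holds by construction

The universal-approximation results and convolutional variants discussed in the paper go beyond this simple symmetrization.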
Henry Jin, Marios Mattheakis, and Pavlos Protopapas. 2022. “Physics-Informed Neural Networks for Quantum Eigenvalue Problems.” In IJCNN at IEEE World Congress on Computational Intelligence.
Eigenvalue problems are critical to several fields of science and engineering. We expand on the method of using unsupervised neural networks for discovering eigenfunctions and eigenvalues for differential eigenvalue problems. The obtained solutions are given in an analytical and differentiable form that identically satisfies the desired boundary conditions. The network optimization is data-free and depends solely on the predictions of the neural network. We introduce two physics-informed loss functions. The first, called ortho-loss, motivates the network to discover pair-wise orthogonal eigenfunctions. The second, called norm-loss, enforces the discovery of normalized eigenfunctions and is used to avoid trivial solutions. We find that embedding even or odd symmetries into the neural network architecture further improves convergence for relevant problems. Lastly, a patience condition can be used to automatically recognize eigenfunction solutions. The proposed unsupervised learning method is used to solve the finite well, multiple finite wells, and hydrogen atom eigenvalue quantum problems.
2022_pinn_quantum.pdf
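
In schematic form (our notation, not copied from the paper), the training objective combines the residual of the eigenvalue equation with the two penalties described above:

\[
\mathcal{L} \;=\; \big\lVert \hat{H}\,\psi_\theta - E_\theta\,\psi_\theta \big\rVert^2
\;+\; \lambda_{\perp} \sum_{i \neq j} \big|\langle \psi_i, \psi_j \rangle\big|^2
\;+\; \lambda_{\mathrm{norm}} \big(\langle \psi_\theta, \psi_\theta \rangle - 1\big)^2 ,
\]

where the inner products are evaluated on the training grid; the first penalty (ortho-loss) pushes the discovered eigenfunctions toward pair-wise orthogonality, and the second (norm-loss) excludes the trivial solution ψ = 0.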
2020
Alessandro Paticchio, Tommaso Scarlatti, Marios Mattheakis, Pavlos Protopapas, and Marco Brambilla. 12/2020. “Semi-supervised Neural Networks solve an inverse problem for modeling Covid-19 spread.” In 2020 NeurIPS Workshop on Machine Learning and the Physical Sciences. NeurIPS.

Studying the dynamics of COVID-19 is of paramount importance for understanding the efficiency of restrictive measures and developing strategies to defend against upcoming contagion waves. In this work, we study the spread of COVID-19 using a semi-supervised neural network and assuming that a passive part of the population remains isolated from the virus dynamics. We start with an unsupervised neural network that learns solutions of differential equations for different modeling parameters and initial conditions. A supervised method then solves the inverse problem by estimating the optimal conditions that generate functions to fit the data for those infected by, recovered from, and deceased due to COVID-19. This semi-supervised approach incorporates real data to determine the evolution of the spread, the passive population, and the basic reproduction number for different countries.

2020_covid_2010.05074.pdf
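
For orientation, one way to encode the passive-population assumption in a compartmental model (our notation; the paper's precise formulation may differ) is to let only an active fraction of the total population N participate in the dynamics:

\[
\frac{dS}{dt} = -\frac{\beta S I}{N_a}, \qquad
\frac{dI}{dt} = \frac{\beta S I}{N_a} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \qquad
N_a = (1-p)\,N, \qquad R_0 = \frac{\beta}{\gamma}.
\]

In this picture, the unsupervised stage learns solutions of such a system as functions of the modeling parameters and initial conditions, and the supervised stage searches over those quantities, including the passive fraction p, so that the learned curves fit the reported infected, recovered, and deceased counts.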
Henry Jin, Marios Mattheakis, and Pavlos Protopapas. 12/2020. “Unsupervised Neural Networks for Quantum Eigenvalue Problems.” In 2020 NeurIPS Workshop on Machine Learning and the Physical Sciences. NeurIPS.
Eigenvalue problems are critical to several fields of science and engineering. We present a novel unsupervised neural network for discovering eigenfunctions and eigenvalues for differential eigenvalue problems, with solutions that identically satisfy the boundary conditions. A scanning mechanism is embedded, allowing the method to find an arbitrary number of solutions. The network optimization is data-free and depends solely on the predictions. The unsupervised method is used to solve the quantum infinite well and quantum oscillator eigenvalue problems.
2020_eigenvalues_2010.05075.pdf
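
A common way to make the boundary conditions hold identically (a sketch in our notation, not necessarily the paper's exact parametrization) is to multiply the network output by a function that vanishes on the boundary:

\[
\psi_\theta(x) = g(x)\, N_\theta(x), \qquad g(x_L) = g(x_R) = 0,
\quad \text{e.g. } g(x) = \big(1 - e^{\,x_L - x}\big)\big(1 - e^{\,x - x_R}\big),
\]

so that Dirichlet conditions are satisfied for any network output and the optimization can focus on the differential-equation residual, while the embedded scanning mechanism searches for successive eigenvalues.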
2019
Marios Mattheakis, Matthias Maier, Wei Xi Boo, and Efthimios Kaxiras. 9/2019. “Graphene epsilon-near-zero plasmonic crystals.” In NANOCOM '19: Proceedings of the Sixth Annual ACM International Conference on Nanoscale Computing and Communication. Dublin, Ireland.
Plasmonic crystals are a class of optical metamaterials that consist of engineered structures at the sub-wavelength scale. They exhibit optical properties that are not found under normal circumstances in nature, such as negative-refractive-index and epsilon-near-zero (ENZ) behavior. Graphene-based plasmonic crystals present linear, elliptical, or hyperbolic dispersion relations that exhibit ENZ behavior and normal or negative-index diffraction. The optical properties can be dynamically tuned by controlling the operating frequency and the doping level of graphene. We propose a construction approach to expand the frequency range of the ENZ behavior. We demonstrate how combining a host material that has an optical Lorentzian response with a graphene conductivity that follows a Drude model leads to an ENZ condition spanning a large frequency range.
1906.00018.pdf
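
In a standard effective-medium description of such a graphene/dielectric stack (our notation, shown only to make the ENZ condition concrete), the in-plane permittivity combines the Lorentzian host with the Drude-like sheet conductivity of graphene:

\[
\varepsilon_{\parallel}(\omega) = \varepsilon_d(\omega) + i\,\frac{\sigma_s(\omega)}{\varepsilon_0\,\omega\, d},
\qquad
\sigma_s(\omega) = \frac{i e^2 E_F}{\pi \hbar^2\,(\omega + i \tau^{-1})},
\qquad
\varepsilon_d(\omega) = \varepsilon_\infty + \frac{f\,\omega_0^2}{\omega_0^2 - \omega^2 - i\gamma\omega},
\]

where d is the spacing between graphene sheets and E_F the tunable Fermi level. The ENZ regime corresponds to Re ε_∥(ω) ≈ 0, and the interplay of the Lorentzian and Drude terms is what widens the frequency window over which this condition can be met.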
2017
O. V. Shramkova, Marios Mattheakis, and G. P. Tsironis. 2017. “Amplification of surface plasmons in active nonlinear hyperbolic systems.” In 47th European Microwave Conference (EuMC), pp. 488-491. Nuremberg, Germany.
In this paper, we study the propagation of surface waves at the boundary between an amplifying isotropic medium and a hyperbolic metamaterial. We demonstrate that the gain material can be used to counterbalance the losses in the hyperbolic medium. We show that the gain-loss balance can be maintained even in the presence of nonlinear saturation, leading to surface wave amplification.
activesp_ieeeproceedings_2017.pdf
2014
C. Athanasopoulos, M. Mattheakis, and G. P. Tsironis. 2014. “Enhanced surface plasmon polariton propagation induced by active dielectrics.” In Excerpt from the Proceedings of the 2014 COMSOL Conference in Cambridge. Cambridge, UK: COMSOL.

We present numerical simulations for the propagation of surface plasmon polaritons (SPPs) in a dielectric-metal-dielectric waveguide using COMSOL Multiphysics software. We show that the use of an active dielectric with gain that compensates the metal absorption losses substantially enhances plasmon propagation. Furthermore, the introduction of the active material induces, for a specific gain value, a root in the imaginary part of the propagation constant, leading to infinite propagation of the surface plasmon. The computational approaches analyzed in this work can be used to define and tune the optimal conditions for surface plasmon polariton amplification and propagation.

mattheakis_activeSPPsCOMSOL.pdf
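
For context, a textbook single-interface estimate (not the full waveguide computed in the paper) makes the gain-compensation mechanism explicit:

\[
\beta = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\qquad
L_{\mathrm{SPP}} = \frac{1}{2\,\mathrm{Im}\,\beta}.
\]

With a lossy metal (Im ε_m > 0 in the e^{i(βx−ωt)} convention) and an active dielectric whose gain contributes an imaginary part of opposite sign, Im β can be tuned to zero at a specific gain value, at which point the propagation length formally diverges; this is the effect reproduced numerically for the dielectric-metal-dielectric waveguide.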