By using an asymptotic analysis and numerical simulations, we derive and investigate a system of homogenized Maxwell's equations for conducting material sheets that are periodically arranged and embedded in a heterogeneous and anisotropic dielectric host. This structure is motivated by the need to design plasmonic crystals that enable the propagation of electromagnetic waves with no phase delay (epsilon-near-zero effect). Our microscopic model incorporates the surface conductivity of the two-dimensional (2D) material of each sheet and, through a line conductivity, a corresponding line charge density along possible edges of the sheets. Our analysis generalizes averaging principles inherent in previous Bloch-wave approaches. We investigate the physical implications of our findings. In particular, we emphasize the role of the vector-valued corrector field, which expresses microscopic modes of surface waves on the 2D material. By using a Drude model for the surface conductivity of the sheet, we construct a Lorentzian function that describes the effective dielectric permittivity tensor of the plasmonic crystal as a function of frequency.
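The Drude-to-Lorentzian step can be sketched with a textbook thin-sheet effective-medium estimate; this heuristic is not the homogenized tensor derived in the work above, and the symbols (chemical potential \(\mu_c\), relaxation time \(\tau\), stacking period \(d\), host permittivity \(\varepsilon_h\)) are assumptions introduced here for illustration:

```latex
% Standard Drude-type surface conductivity of a doped graphene sheet
\sigma(\omega) \;=\; \frac{i\, e^{2}\mu_{c}}{\pi\hbar^{2}\,(\omega + i/\tau)} ,
\qquad
% Thin-sheet effective-medium estimate for sheets stacked with period d
\varepsilon_{\mathrm{eff}}(\omega) \;\approx\;
\varepsilon_{h} + \frac{i\,\sigma(\omega)}{\varepsilon_{0}\,\omega\, d}
\;=\;
\varepsilon_{h} \;-\; \frac{e^{2}\mu_{c}}{\pi\hbar^{2}\varepsilon_{0}\, d}\,
\frac{1}{\omega\,(\omega + i/\tau)} .
```

Substituting the Drude conductivity into the thin-sheet estimate yields a permittivity with the Lorentzian (here, Drude-like) frequency dependence referred to above.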
Plasmonic crystals are a class of optical metamaterials that consist of structures engineered at the sub-wavelength scale. They exhibit optical properties not found in nature under normal circumstances, such as a negative refractive index and epsilon-near-zero (ENZ) behavior. Graphene-based plasmonic crystals exhibit linear, elliptical, or hyperbolic dispersion relations, which give rise to ENZ behavior and to normal or negative-index diffraction. These optical properties can be dynamically tuned by controlling the operating frequency and the doping level of the graphene. We propose a construction approach that expands the frequency range of the ENZ behavior: we demonstrate how combining a host material that has an optical Lorentzian response with a graphene conductivity that follows a Drude model leads to an ENZ condition spanning a large frequency range.
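The ENZ mechanism described above can be illustrated with a toy calculation: a Lorentzian host permittivity plus a negative Drude-type sheet contribution produces a frequency at which the real part of the effective permittivity crosses zero. Everything below is a hedged sketch in normalized frequency units; the functional forms are standard and all parameter values are illustrative stand-ins, not the material data of the paper.

```python
# Illustrative (made-up) parameters in normalized frequency units.
EPS_INF, DELTA, W0, GAMMA = 2.0, 3.0, 1.0, 0.1   # Lorentzian host
WD, TAU = 2.0, 10.0                               # Drude (sheet) term

def eps_eff(w):
    """Host Lorentzian plus a Drude-type sheet contribution."""
    host = EPS_INF + DELTA * W0**2 / (W0**2 - w**2 - 1j * GAMMA * w)
    drude = -WD**2 / (w * (w + 1j / TAU))
    return host + drude

def enz_crossing(lo, hi, iters=80):
    """Bisect for a zero of Re(eps_eff) on [lo, hi]; assumes a sign change."""
    f_lo = eps_eff(lo).real
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if eps_eff(mid).real * f_lo > 0:
            lo, f_lo = mid, eps_eff(mid).real
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these stand-in parameters, `eps_eff` is negative at low frequency (Drude-dominated) and positive at high frequency (host-dominated), so `enz_crossing(0.5, 3.0)` locates an ENZ frequency between them.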
Neural networks are a central technique in machine learning. Recent years have seen a wave of interest in applying neural networks to physical systems for which the governing dynamics are known and expressed through differential equations. Two fundamental challenges facing the development of neural networks in physics applications are their lack of interpretability and their physics-agnostic design. The focus of the present work is to embed physical constraints into the structure of the neural network in order to address the second fundamental challenge. By constraining tunable parameters (such as weights and biases) and adding special layers to the network, the desired constraints are guaranteed to be satisfied without the need for explicit regularization terms. This is demonstrated on supervised and unsupervised networks for two basic symmetries: even/odd symmetry of a function and energy conservation. In the supervised case, the network with embedded constraints is shown to perform well on regression problems while simultaneously obeying the desired constraints, whereas a traditional network fits the data but violates the underlying constraints. Finally, a new unsupervised neural network is proposed that guarantees energy conservation through an embedded symplectic structure. The symplectic neural network is used to solve a system of energy-conserving differential equations and outperforms an unsupervised, non-symplectic neural network.
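The odd-symmetry embedding admits a minimal sketch: antisymmetrizing an arbitrary network g via f(x) = (g(x) - g(-x))/2 makes f odd for *any* parameter values, so the constraint holds exactly with no regularization term. The tiny untrained MLP below is a hypothetical stand-in, not the architecture used in the work above.

```python
import math
import random

random.seed(0)
H = 16
# Random, untrained one-hidden-layer MLP g: R -> R. The symmetry below
# holds for ANY weights, which is the point of embedding the constraint
# in the structure rather than the loss.
W1 = [random.gauss(0, 1) for _ in range(H)]
b1 = [random.gauss(0, 1) for _ in range(H)]
W2 = [random.gauss(0, 1) for _ in range(H)]
b2 = random.gauss(0, 1)

def g(x):
    hidden = [math.tanh(w * x + b) for w, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def f_odd(x):
    # Antisymmetrization "layer": f(-x) = -f(x) exactly, by construction.
    return 0.5 * (g(x) - g(-x))
```

Replacing the minus signs with plus signs gives the even-symmetric counterpart, f(x) = (g(x) + g(-x))/2, by the same argument.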
Chimeras and branching are two archetypical complex phenomena that appear in many physical systems; because of their different intrinsic dynamics, they delineate opposite non-trivial limits in the complexity of wave motion and present severe challenges in predicting chaotic and singular behavior in extended physical systems. We report on the long-term forecasting capability of Long Short-Term Memory (LSTM) and reservoir computing (RC) recurrent neural networks, when they are applied to the spatiotemporal evolution of turbulent chimeras in simulated arrays of coupled superconducting quantum interference devices (SQUIDs) or lasers, and branching in the electronic flow of two-dimensional graphene with random potential. We propose a new method in which we assign one LSTM network to each system node, except for “observer” nodes, which provide continual “ground truth” measurements as input; we refer to this method as “Observer LSTM” (OLSTM). We demonstrate that even a small number of observers greatly improves the data-driven (model-free) long-term forecasting capability of the LSTM networks, and we provide the framework for a consistent comparison between the RC and LSTM methods. We find that RC requires smaller training datasets than OLSTMs, but the latter require fewer observers. Both methods are benchmarked against Feed-Forward neural networks (FNNs), also trained to make predictions with observers (OFNNs).
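The observer idea can be sketched as a closed-loop rollout in which most nodes advance via their own learned one-step predictor while observer nodes are overwritten with measurements at every step. In the sketch below, the per-node predictors are trivial damped maps standing in for trained per-node LSTMs; the function names, node count, and parameters are hypothetical, not taken from the work above.

```python
def observer_rollout(predictors, x0, truth, observers, steps):
    """Free-running forecast where each node advances via its own
    one-step predictor, except observer nodes, whose values are
    overwritten with ground-truth measurements at every step."""
    state = list(x0)
    trajectory = []
    for t in range(steps):
        nxt = [predict(state) for predict in predictors]
        for i in observers:
            nxt[i] = truth[t][i]  # inject the measurement
        state = nxt
        trajectory.append(list(state))
    return trajectory

# Toy 4-node demo: each "predictor" is a damped copy of its own node,
# a hypothetical stand-in for a trained per-node LSTM.
predictors = [lambda s, i=i: 0.9 * s[i] for i in range(4)]
truth = [[0.1 * t + i for i in range(4)] for t in range(5)]
traj = observer_rollout(predictors, [1.0] * 4, truth, observers=[0], steps=5)
```

Because each predictor may read the full `state`, measurements injected at the observer nodes propagate through the coupling and can re-anchor the free-running nodes, which is the mechanism behind the improvement reported above.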