The violation of the Floquet version of the eigenstate thermalization hypothesis is systematically discussed for realistic Hamiltonians. Our model is based on PXP-type interactions without disorder. We rigorously prove the existence of many-body scar states among the Floquet eigenstates by presenting explicit expressions for their wave functions. Using the underlying physical mechanism, various driven Hamiltonians with Floquet scar states can be systematically engineered.

We examine the existence of completely separable ground states (GS) in finite spin-$s$ arrays with anisotropic $XYZ$ couplings immersed in a non-uniform magnetic field along one of the principal axes of the coupling. The general conditions for their existence are determined. The separability curve in field space for alternating solutions is then derived, together with simple analytic expressions for the ensuing factorized state and GS energy, valid for any spin and size. It is also shown that this curve corresponds to the fundamental $S_z$-parity transition of the GS, present for any spin, in agreement with the breaking of this symmetry by the factorized GS, and that two different types of GS parity diagrams in field space can emerge, according to the relative strength of the couplings. The role of factorization in the magnetization and entanglement of these systems is also analyzed, and analytic expressions for observables at the borders of the factorizing curve are derived. Illustrative examples for spin pairs and chains are discussed as well.

We present a compressive quantum process tomography scheme that fully characterizes any rank-deficient completely positive process with no a priori information about the process apart from the dimension of the system on which it acts. It uses randomly chosen input states and adaptive output von Neumann measurements. Both entangled and tensor-product configurations are flexibly employable in our scheme, the latter of which naturally makes it especially compatible with many-body quantum computing. Two main features of this scheme are a certification protocol that verifies whether the accumulated data uniquely characterize the quantum process, and a compressive reconstruction method for the output states. We emulate multipartite scenarios with high-order electromagnetic transverse modes and optical fibers to demonstrate that, in terms of measurement resources, our assumption-free compressive strategy can reconstruct quantum processes almost equally efficiently using all types of input states and basis measurements, independent of whether or not they are factorizable into tensor products.
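The linear-algebra idea behind such a certification step can be sketched for a single qubit: the accumulated data uniquely determine the process exactly when the sensing matrix built from the input states and measurement projectors has full rank. The sketch below, with randomly chosen pure inputs and random projective measurements, is an illustration of informational completeness only, not the authors' certification protocol.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_pure_density(rng):
    """Random qubit pure state as a 2x2 density matrix."""
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def sensing_matrix(inputs, projectors):
    """Rows of the linear map S -> Tr(M E(rho)) acting on vec(S),
    where S is the 4x4 superoperator of the qubit process."""
    rows = []
    for rho in inputs:
        for M in projectors:
            rows.append(np.kron(M.reshape(-1).conj(), rho.reshape(-1)))
    return np.array(rows)

inputs = [random_pure_density(rng) for _ in range(6)]
projectors = []
for _ in range(6):
    P = random_pure_density(rng)      # projector onto a random pure state
    projectors += [P, np.eye(2) - P]  # both outcomes of the measurement

A = sensing_matrix(inputs, projectors)
print(np.linalg.matrix_rank(A))       # 16: data determine the process uniquely

A_few = sensing_matrix(inputs[:2], projectors)
print(np.linalg.matrix_rank(A_few))   # < 16: these data cannot be unique
```

With generic random settings the rank reaches the 16 parameters of a qubit superoperator; restricting to two input states provably caps the rank at 8, so the data admit many consistent processes.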

The photonic environment can significantly influence the emission properties of, and interactions among, atomic systems. In such scenarios, the electric dipole approximation is frequently assumed, which is justified as long as the spatial extent of the atomic system is negligible compared to the spatial variations of the field. While this holds true for many canonical systems, it ceases to be applicable for more contemporary nanophotonic structures. In this article, we propose and develop an analytical framework that describes the impact of the photonic environment on the emission and interaction properties of atomic systems beyond the electric dipole approximation. In particular, we explicitly retain magnetic dipolar and electric quadrupolar contributions to the light-matter interaction. We exploit a field quantization scheme based on electromagnetic Green's tensors, suited for dispersive materials. We obtain expressions for the spontaneous emission rate, the Lamb shift, the multipole-multipole shift, and the superradiance rate, all of which are modified by the dispersive environment. This influence can be substantial for suitably tailored nanostructured photonic environments, as we demonstrate with examples.

Noisy intermediate-scale quantum computers are expected to become available this year. We propose to exploit such a device for decision making under uncertainty. The probabilistic character of quantum mechanics reflects this uncertainty, and the device noise may add to it. The approach is standard in the sense that the Bayes decision rule is used to decide on the basis of maximum expected reward. The novelty lies in modeling the various action profiles and the development of `nature' as unitary transformations on a set of qubits. Measurement eventually yields samples of classical binary random variables, in terms of which the reward function has to be expressed. To achieve variances low enough for reliable decision making, multiple runs of such a quantum algorithm are necessary. Some simple examples, for which the calculations are still analytically feasible, have been worked out to elucidate the idea. Lacking an operating quantum device at present, we used the QX simulator of Quantum Inspire to generate the necessary samples for comparison and demonstration. The first results obtained are promising and point to a possibly useful application for noisy intermediate-scale quantum computers.
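The decision step itself can be sketched classically: repeated measurement samples of a binary variable are averaged into an expected-reward estimate per action, and the Bayes rule picks the maximizer. The success probabilities and reward function below are hypothetical placeholders for the statistics a quantum device or the QX simulator would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two actions; each run of the (simulated) quantum
# circuit yields one binary measurement outcome, and the reward is a
# function of that classical bit.
true_success = {"action_a": 0.8, "action_b": 0.3}  # assumed, for illustration
def reward(bit):
    return 1.0 if bit == 1 else -1.0

def expected_reward(action, shots):
    """Estimate E[reward] from repeated (simulated) measurement samples."""
    samples = rng.random(shots) < true_success[action]
    return np.mean([reward(int(b)) for b in samples])

shots = 10_000  # more runs -> lower variance of the estimate
estimates = {a: expected_reward(a, shots) for a in true_success}
best = max(estimates, key=estimates.get)  # Bayes rule: maximize expected reward
print(best, estimates)
```

The sample size controls the variance of each estimate, which is the point made above: reliable decisions need enough runs to separate the expected rewards of the candidate actions.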

For any quantum algorithm, given as a path in the space of unitary operators, we define the computational complexity as the typical computational time associated with the path. This time is defined via a quantum time estimator associated with the path, which is fully characterized by the Lyapunov generator of the path and the corresponding quantum Fisher information. The computational metric associated with this definition of complexity leads to a natural characterization of cost factors on the Lie-algebra generators. Operator complexity growth in time is analyzed from this perspective, leading to a simple characterization of the Lyapunov exponent for chaotic Hamiltonians. The connection between complexity and entropy is expressed through the relation between the quantum Fisher information about quantum time estimation and the von Neumann entropy. This relation suggests a natural bound on computational complexity that generalizes the standard time-energy uncertainty relation. The connection between the Lyapunov generator and the modular Hamiltonian is briefly discussed. For theories with holographic duals, and for reduced density matrices defined by tracing over a bounded region of the bulk, quantum estimation theory is crucial to estimate quantum mechanically the geometry of the tracing region. It is suggested that the quantum Fisher information associated with this estimation problem is at the root of the holographic bulk geometry.

We present the application of Restricted Boltzmann Machines (RBMs) to the task of astronomical image classification using a quantum annealer built by D-Wave Systems. Morphological analysis of galaxies provides critical information for studying their formation and evolution across cosmic time scales. We compress the images using principal component analysis to fit a representation on the quantum hardware. Then, we train RBMs with discriminative and generative algorithms, including contrastive divergence and hybrid generative-discriminative approaches. We compare these methods to Quantum Annealing (QA), Markov Chain Monte Carlo (MCMC) Gibbs sampling, and Simulated Annealing (SA), as well as to machine learning algorithms such as gradient boosted decision trees. We find that RBMs implemented on D-Wave hardware perform well and show some classification-performance advantages on small datasets, but they do not offer a broad strategic advantage for this task. During this exploration, we analyzed the steps required for Boltzmann sampling with the D-Wave 2000Q, including a study of temperature estimation, and examined the impact of qubit noise by comparing the original D-Wave 2000Q with the lower-noise version recently made available. While these analyses ultimately had minimal impact on the performance of the RBMs, we include them for reference.
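The contrastive-divergence training mentioned above can be sketched in a few lines of classical code. This is a minimal CD-1 update on a tiny binary RBM with random toy data; in the setting described above, annealer-based Boltzmann sampling would replace the single Gibbs-style reconstruction step.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny binary RBM; sizes and learning rate are illustrative.
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.normal(size=(n_vis, n_hid))
a = np.zeros(n_vis)   # visible biases
b = np.zeros(n_hid)   # hidden biases

data = rng.integers(0, 2, size=(32, n_vis)).astype(float)  # toy binary data

def cd1_step(v0):
    """One contrastive-divergence (CD-1) parameter update."""
    global W, a, b
    ph0 = sigmoid(v0 @ W + b)                     # hidden activation given data
    h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T + a)                   # one-step reconstruction
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # positive - negative phase
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)

for epoch in range(5):
    for v in data:
        cd1_step(v)
```

The negative-phase statistics (`v1`, `ph1`) are exactly what a Boltzmann sampler, whether MCMC, SA, or a quantum annealer, is asked to supply.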

We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer. In this algorithm, we concatenate the independent and dependent variables into a combined vector and encode the high-order correlations between them in a dictionary optimized for sparse reconstruction. On a test dataset, the dependent variable is initialized to its average value, and a sparse reconstruction of the combined vector is then obtained in which the dependent variable is typically shifted closer to its true value, as in a standard inpainting or denoising task. Here, a quantum annealer, which can presumably exploit a fully entangled initial state to better explore the complex energy landscape, is used to solve the highly non-convex sparse-coding optimization problem. The regression algorithm is demonstrated on lattice quantum chromodynamics simulation data using a D-Wave 2000Q quantum annealer, and good prediction performance is achieved. The regression test is performed using six different values for the number of fully connected logical qubits, between 20 and 64, the latter being the maximum that can be embedded on the D-Wave 2000Q. The scaling results indicate that a larger number of qubits gives better prediction accuracy, with the best performance comparable to the best classical regression algorithms reported so far.
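The inpainting view of regression can be illustrated with a deliberately simplified classical sketch: a dictionary of combined [x; y] atoms, a 1-sparse "reconstruction" in place of the annealer-solved sparse-coding problem, and a hypothetical linear relation y = 2x + 1 as the data.

```python
import numpy as np

# Toy dictionary of combined [x; y] atoms; the relation y = 2x + 1 and the
# grid of training points are hypothetical choices for illustration.
x_train = np.linspace(-1, 1, 21)
D = np.stack([x_train, 2 * x_train + 1])  # columns are combined [x; y] atoms

def predict(x_query):
    """Initialize y to its dictionary average, then pick the single atom
    closest to the combined vector and read off its y-part (1-sparse
    stand-in for the non-convex sparse-coding step)."""
    y_init = D[1].mean()
    v = np.array([x_query, y_init])
    best = np.argmin(np.linalg.norm(D - v[:, None], axis=0))
    return y_init, D[1, best]

y_init, y_hat = predict(0.5)
# y_hat moves from the initial guess toward the true value 2*0.5 + 1 = 2
assert abs(y_hat - 2.0) < abs(y_init - 2.0)
```

Even this crude 1-sparse version exhibits the behavior described above: the reconstructed dependent variable is pulled away from its initialization toward its true value, though a proper multi-atom sparse code is what makes the method competitive.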

We study dipolar-coupled quantum many-spin systems with local disorder, subject to periodic pulse driving, in different spatial dimensions: from two-dimensional to (effectively) infinite-dimensional systems. Using direct numerical simulations, we show that these systems exhibit a long-lived magnetization response in all dimensions, despite strong fluctuations in the spin-spin couplings and correspondingly strong singularities in the spin dynamics. We observe the long-lived magnetization response for the initial polarization oriented either along the driving pulses or along the axis conserved by the internal Hamiltonian. For longer time delays, the magnetization echoes exhibit an even-odd asymmetry, i.e.\ the system's response is modulated with a period twice that of the driving. The above results are corroborated by a Floquet-operator analysis.

Quantum decoherence plays a pivotal role in the dynamical description of the quantum-to-classical transition and is the main impediment to the realization of devices for quantum information processing. This paper gives an overview of the theory and experimental observation of the decoherence mechanism. We introduce the essential concepts and the mathematical formalism of decoherence, focusing on the picture of the decoherence process as a continuous monitoring of a quantum system by its environment. We review several classes of decoherence models and discuss the description of the decoherence dynamics in terms of master equations. We survey methods for avoiding and mitigating decoherence and give an overview of several experiments that have studied decoherence processes. We also comment on the role decoherence may play in interpretations of quantum mechanics and in addressing foundational questions.

We study the characteristics of thermalizing and non-thermalizing operators in integrable theories as we turn on a non-integrable deformation. Specifically, we show that $\sigma^z$, an operator that thermalizes in the integrable transverse-field Ising model, has mean matrix elements that resemble ETH, but with fluctuations around the mean that are sharply suppressed. This suppression rapidly dwindles as the Ising model is made non-integrable by turning on a longitudinal field. We also construct a non-thermalizing operator in the integrable regime, which slowly approaches the ETH form as the theory becomes non-integrable. At intermediate values of the non-integrable deformation, this operator displays a perturbatively long relaxation time.
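Matrix-element diagnostics of this kind can be reproduced in miniature by exact diagonalization. The sketch below builds a small Ising chain with both transverse and longitudinal fields and tabulates the matrix elements of $\sigma^z$ on the first site in the energy eigenbasis; the chain length and couplings are illustrative, not the paper's parameters.

```python
import numpy as np

# H = -sum sz.sz - g sum sx - h sum sz on an open chain; h != 0 breaks
# integrability (illustrative parameters).
L, g, h = 8, 1.05, 0.5
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(op, i):
    """op acting on site i of an L-site chain."""
    out = np.eye(1)
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

H = np.zeros((2**L, 2**L))
for i in range(L - 1):
    H -= site_op(sz, i) @ site_op(sz, i + 1)
for i in range(L):
    H -= g * site_op(sx, i) + h * site_op(sz, i)

E, V = np.linalg.eigh(H)
Mz = V.T @ site_op(sz, 0) @ V                 # <m| sz_0 |n> in energy eigenbasis
offdiag = Mz[~np.eye(2**L, dtype=bool)]
print(np.std(offdiag))                        # fluctuation scale of off-diagonal elements
```

Comparing the off-diagonal fluctuation statistics at $h = 0$ and $h \neq 0$ is the kind of diagnostic the text describes, with ETH predicting small, exponentially suppressed off-diagonal elements in the chaotic case.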

With the long-term goal of studying quantum gravity in the lab, we propose holographic teleportation protocols that can be readily executed in table-top experiments. These protocols exhibit similar behavior to that seen in recent traversable wormhole constructions: information that is scrambled into one half of an entangled system will, following a weak coupling between the two halves, unscramble into the other half. We introduce the concept of "teleportation by size" to capture how the physics of operator-size growth naturally leads to information transmission. The transmission of a signal through a semi-classical holographic wormhole corresponds to a rather special property of the operator-size distribution we call "size winding". For more general setups (which may not have a clean emergent geometry), we argue that imperfect size winding is a generalization of the traversable wormhole phenomenon. For example, a form of signalling continues to function at high temperature and at large times for generic chaotic systems, even though it does not correspond to a signal going through a geometrical wormhole, but rather to an interference effect involving macroscopically different emergent geometries. Finally, we outline implementations feasible with current technology in two experimental platforms: Rydberg atom arrays and trapped ions.

We determine the annealing time of Grover's search required to obtain a desired success probability for quantum annealing governed by the imaginary-time and the real-time Schr\"{o}dinger equations, using two kinds of scheduling: one linearly decreases the quantum fluctuation, and the other tunes the evolution rate of the Hamiltonian based on the adiabatic condition. With linear scheduling, the annealing time required under the imaginary-time Schr\"{o}dinger equation is of order $\log N$, very different from the $O(N)$ required under the real-time Schr\"{o}dinger equation. With scheduling based on the adiabatic condition, the required annealing time is of order $\sqrt{N}$ under both equations. Although scheduling based on the adiabatic condition is optimal for quantum annealing under the real-time Schr\"{o}dinger equation, it is inefficient under the imaginary-time Schr\"{o}dinger equation. This implies that the optimal schedules for imaginary-time and real-time quantum annealing are very different, and that efficient schedules derived with quantum Monte Carlo methods, which are based on the imaginary-time Schr\"{o}dinger equation, are not necessarily effective for improving the performance of real-time quantum annealing. We discuss efficient scheduling for quantum annealing under the imaginary-time Schr\"{o}dinger equation in terms of the exponential decay of excited states.
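The $\sqrt{N}$ time under the adiabatic condition traces back to the minimum spectral gap of the Grover Hamiltonian, which closes as $1/\sqrt{N}$ at the middle of the anneal. This standard construction can be checked numerically (shown here for illustration):

```python
import numpy as np

# Grover adiabatic Hamiltonian H(s) = (1-s)(I - |+><+|) + s(I - |m><m|),
# with |+> the uniform superposition and |m> the marked state.
N = 64
plus = np.full(N, 1 / np.sqrt(N))
m = np.zeros(N)
m[0] = 1.0
P_plus, P_m = np.outer(plus, plus), np.outer(m, m)

def gap(s):
    """Gap between the two lowest eigenvalues of H(s)."""
    H = (1 - s) * (np.eye(N) - P_plus) + s * (np.eye(N) - P_m)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

gaps = [gap(s) for s in np.linspace(0, 1, 201)]
min_gap = min(gaps)
print(min_gap, 1 / np.sqrt(N))  # minimum gap at s = 1/2 equals 1/sqrt(N)
```

A schedule satisfying the adiabatic condition slows down precisely where this gap is smallest, which is what produces the overall $O(\sqrt{N})$ real-time annealing time.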

In this paper, we show an interesting connection between a quantum sampling technique and quantum uncertainty. Namely, we use the quantum sampling technique introduced by Bouman and Fehr to derive a novel entropic uncertainty relation based on the smooth min-entropy, the binary Shannon entropy of an observed outcome, and the failure probability of a classical sampling strategy. We then show two applications of our new relation. First, we use it to develop a simple proof of a version of the Maassen and Uffink uncertainty relation. Second, we show how it may be applied to quantum random number generation.
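The qubit instance of the Maassen-Uffink relation mentioned above, $H(X) + H(Z) \geq 1$ bit for Pauli $X$ and $Z$ measurements, can be spot-checked numerically. This is an illustration of the bound, not the paper's proof technique.

```python
import numpy as np

rng = np.random.default_rng(3)

def shannon(p):
    """Shannon entropy in bits, ignoring numerically zero probabilities."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def entropies(psi):
    pz = np.abs(psi) ** 2                          # Z-basis outcome probabilities
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    px = np.abs(hadamard @ psi) ** 2               # X-basis outcome probabilities
    return shannon(px), shannon(pz)

sums = []
for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                     # random pure qubit state
    Hx, Hz = entropies(psi)
    sums.append(Hx + Hz)

print(min(sums))  # never below 1 bit, as the bound requires
```

The constant 1 bit is $-\log_2 c$ with overlap $c = \max_{x,z} |\langle x|z\rangle|^2 = 1/2$ for the mutually unbiased Pauli bases, matching the general Maassen-Uffink form.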

The Einstein Equivalence Principle (EEP) underpins all metric theories of gravity. Its key element is the local position invariance of non-gravitational experiments, which entails the gravitational red-shift. Precision measurements of the gravitational red-shift tightly bound violations of the EEP only in the fermionic sector of the Standard Model; however, recent developments in satellite optical technologies allow for its investigation in the electromagnetic sector. Proposals exploiting light interferometry have traditionally suffered from the first-order Doppler effect, which dominates the weak gravitational signal necessary to test the EEP, making them unfeasible. Here, we propose a novel scheme to test the EEP, based on a double large-distance optical interferometric measurement. By manipulating the phase shifts detected at two locations at different gravitational potentials, it is possible to cancel out the first-order Doppler effect and observe the gravitational red-shift implied by the EEP. We present a detailed analysis of the proposal within the post-Newtonian framework, together with simulations of the expected signals for two realistic satellite orbits. Our scheme to overcome the first-order Doppler effect in optical EEP tests is feasible with current technology.

Semi-quantum key distribution protocols are designed to allow two parties to establish a shared secret key, secure against an all-powerful adversary, even when one of the users is restricted to measuring and preparing quantum states in a single basis. While interesting from a theoretical standpoint, these protocols have the disadvantage that a two-way quantum communication channel is necessary, which generally limits their theoretical efficiency and noise tolerance. In this paper, we construct a new semi-quantum key distribution (SQKD) protocol that actually takes advantage of this necessary two-way channel and, after performing an information-theoretic security analysis against collective attacks, we show that it is able to tolerate a channel noise level higher than any prior SQKD protocol to date. We also compare the noise tolerance of our protocol to that of other two-way fully quantum protocols, along with BB84 with classical advantage distillation (CAD). We further comment on some practical issues involving semi-quantum key distribution (in particular, the potential complexity of a physical implementation of our protocol compared with other standard QKD protocols). Finally, we develop techniques that can be applied to the security analysis of other (S)QKD protocols reliant on a two-way quantum communication channel.

In a measurement-device-independent or quantum-refereed protocol, a referee can verify whether two parties share entanglement or Einstein-Podolsky-Rosen (EPR) steering without needing to trust either of the parties or their devices. The need for trusting a party is substituted by a quantum channel between the referee and that party, through which the referee encodes the measurements to be performed on that party's subsystem in a set of nonorthogonal quantum states. In this Letter, an EPR-steering inequality is adapted as a quantum-refereed EPR-steering witness, and the trust-free experimental verification of higher-dimensional quantum steering is reported via the preparation of a class of entangled photonic qutrits. Further, with two measurement settings, we extract $1.106\pm0.023$ bits of private randomness per photon pair from our observed data, surpassing the one-bit limit for projective measurements performed on qubit systems. Our results advance research on quantum information processing tasks beyond qubits.

Rydberg atoms are at the core of an increasing number of experiments, which frequently rely on destructive detection methods such as field ionization. Here, we present an experimental realization of single-shot non-destructive detection of ensembles of helium Rydberg atoms. We use the dispersive frequency shift of a superconducting microwave cavity interacting with the ensemble. By probing the transmission of the cavity and measuring the change in its phase, we determine the number of Rydberg atoms, or the populations of Rydberg quantum states when the ensemble is prepared in a superposition. At the optimal probe power, determined by the critical photon number, we reach single-shot detection of the atom number with 13% precision for ensembles of about 500 Rydberg atoms, with a measurement backaction corresponding to a population transfer of approximately 2%.

Matrix-product states have become the de facto standard for the representation of one-dimensional quantum many-body states. During the last few years, numerous new methods have been introduced to evaluate the time evolution of a matrix-product state. Here, we review and summarize the recent work on this topic as applied to finite quantum systems. We explain and compare the different methods available to construct a time-evolved matrix-product state, namely the time-evolving block decimation, the MPO $W^\mathrm{II}$ method, the global Krylov method, the local Krylov method, and the one- and two-site time-dependent variational principle. We also apply these methods to four representative examples of current problem settings in condensed matter physics.
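The even-odd Trotter splitting that underlies time-evolving block decimation can be illustrated without any MPS machinery by comparing the split propagator with the exact one on a small chain. The model and parameters below are illustrative (a 4-site transverse-field Ising chain with the field split over bonds and boundary terms simplified), not an MPS code.

```python
import numpy as np
from scipy.linalg import expm

L, g, t = 4, 1.0, 0.5
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def two_site(op2, i):
    """Embed a two-site operator on bond (i, i+1) of an L-site chain."""
    return np.kron(np.kron(np.eye(2**i), op2), np.eye(2**(L - i - 2)))

# bond Hamiltonian h_i = -sz.sz - (g/2)(sx.I + I.sx); boundary terms simplified
h_bond = (-np.kron(sz, sz)
          - g / 2 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx)))
H_odd = sum(two_site(h_bond, i) for i in range(0, L - 1, 2))
H_even = sum(two_site(h_bond, i) for i in range(1, L - 1, 2))
H = H_odd + H_even

def trotter_error(n_steps):
    """Distance between the first-order Trotter propagator and the exact one."""
    dt = t / n_steps
    U_step = expm(-1j * H_odd * dt) @ expm(-1j * H_even * dt)
    U = np.linalg.matrix_power(U_step, n_steps)
    return np.linalg.norm(U - expm(-1j * H * t))

print(trotter_error(10), trotter_error(100))  # error shrinks with step size
```

In TEBD, each `expm(-1j * h_bond * dt)` becomes a local two-site gate applied to the MPS followed by a truncation; the Trotter error behaves exactly as in this dense-matrix check.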

The Potts model is a generalization of the Ising model to $Q>2$ components. In the fully connected ferromagnetic Potts model, a first-order phase transition is induced by varying thermal fluctuations, and the computational time required to obtain the ground states by simulated annealing therefore increases exponentially with the system size. This study analytically confirms that transverse-field quantum annealing also induces a first-order phase transition, which implies that quantum annealing does not exponentially accelerate the ground-state search of the ferromagnetic Potts model. To avoid the first-order phase transition, we propose an iterative optimization method using a half-hot constraint that is applicable to both quantum and simulated annealing. In the limit $Q \to \infty$, the saddle-point equation under the half-hot constraint is identical to the equation describing the behavior of the fully connected ferromagnetic Ising model, thus confirming a second-order phase transition. Furthermore, we verify the same relation between the fully connected Potts glass model and the Sherrington--Kirkpatrick model under the assumptions of the static approximation and a replica-symmetric solution. The proposed method is expected to obtain low-energy states of Potts models with high efficiency on Ising-type computers such as the D-Wave quantum annealer and the Fujitsu Digital Annealer.
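For reference, the standard one-hot binary encoding of the Potts model, which constraints such as the half-hot one are designed to improve upon, can be written down and brute-forced for a tiny instance. Sizes, couplings, and the penalty weight below are illustrative.

```python
import itertools
import numpy as np

# Fully connected ferromagnetic Potts model with n sites and Q colors,
# encoded in n*Q binary variables with a one-hot penalty per site.
n, Q, J, A = 3, 3, 1.0, 2.0

def energy(x):
    """QUBO-style energy; x[i][q] = 1 iff site i takes color q."""
    x = np.asarray(x).reshape(n, Q)
    ferro = -J * sum(x[i] @ x[j] for i in range(n) for j in range(i + 1, n))
    penalty = A * np.sum((x.sum(axis=1) - 1) ** 2)  # enforce one-hot per site
    return ferro + penalty

# Brute force over all 2^(n*Q) binary configurations.
best = min(itertools.product([0, 1], repeat=n * Q), key=energy)
best = np.asarray(best).reshape(n, Q)
print(best)  # every site one-hot and all sites aligned on the same color
```

The ground state is the aligned, one-hot configuration, degenerate over the $Q$ colors; an annealer would be handed the same quadratic objective in place of the brute-force search.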