Recently, several platforms have been proposed, and demonstrated proof-of-principle operation, for finding the global minimum of spin Hamiltonians such as the Ising and XY models using gain-dissipative quantum and classical systems. The dynamical adjustment of gain and coupling strengths has been established as a vital feedback mechanism for analog physical systems that aim to simulate spin Hamiltonians. Based on the operating principle of such simulators, we develop a novel class of gain-dissipative algorithms for the global optimisation of NP-hard problems and compare their performance with that of classical global optimisation algorithms. These algorithms can be used to study the ground-state and statistical properties of spin systems and serve as a direct benchmark for performance testing of the gain-dissipative physical simulators. Estimates of the operation time of physical implementations of gain-dissipative simulators for large matrices show a possible speed-up of several orders of magnitude in comparison with classical computations.
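To illustrate the operating principle, a minimal sketch of gain-dissipative Ising minimisation (the equations of motion, feedback rule, and parameter values here are generic assumptions, not the paper's algorithm): soft-spin amplitudes $x_i$ evolve under saturable gain plus Ising coupling, while per-site gains are fed back so that all amplitudes settle to equal magnitude and their signs encode the spin configuration:

```python
import numpy as np

def gain_dissipative_ising(J, steps=20000, dt=0.01, eps=0.1, seed=0):
    """Illustrative gain-dissipative Ising minimiser (assumed dynamics):
    dx_i/dt = x_i*(gamma_i - x_i^2) + sum_j J_ij x_j,
    with feedback driving every |x_i| toward 1 so that sign(x_i)
    yields a candidate spin configuration."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)   # small random initial amplitudes
    gamma = np.zeros(n)                 # per-site gains
    for _ in range(steps):
        x += dt * (x * (gamma - x**2) + J @ x)
        gamma += dt * eps * (1.0 - x**2)  # feedback: equalise |x_i| -> 1
    s = np.sign(x)
    energy = -0.5 * s @ J @ s           # Ising energy of the sign pattern
    return s, energy
```

On a two-spin antiferromagnetic instance, `J = [[0, -1], [-1, 0]]`, the dynamics relax to an anti-aligned configuration with Ising energy $-\tfrac12 s^\top J s = -1$.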

In the context of ultra-fast quantum communication and random number generation, detection timing jitter represents a strong limitation, as it can introduce major time-tagging errors and affect the quality of time-correlated photon counting or quantum state engineering. Despite its importance in emerging photonic quantum technologies, no detector model including such effects has been developed so far. We propose here an operational theoretical model, based on the POVM density formalism, able to explicitly quantify the effect of timing jitter for a typical class of single-photon detectors. We apply our model to some common experimental situations.

In this work we design a specific simulation tool for quantum channels which is based on the use of a control system. This allows us to simulate an average quantum channel which is expressed in terms of an ensemble of channels, even when these channel-components are not jointly teleportation-covariant. This design is also extended to asymptotic simulations, continuous ensembles, and memory channels. As an application, we derive relative-entropy-of-entanglement upper bounds for private communication over various channels, from the amplitude damping channel to non-Gaussian mixtures of bosonic lossy channels. Among other results, we also establish the two-way quantum and private capacity of the so-called `dephrasure' channel.

We present a theoretical description of circuits consisting of weakly anharmonic qubits coupled to multimode cavities. We obtain a unitary transformation that diagonalizes the harmonic sector of the circuit. Weak anharmonicity does not alter the normal-mode basis, but it can modify the energy levels. We study two examples, a single transmon and two transmons coupled to a bus resonator, and determine the dressed frequencies and Kerr nonlinearities in closed form. Our results are valid for arbitrary frequency detuning and coupling, within and beyond the dispersive regime.

The goal of this work is to define a notion of a quantum neural network for classifying data, which exploits the low-energy spectrum of a local Hamiltonian. As a concrete application, we build a binary classifier, train it on some actual data, and then test its performance on a simple classification task. More specifically, we use Microsoft's quantum simulator, Liquid, to construct local Hamiltonians that can encode trained classifier functions in their ground space, and which can be probed by measuring the overlap with test states corresponding to the data to be classified. To obtain such a classifier Hamiltonian, we further propose a training scheme based on quantum annealing which is completely closed off from the environment and does not depend on external measurements until the very end, avoiding unnecessary decoherence during the annealing procedure. For a network of size n, the trained network can be stored as a list of O(n) coupling strengths. We address the question of which interactions are most suitable for a given classification task, and develop a qubit-saving optimization for the training procedure on a simulated annealing device. Furthermore, a small neural network for classifying colors as red vs. blue is trained, tested, and benchmarked against the annealing parameters.

We present new results on realtime alternating, private alternating, and quantum alternating automaton models. Firstly, we show that the emptiness problem for alternating one-counter automata on unary alphabets is undecidable. Then, we present two equivalent definitions of realtime private alternating finite automata (PAFAs). We show that the emptiness problem is undecidable for PAFAs. Furthermore, PAFAs can recognize some nonregular unary languages, including the unary squares language, which seems to be difficult even for some classical counter automata with two-way input. Regarding quantum finite automata (QFAs), we show that the emptiness problem is undecidable both for universal QFAs on general alphabets, and for alternating QFAs with two alternations on unary alphabets. On the other hand, the same problem is decidable for nondeterministic QFAs on general alphabets. We also show that the unary squares language is recognized by alternating QFAs with two alternations.

We present a simple family of Bell inequalities applicable to a scenario involving arbitrarily many parties, each of which performs two binary-outcome measurements. We show that these inequalities are members of the complete set of full-correlation Bell inequalities discovered by Werner-Wolf-Zukowski-Brukner. For scenarios involving a small number of parties, we further verify that these inequalities are facet-defining for the convex set of Bell-local correlations. Moreover, we show that the amount of quantum violation of these inequalities naturally manifests the extent to which the underlying system is genuinely many-body entangled. In other words, our Bell inequalities, when supplemented with the appropriate quantum bounds, naturally serve as device-independent witnesses for entanglement depth, allowing one to certify genuine k-partite entanglement in an arbitrary $n\ge k$-partite scenario without relying on any assumption about the measurements being performed or on the dimension of the underlying physical system. A brief comparison is made between our witnesses and those based on some other Bell inequalities, as well as on the quantum Fisher information. A family of witnesses for genuine k-partite nonlocality, applicable to an arbitrary $n\ge k$-partite scenario and based on our Bell inequalities, is also presented.
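For intuition, the classical (local) bound of any full-correlation Bell expression in this kind of scenario, two binary-outcome settings per party, can be computed by brute force over local deterministic strategies. A small illustrative sketch (the coefficient tensors used below, CHSH and the three-party Mermin expression, are standard examples, not the inequalities of this work):

```python
import itertools
import numpy as np

def local_bound(beta):
    """Classical bound of the full-correlation Bell expression
    sum_x beta[x1,...,xn] * E(x1,...,xn), maximised over local
    deterministic strategies a_i(x_i) in {-1,+1}, two settings per party.
    beta: n-dimensional array with one length-2 axis per party."""
    n = beta.ndim
    best = -float("inf")
    # each party's strategy: one +-1 output for setting 0 and one for setting 1
    for strat in itertools.product([(1, 1), (1, -1), (-1, 1), (-1, -1)],
                                   repeat=n):
        E = np.array(1.0)
        for a in strat:  # deterministic correlator tensor is an outer product
            E = np.multiply.outer(E, np.array(a, dtype=float))
        best = max(best, float((beta * E).sum()))
    return best
```

For CHSH ($n=2$, coefficients $[[1,1],[1,-1]]$) this returns the local bound 2, to be compared with the Tsirelson bound $2\sqrt{2}$; for the three-party Mermin expression it returns 2, against the quantum value 4.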

In this paper the Hartree equation is derived from the $N$-body Schr\"odinger equation in the mean-field limit, with convergence rate estimates that are uniform in the Planck constant $\hbar$. Specifically, we consider the two following cases: (a) T\"oplitz initial data and Lipschitz interaction forces, and (b) analytic initial data and interaction potential, over short time intervals independent of $\hbar$. The convergence rates in these two cases are $1/\sqrt{\log\log N}$ and $1/N$, respectively. The treatment of the second case is entirely self-contained, and all the constants appearing in the final estimate are explicit. It also provides a derivation of the Vlasov equation from the $N$-body classical dynamics using BBGKY hierarchies instead of empirical measures.
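For reference, the limiting Hartree equation in question, with interaction potential $V$, reads (in a standard form with unit mass; the paper's precise conventions may differ):

$$ i\hbar\,\partial_t\psi_t = -\tfrac{\hbar^2}{2}\Delta\psi_t + \big(V * |\psi_t|^2\big)\,\psi_t, $$

where $*$ denotes convolution in the space variable.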

One of the most striking features of quantum theory is the existence of entangled states, responsible for Einstein's so-called "spooky action at a distance". These states emerge from the mathematical formalism of quantum theory, but to date we do not have a clear idea of the physical principles that give rise to entanglement. Why does nature have entangled states? Would any theory superseding classical theory have entangled states, or is quantum theory special? One important feature of quantum theory is that it has a classical limit, recovering classical theory through the process of decoherence. We show that any theory with a classical limit must contain entangled states, thus establishing entanglement as an inevitable feature of any theory superseding classical theory.

We present an algorithm for manipulating quantum information via a sequence of projective measurements. We frame this manipulation in the language of stabilizer codes: a quantum computation approach in which errors are prevented and corrected in part by repeatedly measuring redundant degrees of freedom. We show how to construct a set of projective measurements which will map between two arbitrary stabilizer codes. We show that this process preserves all quantum information. It can be used to implement Clifford gates, braid extrinsic defects, or move between codes in which different operations are natural.

We propose using weak values for quantum processing between preselection and postselection. While a projector's weak value of 1 ensures that the associated process occurs with certainty, analogous to a probability of 1, a weak value of -1 negates the process completely. These mutually opposite effects are confirmed without the conventional `weak' condition. In addition, the quantum process is not limited to unitary evolution; in particular, we consider photon loss and experimentally demonstrate its negation by using the negative weak value of -1 against the positive weak value of 1.

We consider the error arising from the approximation of an $N$-particle dynamics with its description in terms of a one-particle kinetic equation. We estimate the distance between the $j$-particle marginal of the system and the factorized state, obtained in a mean-field limit as $N \rightarrow \infty$. Our analysis relies on the evolution equation for the "correlation error" rather than on the usual BBGKY hierarchy. The rate of convergence is shown to be $O(j^2/N)$ in any bounded interval of time ("size of chaos"), as expected from heuristic arguments. Our formalism applies to an abstract hierarchical mean-field model with bounded collision operator and a large class of initial data, covering (a) stochastic jump processes converging to the homogeneous Boltzmann and the Povzner equation and (b) quantum systems giving rise to the Hartree equation.

We introduce a novel method for the quantum emulation of a classical reversible cellular automaton. Applying this method to a chaotic cellular automaton yields a quantum many-body system that thermalizes even though all of its energy eigenstates and eigenvalues are solvable. These explicit solutions allow us to test the validity of several scenarios of thermalization for this system. We find that two leading scenarios, the eigenstate thermalization hypothesis scenario and the large effective dimension scenario, do not explain thermalization in this model.
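For readers unfamiliar with reversible cellular automata, a standard way to make any rule reversible is the second-order construction, where the next configuration is the rule applied to the current one, XORed with the previous one. The sketch below is only a generic illustration of this reversibility trick (the specific rule and the paper's automaton are assumptions, not taken from the abstract):

```python
import random

def step(cur, prev):
    """One step of a second-order reversible CA on a ring of bits:
    s_{t+1}[i] = rule(s_t near i) XOR s_{t-1}[i].
    The same map, applied to the time-reversed pair, runs backwards."""
    n = len(cur)
    nxt = [(cur[(i - 1) % n] ^ cur[i] ^ cur[(i + 1) % n]) ^ prev[i]
           for i in range(n)]
    return nxt, cur

random.seed(1)
s0 = [random.randint(0, 1) for _ in range(16)]
s1 = [random.randint(0, 1) for _ in range(16)]
a, b = s1, s0
for _ in range(50):       # evolve forward 50 steps
    a, b = step(a, b)
x, y = b, a               # swap the pair: time reversal
for _ in range(50):       # the same rule now undoes the evolution
    x, y = step(x, y)
assert (x, y) == (s0, s1)  # initial configurations recovered exactly
```

Reversibility is what allows such classical dynamics to be lifted to unitary quantum evolution in the first place.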

We analyze the production of entropy along non-equilibrium processes in quantum systems coupled to generic environments. First, we show that the entropy production due to final measurements and the loss of correlations obeys a fluctuation theorem in detailed and integral forms. Second, we discuss the decomposition of the entropy production into two positive contributions, adiabatic and non-adiabatic, based on the existence of invariant states of the local dynamics. Fluctuation theorems for both contributions hold only for evolutions verifying a specific condition of quantum origin. We illustrate our results with three relevant examples of quantum thermodynamic processes far from equilibrium.

Studying general quantum many-body systems is one of the major challenges in modern physics, because it requires an amount of computational resources that scales exponentially with the size of the system. Simulating the evolution of a state, or even storing its description, rapidly becomes intractable for exact classical algorithms. Recently, machine learning techniques, in the form of restricted Boltzmann machines, have been proposed as a way to efficiently represent certain quantum states, with applications in state tomography and ground-state estimation. Here, we introduce a new representation of states based on variational autoencoders, a type of generative model in the form of a neural network. We probe the power of this representation by encoding probability distributions associated with states from different classes. Our simulations show that deep networks give a better representation for states that are hard to sample from, while providing no benefit for random states. This suggests that the probability distributions associated with hard quantum states might have a compositional structure that can be exploited by layered neural networks. Specifically, we consider the learnability of a class of quantum states introduced by Fefferman and Umans. Such states are provably hard for classical computers to sample from, but not for quantum ones, under plausible computational complexity assumptions. The good level of compression achieved for hard states suggests these methods may be suitable for characterising states of the size expected in first-generation quantum hardware.

We detail techniques to optimise high-level classical simulations of Shor's quantum factoring algorithm. Chief among these is to examine the entangling properties of the circuit and to effectively map it across the one-dimensional structure of a matrix product state. Compared to previous approaches whose space requirements depend on $r$, the solution to the underlying order-finding problem of Shor's algorithm, our approach depends on its factors. We performed a matrix product state simulation of a 60-qubit instance of Shor's algorithm that would otherwise be infeasible to complete without an optimised entanglement mapping.
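The entanglement mapping rests on the standard matrix-product-state compression by successive singular value decompositions, where the retained bond dimension across each cut bounds the bipartite entanglement there. A generic sketch (the function name and truncation rule are illustrative, not taken from the paper):

```python
import numpy as np

def to_mps(psi, n, chi_max=None, tol=1e-12):
    """Decompose a 2^n state vector into matrix-product-state tensors
    by sweeping SVDs left to right, discarding singular values below
    tol (relative) and, optionally, beyond chi_max.  The returned bond
    dimensions quantify entanglement across each cut."""
    tensors, bonds = [], []
    m = psi.reshape(1, -1)
    for _ in range(n - 1):
        chi_l = m.shape[0]
        m = m.reshape(chi_l * 2, -1)          # split off one physical site
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = s > tol * s[0]                 # relative truncation
        if chi_max is not None:
            keep[chi_max:] = False
        u, s, vh = u[:, keep], s[keep], vh[keep]
        tensors.append(u.reshape(chi_l, 2, -1))
        bonds.append(len(s))
        m = s[:, None] * vh                   # carry the rest rightwards
    tensors.append(m.reshape(m.shape[0], 2, 1))
    return tensors, bonds
```

For example, the three-qubit GHZ state requires bond dimension 2 across both cuts, while a product state compresses to bond dimension 1 everywhere, which is what makes weakly entangled circuit cuts cheap to simulate.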

Protected zero modes in quantum physics traditionally arise in the context of ground states of many-body Hamiltonians. Here we study the case where zero modes exist in the center of a reflection-symmetric many-body spectrum, giving rise to the notion of a protected "infinite-temperature" degeneracy. For a certain class of nonintegrable spin chains, we show that the number of zero modes is determined by a chiral index that grows exponentially with system size. We propose a dynamical protocol, feasible in ongoing experiments in Rydberg atom quantum simulators, to detect these many-body zero modes and their protecting spectral reflection symmetry. Finally, we consider whether the zero energy states obey the eigenstate thermalization hypothesis, as is expected of states in the middle of the many-body spectrum. We find intriguing differences in their eigenstate properties relative to those of nearby nonzero-energy eigenstates at finite system sizes.

We study the population transfer between resonance states for a time-dependent loop around exceptional points in the spectra of the hydrogen atom in parallel electric and magnetic fields. Exceptional points are well suited for population transfer mechanisms, since a closed loop around them in parameter space permutes the eigenstates. We address the question of how the shape and duration of the dynamical parameter loop affect the transferred population, in order to optimize the latter. Since computing the full quantum dynamics of the expansion coefficients is time-consuming, we furthermore present an approximation method based on a $2\times 2$ matrix.
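The eigenstate exchange around an exceptional point can already be seen in a toy $2\times 2$ non-Hermitian matrix (an illustrative model, not the paper's): $H(\lambda) = \begin{pmatrix} 1 & \lambda \\ \lambda & -1 \end{pmatrix}$ has eigenvalues $\pm\sqrt{1+\lambda^2}$ and exceptional points at $\lambda = \pm i$; tracking the eigenvalues continuously along a closed loop around $\lambda = i$ swaps the two branches:

```python
import numpy as np

def track_loop(r=0.3, steps=400):
    """Follow the two eigenvalues of H(l) = [[1, l], [l, -1]] along the
    closed loop l(t) = i + r*exp(i t) around the exceptional point at
    l = i, matching eigenvalues step to step by proximity.  After one
    full loop the two branches come back exchanged."""
    thetas = np.linspace(0.0, 2.0 * np.pi, steps)
    lam = 1j + r * np.exp(1j * thetas)
    prev = np.linalg.eigvals(np.array([[1.0, lam[0]], [lam[0], -1.0]]))
    start = prev.copy()
    for l in lam[1:]:
        ev = np.linalg.eigvals(np.array([[1.0, l], [l, -1.0]]))
        # continuity: choose the pairing that minimises total displacement
        if (abs(ev[0] - prev[0]) + abs(ev[1] - prev[1])
                > abs(ev[1] - prev[0]) + abs(ev[0] - prev[1])):
            ev = ev[::-1]
        prev = ev
    return start, prev
```

The swap occurs because $1+\lambda^2$ winds once around zero along the loop, so the square root changes branch; a loop that does not enclose $\lambda = i$ would return each eigenvalue to itself.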

Quantum illumination is a technique for detecting the presence of a target in a noisy environment by means of a quantum probe. We prove that the two-mode squeezed vacuum state is the optimal probe for quantum illumination in the scenario of asymmetric discrimination, where the goal is to minimize the decay rate of the probability of a false positive for a given probability of a false negative. Quantum illumination with two-mode squeezed vacuum states offers a 6 dB advantage in the error probability exponent compared to illumination with coherent states. Whether more advanced quantum illumination strategies might offer further improvements had been a longstanding open question. Our fundamental result proves that nothing can be gained by considering more exotic quantum states, such as multi-mode entangled states. Our proof is based on a new fundamental entropic inequality for noisy quantum Gaussian attenuators. We also prove that, without access to a quantum memory, the optimal probes for quantum illumination are the coherent states.
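The quoted 6 dB corresponds to a factor of 4 in the error-probability exponent. In the standard low-brightness regime ($N_S \ll 1 \ll N_B$, with reflectivity $\kappa$; the asymptotic formulas below are the well-known results from the quantum illumination literature, not derived in this abstract) the comparison is simply:

```python
import math

def error_exponents(kappa, ns, nb):
    """Known low-brightness asymptotics (ns << 1 << nb) for the per-mode
    error-probability exponent: ~ kappa*ns/(4*nb) for coherent-state
    probes versus ~ kappa*ns/nb for two-mode squeezed vacuum probes."""
    coherent = kappa * ns / (4.0 * nb)
    tmsv = kappa * ns / nb
    return coherent, tmsv

c, t = error_exponents(kappa=0.01, ns=0.01, nb=20.0)
advantage_db = 10.0 * math.log10(t / c)   # 10*log10(4) ~ 6.02 dB
```

The ratio is exactly 4 independently of the parameter values in this regime, which is the 6 dB advantage the abstract refers to.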

We discuss a protocol, based on quenching a purified quantum system, that allows one to capture bulk spectral features. It uses an infinite-temperature initial state and an interferometric strategy to access the Loschmidt amplitude, from which the spectral features are retrieved via Fourier transform, providing a coarse-grained approximation at finite times. It involves techniques available in current experimental setups for quantum simulation, at least for small systems. We illustrate possible applications in testing the eigenstate thermalization hypothesis and the physics of many-body localization.