Fault tolerance is a prerequisite for scalable quantum computing. Architectures based on 2D topological codes are effective for near-term implementations of fault tolerance. To obtain high performance with these architectures, we require a decoder which can adapt to the wide variety of error models present in experiments. The typical approach to the problem of decoding the surface code is to reduce it to minimum-weight perfect matching in a way that provides a suboptimal threshold error rate, and is specialized to correct a specific error model. Recently, optimal threshold error rates for a variety of error models have been obtained by methods which do not use minimum-weight perfect matching, showing that such thresholds can be achieved in polynomial time. It is an open question whether these results can also be achieved by minimum-weight perfect matching. In this work, we use belief propagation and a novel algorithm for producing edge weights to increase the utility of minimum-weight perfect matching for decoding surface codes. This allows us to correct depolarizing errors using the rotated surface code, obtaining a threshold of $17.76 \pm 0.02 \%$. This is larger than the threshold achieved by previous matching-based decoders ($14.88 \pm 0.02 \%$), though still below the known upper bound of $\sim 18.9 \%$.
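The matching step at the heart of such decoders can be illustrated with a toy sketch: given syndrome "defects" on a lattice, pair them up so that the total correction weight is minimal. The example below is a minimal sketch with hypothetical names; it uses a brute-force search over pairings (real decoders use the polynomial-time Blossom algorithm) and uniform Manhattan-distance edge weights (where the paper's approach would instead supply belief-propagation-derived, error-model-dependent weights).

```python
from itertools import islice

def pairings(nodes):
    """Yield every way to partition an even-sized list into pairs."""
    if not nodes:
        yield []
        return
    a = nodes[0]
    for i in range(1, len(nodes)):
        rest = nodes[1:i] + nodes[i + 1:]
        for p in pairings(rest):
            yield [(a, nodes[i])] + p

def mwpm(defects, weight):
    """Brute-force minimum-weight perfect matching (fine for few defects)."""
    return min(pairings(list(defects)),
               key=lambda m: sum(weight(u, v) for u, v in m))

# Defects (flipped stabilizers) on a 2D lattice; weight = Manhattan distance.
defects = [(0, 0), (0, 3), (2, 0), (2, 1)]
dist = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
match = mwpm(defects, dist)
print(match)  # pairs defects so the total correction length is minimal
```

With these four defects the optimal pairing connects (0, 0) with (0, 3) and (2, 0) with (2, 1), for a total weight of 4.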

Recent studies of out-of-time-ordered thermal correlation functions (OTOCs) in holographic systems and in solvable models such as the Sachdev-Ye-Kitaev (SYK) model have yielded new insights into manifestations of many-body chaos. So far, chaotic behavior has been obtained through explicit calculations in specific models. In this paper we propose a unified description of the exponential growth and ballistic butterfly spreading of OTOCs across different systems, using a newly formulated "quantum hydrodynamics," which is valid at finite $\hbar$ and to all orders in derivatives. The scrambling of a generic few-body operator in a chaotic system is described as the build-up of a "hydrodynamic cloud," and the exponential growth of the cloud arises from a shift symmetry of the hydrodynamic action. The shift symmetry shields correlation functions of the energy density and flux, as well as time-ordered correlation functions of generic operators, from exponential growth, while leading to chaotic behavior in OTOCs. The theory also predicts an interesting phenomenon: the skipping of a pole at special values of complex frequency and momentum in two-point functions of energy density and flux. This pole-skipping phenomenon may be considered a "smoking gun" for the hydrodynamic origin of the chaotic mode. We also discuss the possibility that such a hydrodynamic description is a hallmark of maximally chaotic systems.

Spin squeezing is a form of entanglement that can improve the stability of quantum sensors operating with multiple particles, by inducing inter-particle correlations that redistribute the quantum projection noise. Previous analyses of potential metrological gain when using spin squeezing were performed on theoretically ideal states, without incorporating experimental imperfections or inherent limitations which result in non-unitary quantum state evolution. Here, we show that potential gains in clock stability are substantially reduced when the spin squeezing is non-unitary, and derive analytic formulas for the clock performance as a function of squeezing, excess spin noise, and interferometer contrast. Our results highlight the importance of creating and employing nearly pure entangled states for improving atomic clocks.
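For reference, metrological gain from spin squeezing is commonly quantified by the Wineland squeezing parameter (standard notation, not taken from this abstract); the dependence on excess spin noise and interferometer contrast $\mathcal{C}$ enters as

```latex
\xi_R^2 \;=\; \frac{N\,(\Delta S_\perp)^2}{\bigl|\langle \hat{\mathbf S}\rangle\bigr|^2}
\;=\; \frac{4\,(\Delta S_\perp)^2}{N\,\mathcal{C}^2},
\qquad \bigl|\langle \hat{\mathbf S}\rangle\bigr| = \tfrac{N}{2}\,\mathcal{C},
```

where $N$ is the atom number and $(\Delta S_\perp)^2$ the spin variance along the measurement axis; the clock instability improves over the standard quantum limit by the factor $\xi_R < 1$, so both excess noise (larger $\Delta S_\perp$) and contrast loss (smaller $\mathcal{C}$) erode the gain.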

It is known that secondary non-stoquastic drivers, accompanying the more typical transverse-field driver, may offer speed-ups or catalysis in some models of adiabatic quantum computation. Their combined intent is to raze potential barriers to zero during adiabatic evolution from a false vacuum to a true minimum; first-order phase transitions are softened into second-order transitions. We move beyond mean-field analysis to a fully quantum model of a spin ensemble undergoing adiabatic evolution, in which the spins are mapped to a variable-mass particle in a continuous one-dimensional potential. We demonstrate that the necessary criterion for enhanced mobility, or `speed-up', across potential barriers is in fact a quantum form of the Rayleigh criterion. Quantum catalysis is exhibited in models where it was previously thought impossible, namely when barriers cannot be eliminated. For the $3$-spin model with a secondary anti-ferromagnetic driver, the catalysed time complexity scales between linearly and quadratically with the number of qubits. As a corollary, we identify a useful resonance criterion for the quantum phase transition that differs from the classical one but converges on it in the thermodynamic limit.

A more general canonical ensemble, which gives rise to generalized or $q$-deformed statistics, can represent realistic scenarios better than the ideal one when proper parameter sets are involved. We study Planck's law of blackbody radiation and the Wien and Rayleigh-Jeans radiation formulae from the point of view of $q$-deformed statistics. We find that the blackbody energy spectrum at a given temperature $T$ differs for different values of $q$: the location of the peak $\nu_m$ of the energy distribution $u_\nu$ shifts towards higher $\nu$ for higher $q$. From the $q$-deformed Wien's displacement law, we find that $\lambda_m T$ varies from $0.0029~\rm{m~K}$ to $0.0017~\rm{m~K}$ as the deformation parameter $q$ varies from $1.0$ (undeformed) to $1.1$ (deformed).
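For comparison, the undeformed ($q = 1$) values quoted above follow from the standard Planck spectrum (textbook relations, reproduced here for orientation):

```latex
u_\nu \;\propto\; \frac{\nu^3}{e^{h\nu/k_B T}-1},
\qquad
\frac{h\nu_m}{k_B T} = x \approx 2.821
\quad\bigl(x = 3\,(1-e^{-x})\bigr),
```

```latex
\lambda_m T \;=\; \frac{hc}{x'\,k_B} \;\approx\; 2.898\times 10^{-3}\ \mathrm{m\,K},
\qquad
x' = 5\,(1-e^{-x'}) \approx 4.965,
```

which reproduces the $\lambda_m T \approx 0.0029~\rm{m~K}$ quoted for $q = 1.0$.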

We examine the performance of the single-mode GKP code and its concatenation with the toric code for a noise model of Gaussian shifts, or displacement errors. We show how one can optimize the tracking of errors in repeated noisy error correction for the GKP code. We do this by examining the maximum-likelihood problem for this setting and its mapping onto a 1D Euclidean path-integral modeling a particle in a random cosine potential. We demonstrate the efficiency of a minimum-energy decoding strategy as a proxy for the path integral evaluation. In the second part of this paper, we analyze and numerically assess the concatenation of the GKP code with the toric code. When toric code measurements and GKP error correction measurements are perfect, we find that by using GKP error information the toric code threshold improves from $10\%$ to $14\%$. When only the GKP error correction measurements are perfect we observe a threshold at $6\%$. In the more realistic setting when all error information is noisy, we show how to represent the maximum likelihood decoding problem for the toric-GKP code as a 3D compact QED model in the presence of a quenched random gauge field, an extension of the random-plaquette gauge model for the toric code. We present a new decoder for this problem which shows the existence of a noise threshold at shift-error standard deviation $\sigma_0 \approx 0.243$ for toric code measurements, data errors and GKP ancilla errors. If the errors only come from having imperfect GKP states, this corresponds to states with just 4 photons or more. Our last result is a no-go result for linear oscillator codes, encoding oscillators into oscillators. For the Gaussian displacement error model, we prove that encoding corresponds to squeezing the shift errors. This shows that linear oscillator codes are useless for quantum information protection against Gaussian shift errors.
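As a minimal illustration of the shift-error correction underlying this setting: the ideal GKP code measures each quadrature modulo $\sqrt{\pi}$, and a minimum-energy decoder applies the smallest consistent correction, so a logical error occurs exactly when the true shift lies closer to an odd multiple of $\sqrt{\pi}$ than to an even one. The Monte Carlo sketch below (hypothetical function names; a single round on a single quadrature, nothing like the paper's path-integral treatment of repeated noisy rounds) estimates the resulting logical error rate for Gaussian shifts of standard deviation $\sigma$.

```python
import math
import random

SQRT_PI = math.sqrt(math.pi)

def logical_flip(u):
    """Minimum-energy (closest-point) decoding of one GKP quadrature.
    Returns True if undoing the residual shift applies a logical flip,
    i.e. the shift u is nearest to an odd multiple of sqrt(pi)."""
    return round(u / SQRT_PI) % 2 == 1

def logical_error_rate(sigma, trials=100_000, seed=1):
    """Monte Carlo estimate for i.i.d. Gaussian shift errors."""
    random.seed(seed)
    fails = sum(logical_flip(random.gauss(0.0, sigma)) for _ in range(trials))
    return fails / trials

# Shifts of magnitude below sqrt(pi)/2 ≈ 0.886 are always corrected, so
# small sigma gives a tiny logical error rate that grows quickly with sigma.
print(logical_error_rate(0.2))  # close to zero
print(logical_error_rate(0.6))  # noticeably nonzero
```

The steep dependence on $\sigma$ is why the threshold value $\sigma_0 \approx 0.243$ found in the paper is operationally meaningful.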

In this paper, we briefly discuss the methodology for simulating a quantum computer which performs Shor's algorithm on a 7-qubit system to factorise 15. Using this simulation and the overlooked quantum brachistochrone method, we devised a Monte Carlo algorithm to calculate the expected time a theoretical quantum computer could perform this calculation under the same energy conditions as current working quantum computers. We found that, experimentally, a nuclear magnetic resonance quantum computer would take $1.59 \pm 0.04$ s to perform our simulated computation, whereas the expected optimal time under the same energy conditions is $0.955 \pm 0.004$ ms. Moreover, we found that the expected time is inversely proportional to the energy variance of our qubit states (as expected). Finally, we propose this theoretical method for analysing the time-efficiency of future quantum computing experiments.
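For context, the quantum part of Shor's algorithm finds the multiplicative order $r$ of a base $a$ modulo $N$; the factors then follow classically. The sketch below shows that classical post-processing for $N = 15$, with the order-finding step brute-forced in place of the 7-qubit quantum circuit (function names are illustrative, not from the paper).

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r ≡ 1 (mod N); this is the step a quantum
    computer performs via period finding, brute-forced classically here."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing of Shor's algorithm for base a."""
    r = order(a, N)
    if r % 2:
        return None          # odd order: retry with another base
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None          # trivial square root: retry with another base
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))  # (3, 5): a = 7 has order 4 mod 15
```

Here $7^2 = 49 \equiv 4 \pmod{15}$, so $\gcd(3, 15) = 3$ and $\gcd(5, 15) = 5$ recover the factors.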

No consensus on the universal validity of any particular interpretation of the measurement problem has been reached so far. Since the problem involves time-independent, zero-entropy pure quantum states, it must be considered over an infinite time interval, and it therefore manifests only within experimental setups involving both pure quantum states and non-zero-entropy dissipative structures, of which living organisms seem to be the only ones capable of performing measurements. On the other hand, any information that can be conveyed is finite and can thus be compared within an observer-independent theoretical framework. But finite information also obeys the second law of thermodynamics and should therefore relate to spatially and temporally distinguishable phenomena above the threshold set by Heisenberg's uncertainty principle. These factors have, to some extent, been neglected so far.

We study the current-carrying steady-state of a transverse field Ising chain coupled to magnetic thermal reservoirs and obtain the non-equilibrium phase diagram as a function of the magnetization potential of the reservoirs. Upon increasing the magnetization bias we observe a discontinuous jump of the magnetic order parameter that coincides with a divergence of the correlation length. For steady-states with a non-vanishing conductance, the entanglement entropy at zero temperature displays a bias dependent logarithmic correction that differs from the well-known equilibrium case. Our findings show that out-of-equilibrium conditions allow for novel critical phenomena not possible at equilibrium.

Gravity generated by large masses has been observed using a variety of probes, from atomic interferometers to torsion balances. However, gravitational coupling between small masses has never been observed. Here, we demonstrate sensitive displacement sensing of the Brownian motion of an optically trapped 7 mg pendulum whose natural quality factor is increased to $10^8$ through dissipation dilution. The sensitivity at an integration time of one second corresponds to the displacement generated by the gravitational coupling between the probe and a 100 mg mass separated by one millimeter, whose position is modulated at the pendulum's mechanical resonance frequency. The development of such a sensitive displacement sensor using a mg-scale device will pave the way for a new class of experiments in which gravitational coupling between small masses in quantum regimes can be achieved.

Scalable and fault-tolerant quantum computation will require error correction. This will demand constant measurement of many-qubit observables, implemented using a vast number of CNOT gates. Indeed, practically all operations performed by a fault-tolerant device will be these CNOTs, or equivalent two-qubit controlled operations. It is therefore important to devise benchmarks for these gates that explicitly quantify their effectiveness at this task. Here we develop such benchmarks, and demonstrate their use by applying them to a range of differently implemented controlled gates and a particular quantum error correcting code. Specifically, we consider spin qubits confined to quantum dots that are coupled either directly or via floating gates to implement the minimal 17-qubit instance of the surface code. Our results show that small differences in the gate fidelity can lead to large differences in the performance of the surface code. This shows that gate fidelity is not, in general, a good predictor of code performance.

We explore repulsive Fermi polarons in one-dimensional, harmonically trapped few-body mixtures of ultracold atoms, using a $^6$Li-$^{40}$K mixture as a case example. A characterization of these quasiparticle-like states, whose appearance is signalled in the impurity's radiofrequency spectrum, is achieved by extracting their lifetimes and residues. Increasing the number of $^{40}$K impurities leads to the occurrence of both single and multiple polarons that are entangled with their environment. An interaction-dependent broadening of the spectral lines is observed, suggesting the presence of induced interactions. We propose the relative distance between the impurities as an adequate measure for detecting induced interactions independently of the specifics of the atomic mixture, a result that we showcase by also considering a $^6$Li-$^{173}$Yb system. This distance is further shown to probe the generation of entanglement independently of the size of the ($^6$Li) bath and of the atomic species. The generation of entanglement and the importance of induced interactions are revealed, with an emphasis on the regime of intermediate interaction strengths.

The laws of quantum mechanics allow one to perform measurements whose precision surpasses that predicted by classical parameter estimation theory. That is, the precision bound imposed by the central limit theorem in the estimation of a broad class of parameters, such as atomic frequencies in spectroscopy or external magnetic fields in magnetometry, can be overcome when using quantum probes. Environmental noise, however, generally alters the ultimate precision that can be achieved in the estimation of an unknown parameter. This tutorial reviews recent theoretical work aimed at obtaining general precision bounds in the presence of an environment. We adopt a complementary approach: we first analyze the problem within the general framework of quantum dynamical maps, and then relate this abstract formalism to a microscopic description of the system's dissipative time evolution. We show that although some forms of noise do render quantum systems standard-quantum-limited, precision beyond classical bounds is still possible in the presence of other forms of local environmental fluctuations.
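The classical bound referred to here is the standard quantum limit: with $N$ uncorrelated probes, the phase uncertainty shrinks only as $1/\sqrt{N}$. The small Monte Carlo sketch below (a hypothetical Ramsey-style setup, not taken from the tutorial) makes that scaling visible by comparing the spread of phase estimates for two ensemble sizes.

```python
import math
import random

def phase_uncertainty(phi, n_atoms, trials=2000, seed=0):
    """Std. deviation of Ramsey-style phase estimates with n uncorrelated
    atoms, each detected excited with probability (1 + sin(phi)) / 2."""
    random.seed(seed)
    p = (1 + math.sin(phi)) / 2
    ests = []
    for _ in range(trials):
        k = sum(random.random() < p for _ in range(n_atoms))
        f = min(max(2 * k / n_atoms - 1, -1.0), 1.0)  # clamp for asin
        ests.append(math.asin(f))
    mean = sum(ests) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / trials)

# Standard quantum limit: uncertainty shrinks like 1/sqrt(N), so
# quadrupling the atom number halves the projection-noise-limited spread.
ratio = phase_uncertainty(0.1, 100) / phase_uncertainty(0.1, 400)
print(ratio)  # ≈ 2, i.e. sqrt(400 / 100)
```

Entangled probes can in principle beat this, approaching the $1/N$ Heisenberg scaling; the tutorial's subject is how environmental noise modifies both limits.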

We propose how to create and manipulate one-way nonclassical light via photon blockade in rotating nonlinear devices. We refer to this effect as nonreciprocal photon blockade (PB). Specifically, we show that in a spinning Kerr resonator, PB happens when the resonator is driven in one direction but not the other. This occurs because of the Fizeau drag, leading to a full split of the resonance frequencies of the counter-circulating modes. Different types of purely quantum correlations, such as single- and two-photon blockades, can emerge in different directions in a well-controlled manner, and the transition from PB to photon-induced tunneling is revealed as well. Our work opens up a new route to achieving quantum nonreciprocal devices, which are crucial elements in chiral quantum technologies and topological photonics.
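The photon blockade discussed here is conventionally diagnosed by the equal-time second-order correlation function (standard definition, not specific to this paper):

```latex
g^{(2)}(0) \;=\; \frac{\langle \hat a^\dagger \hat a^\dagger \hat a \hat a\rangle}
{\langle \hat a^\dagger \hat a\rangle^{2}},
```

where $\hat a$ is the cavity mode operator. $g^{(2)}(0) < 1$ signals antibunched, single-photon output (blockade), $g^{(2)}(0) > 1$ signals photon-induced tunneling; in a Kerr resonator, blockade requires the nonlinearity per photon to exceed the cavity linewidth.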

We analyze a quantum version of the weak equivalence principle, in which we compare the response of a static particle detector crossed by an accelerated cavity with the response of an accelerated detector crossing a static cavity in (1+1)-dimensional flat spacetime. We show, for both massive and massless scalar fields, that the non-locality of the field is enough for the detector to distinguish the two scenarios. We find this result holds for vacuum and excited field states of different kinds and we clarify the role of field mass in this setup.

Floquet topological matter has emerged as an exciting platform for exploring rich physics and game-changing applications of topological phases. As a remarkable and recently discovered feature of Floquet symmetry-protected topological (SPT) phases, a simple periodically driven system can in principle host an arbitrary number of topologically protected zero edge modes and pi edge modes, with Majorana zero modes and Majorana pi modes as examples protected by particle-hole symmetry. This work advocates a new route to holonomic quantum computation that exploits the co-existence of many Floquet SPT edge modes, all of which acquire trivial dynamical phases during a computation protocol. As compelling evidence supporting this ambitious goal, three pairs of Majorana edge modes, hosted by a periodically driven one-dimensional (1D) superconducting superlattice, are shown to suffice to encode two logical qubits, realize quantum gate operations, and execute two simple quantum algorithms through adiabatic lattice deformation. Compared with earlier studies of quantum computation based on Majorana zero modes of topological quantum wires, significant resource savings are now made possible by the use of Floquet SPT phases. We hope this paper motivates a series of future studies on the potential of Floquet topological matter in quantum computation.

A multi-slit interference experiment with which-way detectors, in the presence of environment-induced decoherence, is theoretically analyzed. The effect of the environment is modeled via a coupling to a bath of harmonic oscillators. Through an exact analysis, an expression for $\mathcal{C}$, a recently introduced measure of coherence, is obtained for the particle at the detecting screen as a function of the parameters of the environment. It is argued that the effect of decoherence can be quantified using the measured coherence value, which lies between zero and one. For the specific case of two slits, it is shown that the decoherence time can be obtained from the measured value of the coherence $\mathcal{C}$, thus providing a novel way to quantify the effect of decoherence via direct measurement of quantum coherence. This would be of significant value in the many current studies that seek to exploit quantum superpositions for quantum information applications and scalable quantum computation.
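For orientation, a normalized $N$-slit coherence measure of this kind takes the form (one common convention; the paper's precise definition may differ):

```latex
\mathcal{C} \;=\; \frac{1}{N-1}\sum_{i\neq j}\bigl|\rho_{ij}\bigr|,
\qquad 0 \le \mathcal{C} \le 1,
```

where $\rho_{ij}$ are the off-diagonal elements of the particle's density matrix in the slit basis. For $N = 2$ with equal slit populations, $\mathcal{C} = 2|\rho_{12}|$ coincides with the interference visibility, which is why a two-slit measurement of $\mathcal{C}$ can be inverted to extract the decoherence time.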

We study the quantum nature of non-Bunch-Davies states in de Sitter space by evaluating the CHSH inequality for a localized two-atom system. We show that quantum nonlocality can be generated through the Markovian evolution of the two atoms, witnessed by a violation of the CHSH inequality in their final equilibrium state. We find that the upper bound of the inequality violation is determined by the choice of de Sitter-invariant vacuum sector. In particular, with growing Gibbons-Hawking temperature, the CHSH bound degrades monotonically for the Bunch-Davies vacuum sector. Due to the intrinsic correlations of non-Bunch-Davies vacua, we find that the related violation of the inequality can, however, drastically increase after a certain turning point and may persist for arbitrarily large environment decoherence. This implies that the CHSH inequality is useful for classifying the initial quantum state of the Universe. Finally, we clarify that the witnessed intrinsic correlations of non-Bunch-Davies vacua can be utilized for quantum information applications, e.g., surpassing the Heisenberg uncertainty bound of quantum measurement in de Sitter space.

Author(s): Dominik Lips, Artem Ryabov, and Philipp Maass

We study the driven Brownian motion of hard rods in a one-dimensional cosine potential with a large amplitude compared to the thermal energy. In a closed system, we find surprising features of the steady-state current in dependence of the particle density. The form of the current-density relation ch...

[Phys. Rev. Lett. 121, 160601] Published Tue Oct 16, 2018

Author(s): J. L. Rubio, D. Viscor, J. Mompart, and V. Ahufinger

In this paper, we propose a method to create an atomic frequency comb (AFC) in hot atomic vapors using the piecewise adiabatic passage (PAP) technique. Due to the Doppler effect, the trains of pulses used for PAP give rise to a velocity-dependent transfer of the atomic population from the initial st...

[Phys. Rev. A 98, 043834] Published Tue Oct 16, 2018