We compare two different implementations of fault-tolerant entangling gates on logical qubits. In one instance, a twelve-qubit trapped-ion quantum computer is used to implement a non-transversal logical CNOT gate between two five-qubit codes. The operation is evaluated with varying degrees of fault tolerance, which are provided by including quantum error correction circuit primitives known as flagging and pieceable fault tolerance. In the second instance, a twenty-qubit trapped-ion quantum computer is used to implement a transversal logical CNOT gate on two [[7,1,3]] color codes. The two codes were implemented on different but similar devices, and in both instances, all of the quantum error correction primitives, including the determination of corrections via decoding, are implemented at runtime using a classical compute environment that is tightly integrated with the quantum processor. For different combinations of the primitives, logical state fidelity measurements are made after applying the gate to different input states, providing bounds on the process fidelity. We find the highest-fidelity operations with the color code, with the fault-tolerant SPAM operation achieving fidelities of 0.99939(15) and 0.99959(13) when preparing eigenstates of the logical X and Z operators, higher than the average physical-qubit SPAM fidelities of 0.9968(2) and 0.9970(1) for the physical X and Z bases, respectively. When combined with a logical transversal CNOT gate, we find the color code to perform the sequence--state preparation, CNOT, measure out--with an average fidelity bounded by [0.9957,0.9963]. The logical fidelity bounds are higher than the analogous physical-level fidelity bounds, which we find to be [0.9850,0.9903], reflecting multiple physical noise sources such as SPAM errors for two qubits, several single-qubit gates, a two-qubit gate and some amount of memory error.

We propose a new protocol for preparing spin-squeezed states in controllable atomic, molecular, and optical systems, with particular relevance to emerging optical clock platforms compatible with Rydberg interactions. By combining a short-ranged, soft-core potential with an external drive, we can transform naturally emerging Ising interactions into an XX spin model while opening a many-body gap. The gap helps maintain the system within a collective manifold of states where metrologically useful spin squeezing can be generated at a level comparable to that generated in systems with genuine all-to-all interactions. We examine the robustness of our protocol to experimentally relevant decoherence and show favorable performance over typical protocols lacking gap protection.

We propose a new direction in quantum simulation that uses multilevel atoms in an optical cavity as a toolbox to engineer new types of bosonic models featuring correlated hopping processes in a synthetic ladder spanned by atomic ground states. The underlying mechanisms responsible for correlated hopping are collective cavity-mediated interactions that dress a manifold of excited levels in the far detuned limit. By weakly coupling the ground state levels to these dressed states using two laser drives with appropriate detunings, one can engineer correlated hopping processes while suppressing undesired single-particle and collective shifts of the ground state levels. We discuss the rich many-body dynamics that can be realized in the synthetic ladder including pair production processes, chiral transport and light-cone correlation spreading. The latter illustrates that an effective notion of locality can be engineered in a system with fully collective interactions.

In this paper, a semiquantum secret sharing (SQSS) protocol based on x-type states is proposed, which ensures that the two classical communicants can extract the shared secret key of a quantum communicant only by cooperating. Detailed security analysis shows that this protocol can resist both participant attacks and outside attacks. The protocol has several merits: (1) it only requires one kind of quantum entangled state as the initial quantum resource; (2) it does not employ quantum entanglement swapping or unitary operations; and (3) it does not require private keys to be shared among the different participants beforehand.

Quantum mechanics allows processes to be superposed, leading to a genuinely quantum lack of causal structure. For example, the process known as the quantum switch applies two operations ${\cal A}$ and ${\cal B}$ in a superposition of the two possible orders, ${\cal A}$ before ${\cal B}$ and ${\cal B}$ before ${\cal A}$. Experimental implementations of the quantum switch have been challenged by some on the grounds that the operations ${\cal A}$ and ${\cal B}$ were implemented more than once, thereby simulating indefinite causal order rather than actually implementing it. Motivated by this debate, we consider a situation in which the quantum operations are physically described by a light-matter interaction model. When one restricts the energy available for the implementations, an imperfect operation creating correlations between a "target" system and its environment is implemented instead, allowing one to distinguish processes using different numbers of operations. We consider such an energetically-constrained scenario and compare the quantum switch to one of its natural simulations, where each operation is implemented twice. Considering a commuting-vs-anticommuting unitary discrimination task, we find that within our model the quantum switch performs better, for some fixed amount of energy, than its simulation. In addition to the known computational or communication advantages of causal superpositions, our work raises new questions about their potential energetic advantages.
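For the ideal, energetically unconstrained switch, the commuting-vs-anticommuting discrimination task can be sketched in a few lines of numpy (a minimal illustration of the standard switch protocol, not the paper's light-matter interaction model; the function and variable names are ours):

```python
import numpy as np

# Pauli operators used as the two test unitaries.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def switch_plus_probability(A, B, psi):
    """Ideal quantum switch on target state psi with control in |+>.

    The joint state is (|0>(x)AB|psi> + |1>(x)BA|psi>)/sqrt(2); measuring
    the control in the X basis gives outcome '+' with amplitude
    (AB + BA)|psi>/2, so commuting (AB = BA) and anticommuting (AB = -BA)
    unitaries are distinguished deterministically."""
    plus_branch = (A @ B @ psi + B @ A @ psi) / 2
    return float(np.vdot(plus_branch, plus_branch).real)

psi = np.array([1, 0], dtype=complex)
p_commuting = switch_plus_probability(X, I2, psi)      # X and I commute
p_anticommuting = switch_plus_probability(X, Z, psi)   # X and Z anticommute
```

In the energy-constrained scenario studied in the paper, these probabilities are no longer exactly 1 and 0, which is what makes the switch-vs-simulation comparison nontrivial.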

We exploit the properties of chain-mapping transformations of bosonic environments to identify a finite collection of modes able to capture the characteristic features, or fingerprint, of the environment. Moreover, we show that the countable infinity of residual bath modes can be replaced by a universal Markovian closure, namely a small collection of damped modes undergoing Lindblad-type dynamics whose parametrization is independent of the spectral density under consideration. We show that the Markovian closure provides a quadratic speed-up with respect to standard chain-mapping techniques and makes the memory requirement independent of the simulation time, while preserving all the information on the fingerprint modes. We illustrate the application of the Markovian closure not only to the computation of linear spectra but also to the non-linear spectral response, a relevant experimentally accessible many-body coherence witness for which efficient numerically exact calculations in realistic environments are currently lacking.

Coherent quantum noise cancellation (CQNC) can be used in optomechanical sensors to surpass the standard quantum limit (SQL). In this paper, we investigate an optomechanical force sensor that uses the CQNC strategy by cascading the optomechanical system with an all-optical effective negative-mass oscillator. Specifically, we analyze matching conditions and losses, and compare the two possible arrangements in which either the optomechanical or the negative-mass system couples first to light. While both orderings yield sub-SQL performance, we find that placing the effective negative-mass oscillator before the optomechanical sensor is always advantageous for realistic parameters. The modular design of the cascaded scheme allows for better control of the sub-systems by avoiding undesirable coupling between system components, while maintaining performance similar to the integrated configuration proposed earlier. We conclude our work with a case study of a micro-optomechanical implementation.

Detecting entanglement of multipartite quantum states is an inherently probabilistic process due to a finite number of measured samples. The level of confidence of entanglement detection can be used to quantify the probability that the measured signal is coming from a separable state and provides a meaningful figure of merit for big data sets. Yet, for limited sample sizes, to avoid serious misinterpretations of the experimental results, one should not only consider the probability that a separable state gave rise to the measured signal, but should also include information about the probability that the signal came from an entangled state. We demonstrate this explicitly and propose a comprehensive method of entanglement detection when only a very limited amount of data is available. The method is based on a non-linear combination of correlation functions and is independent of system size. As an example, we derive the optimal number of measurement settings and clicks per setting revealing entanglement with only 20 copies of a state.

It is shown that above-threshold ionization peaks disappear when the kinetic energy associated with the nondipole, radiation-pressure-induced photoelectron momentum in the laser propagation direction becomes comparable to the photon energy, and that the peaks can be made to reappear if the length and direction of the photoelectron momentum are known and an emission-direction-dependent momentum shift is accounted for. The reported findings should be observable with intense mid-infrared laser pulses.

We show how to leverage quantum annealers (QAs) to better select candidates in greedy algorithms. Unlike conventional greedy algorithms that employ problem-specific heuristics for making locally optimal choices at each stage, we use QAs that sample from the ground state of problem-dependent Hamiltonians at cryogenic temperatures, and we use the retrieved samples to estimate the probability distribution of problem variables. More specifically, we treat each spin of the Ising model as a random variable and contract all problem variables whose corresponding uncertainties are negligible. Our empirical results on a D-Wave 2000Q quantum processor demonstrate that the proposed quantum-assisted greedy algorithm (QAGA) scheme can find notably better solutions than state-of-the-art techniques in the realm of quantum annealing.
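The contraction step described above can be sketched classically. Since annealer access is assumed rather than available here, the samples below are a synthetic stand-in for ground-state reads, and all names and thresholds are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for low-energy samples from a quantum annealer:
# noisy copies of a planted spin configuration (entries are +/-1).
planted = np.array([1, -1, 1, 1, -1, -1, 1, -1])
samples = np.where(rng.random((200, planted.size)) < 0.9, planted, -planted)

def contract_confident_spins(samples, threshold=0.7):
    """Greedy contraction: treat each spin as a random variable and fix
    those whose sample mean shows negligible uncertainty."""
    means = samples.mean(axis=0)
    fixed = {i: int(np.sign(m)) for i, m in enumerate(means)
             if abs(m) >= threshold}
    free = [i for i in range(samples.shape[1]) if i not in fixed]
    return fixed, free

fixed, free = contract_confident_spins(samples)
```

The remaining free variables would then be re-submitted to the annealer as a smaller subproblem, and the loop repeats until all spins are fixed.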

The Quantum Approximate Optimization Algorithm (QAOA) is a promising quantum algorithm that has the potential to demonstrate quantum supremacy. The performance of QAOA on noisy intermediate-scale quantum (NISQ) devices degrades due to decoherence. In this paper, we present a framework for running QAOA on non-Markovian quantum systems, which are represented by an augmented system model. In this model, a non-Markovian environment is modelled as an ancillary system driven by quantum white noises, and the corresponding principal system is the computational unit for the algorithm. With this model, we mathematically formulate QAOA as piecewise control of the augmented system. To reduce the effect of non-Markovian decoherence, the basic algorithm is modified to obtain an efficient depth via a proximal gradient descent algorithm. Finally, in an example of the Max-Cut problem, we find that non-Markovianity, characterized by an exploration rate, can help QAOA achieve good performance.
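As a point of reference, the noiseless QAOA loop that the augmented-system model extends can be sketched with a state-vector simulation of Max-Cut on a single edge (purely illustrative; this omits the non-Markovian environment entirely, and the grid-search optimizer stands in for any classical outer loop):

```python
import numpy as np

# Max-Cut on a single edge (two qubits); cut values of |00>,|01>,|10>,|11>.
cut = np.array([0.0, 1.0, 1.0, 0.0])
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def qaoa_expectation(gamma, beta):
    """Expected cut value of a depth-1 QAOA state."""
    state = np.full(4, 0.5, dtype=complex)          # uniform |+>|+> start
    state = np.exp(-1j * gamma * cut) * state       # cost layer e^{-i gamma C}
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X  # single-qubit e^{-i beta X}
    state = np.kron(rx, rx) @ state                 # mixer layer on both qubits
    return float(np.sum(cut * np.abs(state) ** 2))

# Coarse grid search over the two variational angles.
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 41)
           for b in np.linspace(0, np.pi / 2, 41))
```

For this one-edge instance, depth-1 QAOA reaches the maximum cut of 1 at suitable angles, so the grid search recovers an expectation of essentially 1.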

Entanglement is one of the most fundamental features of quantum systems. In this work, we formulate the entanglement spectrum and entropy for Floquet noninteracting fermionic lattice models and build their connections with Floquet topological phases. Topological winding and Chern numbers are introduced to characterize the entanglement spectrum and eigenmodes in one and two spatial dimensions. Correspondences between the spectrum and topology of Floquet entanglement Hamiltonians under periodic boundary conditions and topological edge states under open boundary conditions are further established for Floquet topological insulators in different symmetry classes and spatial dimensions. Our study provides a firm entry point for exploring richer entanglement patterns in Floquet topological matter.

Utilizing counterdiabatic (CD) driving - aimed at suppressing diabatic transitions - in digitized adiabatic evolution has garnered immense interest in quantum protocols and algorithms. However, improving the approximate CD terms with a nested-commutator ansatz is a challenging task. In this work, we propose a technique for finding optimal coefficients of the CD terms using a variational quantum circuit. Through classical optimization routines, the parameters of this circuit are optimized to provide the coefficients of the CD terms. We then exemplify their improved performance in Greenberger-Horne-Zeilinger state preparation on a nearest-neighbor Ising model. Finally, we also show the advantage over the usual quantum approximate optimization algorithm in terms of fidelity with bounded time.

We show that two seemingly unrelated problems - the trapping of an atom in an optical superlattice (OSL) and the libration of a planar rigid rotor in combined electric and optical fields - have isomorphic Hamiltonians. Formed by the interference of optical lattices whose spatial periods differ by a factor of two, OSL gives rise to a periodic potential that acts on atomic translation via the AC Stark effect. The latter system, also known as the generalized planar pendulum (GPP), is realized by subjecting a planar rigid rotor to combined orienting and aligning interactions due to the coupling of the rotor's permanent and induced electric dipole moments with the combined fields. The mapping makes it possible to establish correspondence between concepts developed for the two eigenproblems individually, such as localization on the one hand and orientation/alignment on the other. Moreover, since the GPP problem is conditionally quasi-exactly solvable (C-QES), so is atomic trapping in an OSL. We make use of both the correspondence and the quasi-exact solvability to treat ultracold atoms in an optical superlattice as a semifinite-gap system. The band structure of this system follows from the eigenenergies and their genuine and avoided crossings obtained previously for the GPP as analytic solutions of the Whittaker-Hill equation. These solutions characterize both the squeezing and the tunneling of atoms trapped in an optical superlattice and pave the way to unraveling their dynamics in analytic form.

We describe the design, construction, and operation of an apparatus utilizing a piezoelectric transducer for in-vacuum loading of nanoparticles into an optical trap for use in levitated optomechanics experiments. In contrast to commonly used nebulizer-based trap-loading methods, which generate aerosolized liquid droplets containing nanoparticles, our method produces dry aerosols of both spherical and high-aspect-ratio particles ranging in size over approximately two orders of magnitude. The device has been shown to generate accelerations of order $10^7$ $g$, which is sufficient to overcome stiction forces between glass nanoparticles and a glass substrate for particles as small as $170$ nm in diameter. Particles with sizes ranging from $170$ nm to $\sim 10$ $\mu$m have been successfully loaded into optical traps at pressures ranging from $1$ bar to $0.6$ mbar. We report the velocity distribution of the particles launched from the substrate, and our results indicate promise for direct loading into ultra-high vacuum with sufficient laser feedback cooling. This loading technique could be useful for the development of compact fieldable sensors based on optically levitated nanoparticles, as well as for matter-wave interference experiments with ultra-cold nano-objects that rely on multiple repeated free-fall measurements and thus require rapid trap re-loading in high-vacuum conditions.

Terahertz time-domain spectroscopy (THz-TDS) using electro-optic sampling and ultrashort pulsed probes is a well-established technique for directly measuring the electric field of THz radiation. Traditionally, a balanced detection scheme relies on measuring, with photodiodes, the optical phase shift caused by THz-induced birefringence, where the sensitivity is limited by the shot noise of the optical-sampling probe. The sensitivity of such an approach could be improved by applying quantum metrology, such as using NOON states for Heisenberg-limited phase estimation. We report on the first step in that direction, demonstrating that THz electric fields can be measured with single-photon detectors using a squeezed vacuum as the optical probe. Our approach achieves THz electro-optic sampling with phase-locked single-photon detectors at the shot-noise limit and thus paves the way toward quantum-enhanced THz sensing.

Training a quantum machine learning model generally requires a large labeled dataset, which incurs high labeling and computational costs. To reduce such costs, a selective training strategy called active learning (AL) chooses only a subset of the original dataset to learn from while maintaining the trained model's performance. Here, we design and implement two AL-empowered variational quantum classifiers to investigate the potential applications and effectiveness of AL in quantum machine learning. First, we build a programmable free-space photonic quantum processor, which enables the programmed implementation of various hybrid quantum-classical computing algorithms. Then, we code the designed variational quantum classifier with AL into the quantum processor and execute comparative tests for the classifiers with and without the AL strategy. The results validate the advantage of AL in quantum machine learning, as it saves up to $85\%$ of the labeling effort and $91.6\%$ of the computational effort compared to training without AL on a data classification task. Our results motivate AL's further application in large-scale quantum machine learning to drastically reduce training data and speed up training, underpinning the exploration of practical quantum advantages in quantum physics or real-world applications.
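Pool-based active learning with uncertainty sampling, the generic strategy behind AL, can be illustrated with a purely classical toy model (a 1-D threshold classifier standing in for the variational quantum classifier; all names and numbers here are ours, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D threshold model stands in for the trained classifier; labels are
# queried on demand from an oracle, as in pool-based active learning.
pool = rng.uniform(-1, 1, 200)
oracle = lambda x: x > 0.0

labeled = {0: oracle(pool[0]), 1: oracle(pool[1])}   # two seed labels

def fit_threshold(labeled):
    """Place the decision boundary midway between the closest labeled pair."""
    neg = [pool[i] for i, y in labeled.items() if not y]
    pos = [pool[i] for i, y in labeled.items() if y]
    if not neg or not pos:
        return 0.0
    return (max(neg) + min(pos)) / 2

for _ in range(10):                                  # only 10 label queries
    threshold = fit_threshold(labeled)
    # Uncertainty sampling: query the unlabeled point nearest the boundary.
    unlabeled = [i for i in range(pool.size) if i not in labeled]
    i_star = min(unlabeled, key=lambda i: abs(pool[i] - threshold))
    labeled[i_star] = oracle(pool[i_star])

threshold = fit_threshold(labeled)
accuracy = float(np.mean((pool > threshold) == oracle(pool)))
```

With only 12 of 200 labels queried, the boundary converges close to the true one, which is the label-saving effect AL exploits.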

Achievability in information theory refers to demonstrating a coding strategy that accomplishes a prescribed performance benchmark for the underlying task. In quantum information theory, the Hayashi-Nagaoka operator inequality is an essential technique for proving a wealth of one-shot achievability bounds, since it effectively plays the role of a union bound in various problems. In this work, we show that the pretty-good measurement can naturally play the role of a union bound as well. A judicious application of it considerably simplifies the derivation of one-shot achievability for classical-quantum (c-q) channel coding via an elegant three-line proof.

The proposed analysis enjoys the following favorable features: (i) The established one-shot bound admits a closed-form expression as in the celebrated Holevo-Helstrom Theorem. Namely, the average error probability of sending $M$ messages through a c-q channel is upper bounded by the error of distinguishing the joint state between channel input and output against $(M-1)$-many products of its marginals. (ii) Our bound directly yields asymptotic results in the large deviation, small deviation, and moderate deviation regimes in a unified manner. (iii) The coefficients incurred in applying the Hayashi-Nagaoka operator inequality are no longer needed. Hence, the derived one-shot bound sharpens existing results that rely on the Hayashi-Nagaoka operator inequality. In particular, we obtain the tightest achievable $\epsilon$-one-shot capacity for c-q channels to date, and it improves the third-order coding rate in the asymptotic scenario. (iv) Our result holds for infinite-dimensional Hilbert spaces. (v) The proposed method applies to deriving one-shot bounds for data compression with quantum side information, entanglement-assisted classical communication over quantum channels, and various quantum network information-processing protocols.

Mean centering is an important data preprocessing technique with a wide range of applications in data mining, machine learning, and multivariate statistical analysis. When the data set is large, this process is time-consuming. In this paper, we propose an efficient quantum mean-centering algorithm based on the block-encoding technique, which frees existing quantum algorithms from the assumption that the original data set has already been classically mean-centered. Specifically, we first introduce the centering matrix $C$ and show in detail how to construct its block-encoding, and we further obtain the block-encodings of $XC$, $CX$ and $CXC$, which remove the row means, column means and row-column means of the original data matrix $X$, respectively. Finally, combined with block-encoding techniques, our algorithm is successfully applied to principal component analysis, linear discriminant analysis and other algorithms. It is worth emphasizing that our algorithm can also be used in many algorithms involving matrix-algebra problems, such as multidimensional scaling, kernel machine learning, and extreme learning machine.
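The action of the centering matrix $C = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ that the algorithm block-encodes can be verified classically (a small numpy check, not the quantum algorithm itself):

```python
import numpy as np

def centering(n):
    """Centering matrix C = I - (1/n) * ones(n, n)."""
    return np.eye(n) - np.ones((n, n)) / n

X = np.arange(12, dtype=float).reshape(3, 4)
Cr = centering(3)               # left factor, acts on the 3 rows
Cc = centering(4)               # right factor, acts on the 4 columns

col_centered = Cr @ X           # CX: every column now has zero mean
row_centered = X @ Cc           # XC: every row now has zero mean
both_centered = Cr @ X @ Cc     # CXC: rows and columns both have zero mean
```

Since $C$ is simply a rank-one perturbation of the identity, its block-encoding is cheap, which is what makes composing it with a block-encoding of $X$ attractive.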

The entropic dynamics (ED) approach to quantum mechanics is ideally suited to address the problem of measurement because it is based on entropic and Bayesian methods of inference that have been designed to process information and data. The approach succeeds because ED achieves a clear-cut separation between ontic and epistemic elements: positions are ontic while probabilities and wave functions are epistemic. Thus, ED is a viable realist psi-epistemic model. Such models are widely assumed to be ruled out by various no-go theorems. We show that ED evades those theorems by adopting a purely epistemic dynamics and denying the existence of an ontic dynamics at the subquantum level.