Adaptive quantum circuits employ unitary gates assisted by mid-circuit measurement, classical computation on the measurement outcome, and the conditional application of future unitary gates based on the result of the classical computation. In this paper, we experimentally demonstrate that even a noisy adaptive quantum circuit of constant depth can achieve a task that is impossible for any purely unitary quantum circuit of identical depth: the preparation of long-range entangled topological states with high fidelity. We prepare a particular toric code ground state with a fidelity of at least $76.9\pm 1.3\%$ using a constant-depth ($d=4$) adaptive circuit, and rigorously show that no unitary circuit of the same depth and connectivity could prepare this state with fidelity greater than $50\%$.

We extend the Wigner-Weyl-Moyal phase-space formulation of quantum mechanics to general curved configuration spaces. The underlying phase space is based on the chosen coordinates of the manifold and their canonically conjugate momenta. The resulting Wigner function satisfies the axioms of a quasiprobability distribution, and every Weyl-ordered operator is associated with a corresponding phase-space function, even in the absence of continuous symmetries. The corresponding quantum Liouville equation reduces to the classical curved-space Liouville equation in the semiclassical limit. We demonstrate the formalism for a point particle moving on two-dimensional manifolds, such as a paraboloid or the surface of a sphere. The latter clarifies the treatment of compact coordinate spaces as well as the relation of the presented phase-space representation to the symmetry groups of the configuration space.
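For orientation, the flat-space Wigner transform that this construction generalizes is the standard textbook expression (written here in our notation for one Cartesian degree of freedom; the curved-space version replaces the pair $(q,p)$ by the chosen manifold coordinates and their conjugate momenta):

```latex
W(q,p) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} \mathrm{d}y\;
\left\langle q-\tfrac{y}{2}\right|\hat{\rho}\left|q+\tfrac{y}{2}\right\rangle\, e^{\,i p y/\hbar}
```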

No consensus has so far been reached regarding the universal validity of any particular resolution of the measurement problem. The problem manifests strongly in various forms of Wigner's friend experiments, where different observers experience different \emph{realities} when measuring the same quantum system. This study argues that a biological cell, a dissipative structure, is the smallest agent capable of processing quantum information through its holographic, triangulated sphere of perception, a mechanism that natural evolution has extended to endo- and exosemiosis in multicellular organisms and further to the language of \emph{Homo sapiens}. Any external stimulus must be measured and classified by the cell in the context of classical information to provide it with an evolutionary gain. Thus, life explains the measurement problem of quantum theory within the framework of the holographic principle, emergent gravity, and emergent dimensionality.

Entanglement lies at the heart of quantum mechanics and has been identified as an essential resource for diverse applications in quantum information. If entanglement could be verified without any trust in the devices of observers, i.e., in a device-independent (DI) way, then unconditional security could be guaranteed for various quantum information tasks. In this work, we propose an experimentally friendly DI protocol to certify the presence of entanglement, based on Einstein-Podolsky-Rosen (EPR) steering. We first establish the DI verification framework, relying on the measurement-device-independent technique and self-testing, and show that it is able to verify all EPR-steerable states. With three measurement settings per party, it is found to be robust against noise from inefficient measurements and imperfect self-testing. Finally, a four-photon experiment is implemented to device-independently verify EPR steering even for Bell-local states. Our work paves the way for realistic implementations of secure quantum information tasks.

We present the theory of out-of-plane (or vertical) electron thermal-field emission from 2D semimetals. We show that the current-voltage-temperature characteristic is well captured by a universal scaling relation applicable to broad classes of 2D semimetals, including monolayer and few-layer graphene, nodal-point semimetals, Dirac semimetals at the verge of a topological phase transition, and nodal-line semimetals. An important consequence of this universal emission behavior is revealed: in contrast to the common expectation that band topology should manifest itself in physical observables, band topologies in two spatial dimensions are indistinguishable from one another and bear no special signature in the electron emission characteristics. Our findings represent the quantum extension of the universal semiclassical thermionic emission scaling law in 2D materials, and provide the theoretical foundation for understanding electron emission from cathodes and charge-interface transport in the design of 2D-material-based vacuum nanoelectronics.

To prove the security of quantum key distribution (QKD) protocols, several assumptions have to be imposed on users' devices. From an experimental point of view, it is preferable that such theoretical requirements are feasible and few in number. In this paper, we provide a security proof for a QKD protocol in which any light source may be used, as long as it emits two independent and identically distributed (i.i.d.) states. Our QKD protocol is composed of two parts: the first characterizes the photon-number statistics of the emitted signals up to the three-photon component, based on the method of [Opt. Express 27, 5297 (2019)]; the second runs our differential-phase-shift (DPS) protocol [npj Quantum Inf. 5, 87 (2019)]. Remarkably, as long as the light source emits two i.i.d. states, it can be securely employed in the QKD protocol even with no prior knowledge of the source. As this result substantially simplifies the requirements on light sources, it constitutes a significant contribution toward realizing truly secure quantum communication.

In quantum computing, the computation is achieved by linear operators in or between Hilbert spaces. In this work, we explore a new computation scheme in which the linear operators of quantum computing are replaced by (higher) functors between two (higher) categories. If the passage from Turing computing to quantum computing is the first quantization of computation, then this new scheme can be viewed as the second quantization of computation. The fundamental problem in realizing this idea is how to realize a (higher) functor physically. We provide a theoretical proposal for realizing (higher) functors physically, based on the physics of topological orders.

The greatest challenge in quantum computing is achieving scalability. Classical computing previously faced a scalability issue, solved with silicon chips hosting billions of fin field-effect transistors (FinFETs). These FinFET devices are small enough for quantum applications: at low temperatures, an electron or hole trapped under the gate serves as a spin qubit. Such an approach potentially allows the quantum hardware and its classical control electronics to be integrated on the same chip. However, this requires qubit operation at temperatures above 1 K, where the available cooling power overcomes the heat dissipation. Here, we show that silicon FinFETs can host spin qubits operating above 4 K. We achieve fast electrical control of hole spins with driving frequencies up to 150 MHz, single-qubit gate fidelities at the fault-tolerance threshold, and a Rabi oscillation quality factor greater than 87. Our devices feature both industry compatibility and quality, and are fabricated in a flexible and agile way that should accelerate further development.

Forty years ago, Richard Feynman proposed harnessing quantum physics to build a more powerful kind of computer. Realizing Feynman's vision is one of the grand challenges facing 21st century science and technology. In this article, we'll recall Feynman's contribution that launched the quest for a quantum computer, and assess where the field stands 40 years later.

Among various quantum key distribution (QKD) protocols, the round-robin differential-phase-shift (RRDPS) protocol has the unique feature that its security is guaranteed without monitoring any statistics. Moreover, this protocol has the remarkable property of being robust against source imperfections, assuming that the emitted pulses are independent. Unfortunately, experiments have confirmed violations of this independence due to pulse correlations, and the lack of a security proof accounting for this effect has therefore been an obstacle to guaranteeing security. In this paper, we prove that the RRDPS protocol is secure against any source imperfections by establishing a proof that takes pulse correlations into account. Our proof is simple in the sense that we make only three experimentally simple assumptions on the source. Our numerical simulation based on the proof shows that long-range pulse correlations do not significantly impact the key rate, revealing another striking feature of the RRDPS protocol. Our security proof is thus effective and applicable to a wide range of practical sources, and paves the way to realizing truly secure QKD in high-speed systems.

Quantum algorithms for simulating quantum systems provide a clear and provable advantage over classical algorithms in fault-tolerant settings. There is also interest in quantum algorithms, and the experiments that implement them, for NISQ settings, where various sources of noise and error need to be accounted for when executing any experiment. Recently, NISQ devices have been verified as versatile testbeds for open quantum systems, and it has been demonstrated that they can simulate simple quantum channels. Our goal is to apply NISQ devices to the more complicated problem of simulating convex mixtures of quantum channels. We consider two specific cases: convex mixtures of Markovian channels that result in a non-Markovian channel (M+M=nM), and convex mixtures of non-Markovian channels that result in a Markovian channel (nM+nM=M). For the first case, we consider convex mixtures of Markovian Pauli channels; for the second, convex mixtures of non-Markovian depolarising channels. We show that efficient circuits, which account for the topology of currently available devices and current levels of decoherence, can be constructed by avoiding traditional approaches such as Stinespring dilation. We also present a strategy for implementing process tomography that yields a CPTP channel.
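As a minimal classical illustration of the objects being mixed (a pure-Python sketch with illustrative names, not the paper's circuits): a single-qubit Pauli channel is linear in its probability vector, so a convex mixture of two Pauli channels is again a Pauli channel whose probabilities are the convex combination of the originals.

```python
# Toy sketch: a single-qubit Pauli channel and a convex mixture of two
# such channels. All variable names and numbers are illustrative.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
PAULIS = [I, X, Y, Z]

def pauli_channel(probs, rho):
    """rho -> sum_k p_k sigma_k rho sigma_k (Paulis are Hermitian)."""
    out = [[0j, 0j], [0j, 0j]]
    for p, s in zip(probs, PAULIS):
        srs = mat_mul(s, mat_mul(rho, s))
        for i in range(2):
            for j in range(2):
                out[i][j] += p * srs[i][j]
    return out

rho = [[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]]  # a valid density matrix
p1 = [0.9, 0.1, 0.0, 0.0]                     # Pauli channel 1
p2 = [0.8, 0.0, 0.0, 0.2]                     # Pauli channel 2
w = 0.5                                       # mixing weight

out1 = pauli_channel(p1, rho)
out2 = pauli_channel(p2, rho)
# Mixture of the two channels' outputs ...
mix_out = [[w * out1[i][j] + (1 - w) * out2[i][j] for j in range(2)]
           for i in range(2)]
# ... equals the single Pauli channel with the averaged probability vector.
p_avg = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
avg_out = pauli_channel(p_avg, rho)
```

By linearity, `mix_out` and `avg_out` agree elementwise, and both outputs are trace-preserving since each probability vector sums to one.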

Computing the ground-state properties of quantum many-body systems is a promising application of near-term quantum hardware with a potential impact in many fields. The conventional algorithm, quantum phase estimation, uses deep circuits and requires fault-tolerant technologies, while many recently developed quantum simulation algorithms work in an inexact and variational manner to exploit shallow circuits. In this work, we combine quantum Monte Carlo with quantum computing and propose an algorithm for simulating the imaginary-time evolution and solving the ground-state problem. By sampling the real-time evolution operator with a random evolution time drawn from a modified Cauchy-Lorentz distribution, we can compute the expected value of an observable in imaginary-time evolution. Our algorithm approaches the exact solution with a circuit depth that increases only polylogarithmically with the desired accuracy. Compared with quantum phase estimation, the Trotter step number, i.e. the circuit depth, can be thousands of times smaller to achieve the same accuracy in the ground-state energy. We verify the resilience to Trotterisation errors caused by the finite circuit depth in numerical simulations of various models. The results show that Monte Carlo quantum simulation is promising even without a fully fault-tolerant quantum computer.
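The identity underpinning such sampling is that the characteristic function of a Cauchy-Lorentz distribution is a decaying exponential: for $E \ge 0$, averaging $e^{-iEt}$ over $t \sim \mathrm{Cauchy}(0,\tau)$ gives $e^{-\tau E}$, so real-time phases reproduce an imaginary-time weight on average. The following stdlib-only sketch checks the plain identity numerically (the paper uses a *modified* distribution; this is only the basic mechanism, with illustrative names):

```python
import math
import random

random.seed(0)

def imag_time_weight(E, tau, n_samples=200_000):
    """Estimate e^{-tau*E} (for E >= 0) by averaging cos(E*t) over
    t drawn from a Cauchy-Lorentz distribution with scale tau."""
    acc = 0.0
    for _ in range(n_samples):
        # Inverse-CDF sampling of a Cauchy variate with scale tau.
        t = tau * math.tan(math.pi * (random.random() - 0.5))
        acc += math.cos(E * t)  # Re e^{-iEt}; the imaginary part averages to 0
    return acc / n_samples

est = imag_time_weight(E=1.0, tau=0.5)
exact = math.exp(-0.5)
```

Because $|e^{-iEt}| = 1$, the Monte Carlo variance stays bounded even though the Cauchy distribution itself has no mean, which is what makes this sampling strategy well behaved.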

We show how to absorb fermionic quantum simulation's expensive fermion-to-qubit mapping overhead into the overhead already incurred by surface-code-based fault-tolerant quantum computing. The key idea is to process information in surface-code twist defects, which behave like logical Majorana fermions. Our approach encodes Dirac fermions, a key data type for simulation applications, directly into logical Majorana fermions rather than atop a logical qubit layer in the architecture. Using quantum simulation of the $N$-fermion 2D Fermi-Hubbard model as an exemplar, we demonstrate two immediate algorithmic improvements. First, by preserving the model's locality at the logical level, we reduce the asymptotic Trotter-Suzuki quantum circuit depth from $\mathcal{O}(\sqrt{N})$ in a typical Jordan-Wigner encoding to $\mathcal{O}(1)$ in our encoding. Second, by exploiting optimizations manifest for logical fermions but less obvious for logical qubits, we reduce the $T$-count of the block-encoding \textsc{select} oracle by 20\% over standard implementations, even when realized by logical qubits and not logical fermions.

Non-equilibrium physics, including many-body localization (MBL), has attracted increasing attention, but theoretical approaches for reliably studying non-equilibrium properties remain quite limited. In this Letter, we propose a systematic approach to probe MBL phases via the excited-state variational quantum eigensolver (VQE) and demonstrate convincing results of MBL on quantum hardware, which we believe paves a promising way for future simulations of non-equilibrium systems beyond the reach of classical computation in the noisy intermediate-scale quantum (NISQ) era. Moreover, the MBL probing protocol based on the excited-state VQE is NISQ-friendly: it can successfully differentiate the MBL phase from thermal phases with relatively shallow quantum circuits, and it is also robust against quantum noise.

The Aharonov-Bohm (AB) effect is now largely considered to be a manifestation of geometric phase. However, by decomposing the vector-potential gradient tensor into divergence, curl, and shear components, we isolate a field/charged-particle interaction that is not dependent on local electric and magnetic fields. We show that a local shear field provides a velocity-dependent, dynamic-phase interaction in the AB effect whose predictions are consistent with all known classes of AB experiments, including interference fringe shifts, the absence of time delays along the direction of propagation, and the possibility of lateral forces.
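The decomposition referred to is the standard splitting of the gradient tensor $\partial_i A_j$ into its isotropic (divergence), antisymmetric (curl), and symmetric-traceless (shear) parts, written here in three dimensions in our own notation, not necessarily that of the authors:

```latex
\partial_i A_j
= \underbrace{\tfrac{1}{3}\,\delta_{ij}\,(\nabla\cdot\mathbf{A})}_{\text{divergence}}
+ \underbrace{\tfrac{1}{2}\bigl(\partial_i A_j - \partial_j A_i\bigr)}_{\text{curl}}
+ \underbrace{\tfrac{1}{2}\bigl(\partial_i A_j + \partial_j A_i\bigr)
  - \tfrac{1}{3}\,\delta_{ij}\,(\nabla\cdot\mathbf{A})}_{\text{shear}}
```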

Variational quantum algorithms (VQAs) are widely applied in the noisy intermediate-scale quantum era and are expected to demonstrate quantum advantage. However, training VQAs faces difficulties, one of which is the so-called barren plateau (BP) phenomenon, where gradients of cost functions vanish exponentially with the number of qubits. In this paper, inspired by transfer learning, where knowledge of previously solved tasks is reused in a different but related task to improve training efficiency, we report a parameter initialization method to mitigate BP. In this method, a small-sized task is first solved with a VQA; the ansatz and its optimal parameters are then transferred to tasks of larger size. Numerical simulations show that this method can mitigate BP and improve training efficiency. A brief discussion of why the method works well is also provided. This work provides a reference for mitigating BP, so that VQAs may be applied to more practical problems.

Many problems that can be solved in quadratic time have bit-parallel speed-ups with factor $w$, where $w$ is the computer word size. For example, the edit distance of two strings of length $n$ can be computed in $O(n^2/w)$ time. In a reasonable classical model of computation, one can assume $w=\Theta(\log n)$. There are conditional lower bounds for such problems stating that speed-ups with factor $n^\epsilon$ for any $\epsilon>0$ would lead to breakthroughs in complexity theory. However, these conditional lower bounds do not cover quantum models of computing. Indeed, Boroujeni et al. (J. ACM, 2021) showed that edit distance can be approximated within a factor $3$ in sub-quadratic time $O(n^{1.81})$ using quantum computing. They also showed that, in their chosen model of quantum computing, the approximation factor cannot be improved in sub-quadratic time.

To break through the aforementioned classical conditional lower bounds and this latest quantum lower bound, we enrich the model of computation with a quantum random access memory (QRAM), obtaining what we call the word QRAM model. Under this model, we show how to convert the bit-parallelism of quadratic-time solvable problems into quantum algorithms that attain speed-ups with factor $n$. The technique we use is simple and general enough to apply to many bit-parallel algorithms that use Boolean logic and bit-shifts. To apply it to edit distance, we first show that the famous $O(n^2/w)$ time bit-parallel algorithm of Myers (J. ACM, 1999) can be adjusted to work without arithmetic $+$ operations. As a direct consequence of applying our technique to this variant, we obtain a linear-time edit distance algorithm under the word QRAM model for constant alphabets. We give further results on a restricted variant of the word QRAM model to offer more insight into the limits of the model.
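For reference, Myers' $O(n^2/w)$ bit-parallel algorithm fits in a few lines; the sketch below is the standard published classical algorithm (in Hyyrö's formulation, with the arithmetic `+` carry trick still present, which the variant described above removes), cross-checked against naive dynamic programming. Python's arbitrary-precision integers play the role of the $w$-bit machine words.

```python
def myers_edit_distance(pattern, text):
    """Myers' bit-parallel Levenshtein distance (Hyyro's formulation)."""
    m = len(pattern)
    if m == 0:
        return len(text)
    peq = {}                        # per-character pattern bitmasks
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)
    mask = (1 << m) - 1
    top = 1 << (m - 1)
    pv, mv, score = mask, 0, m      # +1/-1 vertical deltas, current distance
    for c in text:
        xv = peq.get(c, 0) | mv
        d0 = (((xv & pv) + pv) ^ pv) | xv   # the arithmetic '+' carry trick
        hn = pv & d0
        hp = mv | ~(pv | d0)
        if hp & top:                # bottom cell of the column went up ...
            score += 1
        elif hn & top:              # ... or down
            score -= 1
        x = (hp << 1) | 1           # shift in 1: boundary cell (0, j) = j
        pv = ((hn << 1) | ~(d0 | x)) & mask
        mv = (d0 & x) & mask
    return score

def dp_edit_distance(a, b):
    """Textbook O(n^2) dynamic program, used as a cross-check."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
```

For patterns longer than one machine word the classical algorithm tiles the bit-vectors across words; the quantum construction described above instead operates on $n$-bit registers at once.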

Grassmann phase space theory (GPST) is applied to the BEC/BCS crossover in cold fermionic atomic gases and used to determine the evolution (over either time or temperature) of the quantum correlation functions (QCF) that specify: (a) the positions of the spin-up and spin-down fermionic atoms in a single Cooper pair, and (b) the positions of the two spin-up and two spin-down fermionic atoms in two Cooper pairs. The first of these QCF is relevant to describing the change in size of a Cooper pair as the fermion-fermion coupling constant is changed via Feshbach resonance methods through the crossover, from a small Cooper pair on the BEC side to a large Cooper pair on the BCS side. The second is important for describing the correlations between the positions of the fermionic atoms in two Cooper pairs, which are expected to be small on the BEC and BCS sides of the crossover but significant in the strongly interacting unitary regime, where the size of a Cooper pair is comparable to the separation between Cooper pairs. In GPST the QCF are ultimately given by the stochastic average of products of Grassmann stochastic momentum fields. GPST shows that this stochastic average at a later time (or lower temperature) is related linearly to the corresponding average at an earlier time (or higher temperature), and that the matrix elements involved in the linear relations are all c-numbers. Expressions for these matrix elements corresponding to a small time or temperature increment have been obtained analytically, providing the formulae needed for numerical studies of the evolution that are planned for a future publication. Various initial conditions are considered, including those for a non-interacting fermionic gas at zero temperature and a high-temperature gas.

We study the performance (rate and fidelity) of distributing multipartite entangled states in a quantum network through the use of a central node. Specifically, we consider the scenario where the multipartite entangled state is first prepared locally at a central node, and then transmitted to the end nodes of the network through quantum teleportation. As our first result, we present leading-order analytical expressions and lower bounds for both the rate and fidelity at which a specific class of multipartite entangled states, namely Greenberger-Horne-Zeilinger (GHZ) states, are distributed. Our analytical expressions for the fidelity accurately account for time-dependent depolarizing noise encountered by individual quantum bits while stored in quantum memory, as verified using Monte Carlo simulations. As our second result, we compare this performance to the case where the central node is an entanglement switch and the GHZ state is created by the end nodes in a distributed fashion. Apart from these two results, we outline how the teleportation-based scheme could be physically implemented using trapped ions or nitrogen-vacancy centers in diamond.
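The depolarizing memory noise model can be checked directly with a small density-matrix computation. The pure-Python toy below (illustrative names only; it makes no claim about the paper's leading-order formulas) subjects a three-qubit GHZ state to independent single-qubit depolarizing noise and evaluates the fidelity $\langle \mathrm{GHZ}|\rho|\mathrm{GHZ}\rangle$:

```python
# Toy check of the noise model: n = 3 GHZ state under i.i.d. single-qubit
# depolarizing noise with probability p per qubit. Names are illustrative.

def kron(a, b):
    ra, ca, rb, cb = len(a), len(a[0]), len(b), len(b[0])
    return [[a[i][j] * b[k][l] for j in range(ca) for l in range(cb)]
            for i in range(ra) for k in range(rb)]

def mat_mul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def embed(op, qubit, n=3):
    """Lift a single-qubit operator to the n-qubit space."""
    full = [[1]]
    for q in range(n):
        full = kron(full, op if q == qubit else I2)
    return full

def depolarize(rho, qubit, p):
    """rho -> (1-p) rho + (p/3) sum_{s in X,Y,Z} s rho s on one qubit."""
    out = [[(1 - p) * v for v in row] for row in rho]
    for s in (X, Y, Z):
        full = embed(s, qubit)
        srs = mat_mul(full, mat_mul(rho, full))  # Paulis are Hermitian
        for i in range(8):
            for j in range(8):
                out[i][j] += (p / 3) * srs[i][j]
    return out

# |GHZ> = (|000> + |111>)/sqrt(2)
ghz = [0.0] * 8
ghz[0] = ghz[7] = 2 ** -0.5
rho = [[ghz[i] * ghz[j] for j in range(8)] for i in range(8)]

p = 0.1
for q in range(3):
    rho = depolarize(rho, q, p)

fidelity = sum(ghz[i] * rho[i][j] * ghz[j]
               for i in range(8) for j in range(8)).real
```

Note that the fidelity decays more slowly than $(1-p)^3$: error combinations such as $Z \otimes Z \otimes I$ stabilize the GHZ state, so some multi-qubit errors leave the state invariant.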

Quantum memories promise to enable global quantum repeater networks. For field applications, alkali metal vapors constitute an exceptional storage platform, as neither cryogenics nor strong magnetic fields are required. We demonstrate a technologically simple quantum memory, in principle suited for satellite use, based on electromagnetically induced transparency on the cesium D1 line, and focus on the trade-off between end-to-end efficiency and signal-to-noise ratio, both key parameters in applications. For coherent pulses containing one photon on average, we achieve storage and retrieval with end-to-end efficiencies of $\eta_{e2e} = 13(2)\%$, corresponding to internal memory efficiencies of $\eta_{mem} = 33(1)\%$. Simultaneously, we achieve a noise level corresponding to $\mu_1 = 0.07(2)$ signal photons. This noise is dominated by spontaneous Raman scattering, with contributions from fluorescence. Four-wave-mixing noise is negligible, allowing further minimization of the total noise level.