We study a variant of quantum hypothesis testing wherein an additional 'inconclusive' measurement outcome is added, allowing one to abstain from attempting to discriminate the hypotheses. The error probabilities are then conditioned on a successful attempt, with inconclusive trials disregarded. We completely characterise this task in both the single-shot and asymptotic regimes, providing exact formulas for the optimal error probabilities. In particular, we prove that the asymptotic error exponent of discriminating any two quantum states $\rho$ and $\sigma$ is given by the Hilbert projective metric $D_{\max}(\rho\|\sigma) + D_{\max}(\sigma \| \rho)$ in asymmetric hypothesis testing, and by the Thompson metric $\max \{ D_{\max}(\rho\|\sigma), D_{\max}(\sigma \| \rho) \}$ in symmetric hypothesis testing. This endows these two quantities with fundamental operational interpretations in quantum state discrimination. Our findings extend to composite hypothesis testing, where we show that the asymmetric error exponent with respect to any convex set of density matrices is given by a regularisation of the Hilbert projective metric. We apply our results also to quantum channels, showing that no advantage is gained by employing adaptive or even more general discrimination schemes over parallel ones, in both the asymmetric and symmetric settings. Our state discrimination results make use of no properties specific to quantum mechanics and are also valid in general probabilistic theories.
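For reference, the two exponents above can be written out explicitly in terms of the standard max-relative entropy (a recap of well-known definitions, not a result of the abstract):

```latex
% Max-relative entropy underlying both exponents:
D_{\max}(\rho\|\sigma) \;=\; \log\min\{\lambda \ge 0 : \rho \le \lambda\,\sigma\}.
% Asymmetric (Hilbert projective metric) and symmetric (Thompson metric) exponents:
D_{\Omega}(\rho,\sigma) \;=\; D_{\max}(\rho\|\sigma) + D_{\max}(\sigma\|\rho),
\qquad
d_{T}(\rho,\sigma) \;=\; \max\bigl\{D_{\max}(\rho\|\sigma),\, D_{\max}(\sigma\|\rho)\bigr\}.
```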

Adaptive quantum variational algorithms are particularly promising for simulating strongly correlated systems on near-term quantum hardware, but they are not yet viable due, in large part, to the severe coherence time limitations on current devices. In this work, we introduce an algorithm called TETRIS-ADAPT-VQE, which iteratively builds up variational ans\"atze a few operators at a time in a way dictated by the problem being simulated. This algorithm is a modified version of the ADAPT-VQE algorithm in which the one-operator-at-a-time rule is lifted to allow for the addition of multiple operators with disjoint supports in each iteration. TETRIS-ADAPT-VQE results in denser but significantly shallower circuits, without increasing the number of CNOT gates or variational parameters. Its advantage over the original algorithm in terms of circuit depths increases with the system size. Moreover, the expensive step of measuring the energy gradient with respect to each candidate unitary at each iteration is performed only a fraction of the time compared to ADAPT-VQE. These improvements bring us closer to the goal of demonstrating a practical quantum advantage on quantum hardware.
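The batching rule that distinguishes TETRIS-ADAPT-VQE from ADAPT-VQE can be sketched in a few lines (an illustrative sketch with a made-up operator pool, not the authors' implementation):

```python
# Illustrative TETRIS-style batching rule: rank pool operators by |gradient|,
# then greedily accept those whose qubit supports are disjoint from operators
# already accepted this iteration, so they fit in the same circuit layer.

def tetris_batch(pool, gradients):
    """pool: list of (name, support_set); gradients: matching list of floats.
    Returns the operator names accepted in one iteration, largest |gradient| first."""
    order = sorted(range(len(pool)), key=lambda i: -abs(gradients[i]))
    used_qubits, accepted = set(), []
    for i in order:
        name, support = pool[i]
        if used_qubits.isdisjoint(support):   # disjoint support -> same layer
            accepted.append(name)
            used_qubits |= support
    return accepted

# Hypothetical pool: operators labelled by the qubits they act on.
pool = [("A", {0, 1}), ("B", {1, 2}), ("C", {2, 3}), ("D", {3, 0})]
grads = [0.9, 0.8, 0.7, 0.6]
print(tetris_batch(pool, grads))  # ['A', 'C']: B and D overlap A's qubits
```

With the one-operator-at-a-time rule only "A" would be added; the batching rule also admits "C" in the same iteration at no extra depth.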

Quantum subspace diagonalization (QSD) methods are quantum-classical hybrid methods, commonly used to find ground and excited state energies by projecting the Hamiltonian to a smaller subspace. In applying these, the choice of subspace basis is critical from the perspectives of basis completeness and efficiency of implementation on quantum computers. In this work, we present Eigenvector Continuation (EC) as a QSD method, where low-energy states of the Hamiltonian at different points in parameter space are chosen as the subspace basis. This unique choice enables rapid evaluation of low-energy spectra, including ground and nearby excited states, with minimal hardware effort. As a particular advantage, EC is able to capture the spectrum across ground state crossovers corresponding to different symmetry sectors of the problem. We demonstrate this method for interacting spin models and molecules.
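The EC recipe can be sketched as follows (a toy two-level Hamiltonian is assumed for illustration; the model, training couplings, and `g_star` are not from the paper):

```python
import numpy as np
from scipy.linalg import eigh

# Eigenvector Continuation sketch on a toy Hamiltonian H(g) = -X + g Z:
# ground states at a few training couplings form the subspace basis, and the
# target Hamiltonian is diagonalized in that (generally non-orthogonal) basis.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = lambda g: -X + g * Z

# Training: exact ground states at two parameter points.
basis = [np.linalg.eigh(H(g))[1][:, 0] for g in (-1.0, 1.0)]
B = np.array(basis).T                     # columns are the subspace vectors

# Target: project H(g*) and solve the generalized eigenproblem H c = E S c.
g_star = 0.5
Hs = B.T @ H(g_star) @ B                  # projected Hamiltonian
S = B.T @ B                               # overlap matrix of the snapshots
evals = eigh(Hs, S, eigvals_only=True)    # EC estimate of the low-energy spectrum

exact = np.linalg.eigvalsh(H(g_star))[0]  # exact ground energy, for comparison
print(evals[0], exact)
```

In this toy case the two snapshots happen to span the full space, so EC reproduces the exact ground energy; in realistic problems the subspace is far smaller than the Hilbert space, which is the source of the efficiency.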

Topological Data Analysis (TDA) is a well-established field developed to give insight into the geometric structure of real-world data. However, many methods in TDA are computationally intensive. Computing Betti numbers, a central task in TDA, has been shown to admit a speed-up when the algorithm is translated into a quantum circuit. The quantum circuit to calculate a particular Betti number requires a significant number of gates and, except for very small data sets, cannot currently be implemented on a NISQ-era processor. Given this NISQ-era restriction, we propose a hybrid method that calculates the Euclidean distances of the encoded data and computes the desired Betti number. This method is applied to a toy data set with different encoding techniques. The empirical results show that the noise within the data is intensified by each encoding method, as there is a clear change in the geometric structure of the original data, exhibiting information loss.
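The classical half of such a hybrid pipeline can be sketched as follows (illustrative only: pairwise Euclidean distances feed a Betti_0 computation, i.e. counting connected components of the Vietoris-Rips graph at a chosen scale, via union-find; the data set and scale are made up):

```python
import numpy as np

# Betti_0 of the Vietoris-Rips graph at scale eps: connect points closer than
# eps in Euclidean distance, then count connected components with union-find.

def betti0(points, eps):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(n)})

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(betti0(pts, eps=0.5))  # 2: two well-separated clusters
```

Scanning `eps` reproduces the persistence of the zeroth homology; higher Betti numbers require tracking higher-dimensional simplices and are where the quantum speed-up is sought.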

Macroscopic resonant tunneling (MRT) in flux qubits is an important experimental tool for extracting information about the noise produced by a qubit's surroundings. Here we present a detailed derivation of the MRT signal in the RF-SQUID flux qubit, allowing for effects of flux and charge fluctuations on the interwell and intrawell transitions in the system. Taking into consideration transitions between the ground state in the initial well and excited states in the target well enables us to characterize both the flux and charge noise sources affecting the operation of the flux qubit. The MRT peak is formed by the dominant noise source affecting the specific transition: flux noise determines the lineshape of the ground-to-ground tunneling peak, whereas charge noise reveals itself as an additional broadening of the ground-to-excited peak.

The ultraviolet and infrared finiteness of a parity-even massless planar quantum electrodynamics mimics the scale invariance in graphene.

There has been much recent interest in near-term applications of quantum computers. Variational quantum algorithms (VQA), wherein an optimization algorithm implemented on a classical computer evaluates a parametrized quantum circuit as an objective function, are a leading framework in this space.

In this paper, we analyze the iteration complexity of VQA, that is, the number of steps VQA requires until the iterates satisfy a surrogate measure of optimality. We argue that although VQA procedures incorporate algorithms that can, in the idealized case, be modeled as classic procedures in the optimization literature, the particular nature of noise in near-term devices invalidates off-the-shelf analyses of these algorithms. Specifically, the form of the noise makes the evaluations of the objective function via circuits biased, necessitating a convergence analysis of variants of these classical optimization procedures in which the evaluations exhibit systematic bias. We apply our reasoning to the most commonly used procedures, including SPSA and the parameter shift rule, which can be seen as zeroth-order, or derivative-free, optimization algorithms with biased function evaluations. We show that the asymptotic rate of convergence is unaffected by the bias, but the level of bias contributes unfavorably both to the constant therein and to the asymptotic distance to stationarity.
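A minimal SPSA sketch with a systematic offset added to the function evaluations (illustrative, not the paper's setup; the objective, gain sequences, and bias model are made up):

```python
import numpy as np

# SPSA on f(theta) = |theta|^2, with a constant systematic bias added to every
# evaluation to mimic biased circuit readouts.

rng = np.random.default_rng(0)

def f(theta, bias=0.05):
    return float(np.sum(theta ** 2)) + bias    # true objective plus offset

theta = np.array([1.0, -1.0])
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602               # standard SPSA gain sequences
    c_k = 0.1 / (k + 1) ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    g = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2 * c_k) * delta
    theta = theta - a_k * g                    # descent step along the estimate

print(theta)  # close to the minimizer [0, 0]
```

Note that a constant offset cancels in the two-sided difference; the analyses discussed above concern the harder case in which the bias varies with the parameters, so it survives the difference and shifts the stationary point.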

The ground state of a free-fermionic chain with inhomogeneous hoppings at half-filling can be mapped into the Dirac vacuum on a static curved space-time, which presents exactly homogeneous occupations due to particle-hole symmetry. Yet, far from half-filling we observe density modulations and depletion effects. The system can be described by a 1D Schr\"odinger equation on a different static space-time, with an effective potential that accounts for the depleted regions. We provide a semiclassical expression for the single-particle modes and the density profiles associated with different hopping patterns and filling fractions. Moreover, we show that the depletion effects can be compensated for all filling fractions by adding a chemical potential proportional to the hoppings. Interestingly, we can obtain exactly the same density profiles on a homogeneous chain if we introduce a chemical potential that is inverse to the hopping intensities, even though the ground state differs from the original one.

We investigate the Quantum Zeno Effect in spin-1/2, spin-1 and spin-3/2 open quantum systems undergoing Rabi oscillations. The systems interact with an environment designed to perform continuous measurements of an observable, driving the systems stochastically towards one of the eigenstates of the corresponding operator. The system-environment coupling constant represents the strength of the measurement. Stochastic quantum trajectories are generated by unravelling a Markovian Lindblad master equation using the quantum state diffusion formalism. This is regarded as a better representation of system behaviour than the averaged evolution, since the latter can mask the effect of measurement. Complete positivity is maintained, so the trajectories can be considered physically meaningful. Increasing the measurement strength leads to greater dwell by the system in the vicinity of the eigenstates of the measured observable and lengthens the time taken by the system to return to that eigenstate, thus demonstrating the Quantum Zeno Effect. For very strong measurement, the Rabi oscillations develop into randomly occurring near-instantaneous jumps between eigenstates. The stochastic measurement dynamics compete with the intrinsic, deterministic quantum dynamics of the system, each attempting to drive the system through Hilbert space in different ways. As such, the trajectories followed by the quantum system depend heavily on the measurement strength, which, besides slowing down and adding noise to the Rabi oscillations, changes the paths taken in spin phase space from a circular precession into elaborate figures-of-eight.
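The unravelling described above can be sketched with a minimal Euler-Maruyama integration of the quantum state diffusion equation for the spin-1/2 case (illustrative: parameters are assumed, and a real Wiener increment is used in place of the complex one of the full QSD formalism):

```python
import numpy as np

# QSD trajectory for a spin-1/2 driven by H = (Omega/2) sigma_x under continuous
# measurement of sigma_z, Lindblad operator L = sqrt(k) sigma_z.
rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qsd_trajectory(omega=1.0, k=0.5, dt=1e-3, steps=5000):
    H = 0.5 * omega * sx
    L = np.sqrt(k) * sz
    psi = np.array([1.0, 0.0], dtype=complex)   # start in a sigma_z eigenstate
    zs = []
    for _ in range(steps):
        ell = (psi.conj() @ L @ psi).real        # <L> (real: L is Hermitian)
        drift = (-1j * H + ell * L
                 - 0.5 * (L.conj().T @ L)
                 - 0.5 * ell**2 * np.eye(2)) @ psi
        dxi = rng.normal(0.0, np.sqrt(dt))       # Wiener increment (simplified)
        psi = psi + drift * dt + (L - ell * np.eye(2)) @ psi * dxi
        psi = psi / np.linalg.norm(psi)          # renormalise after Euler step
        zs.append((psi.conj() @ sz @ psi).real)
    return np.array(zs)

z = qsd_trajectory()
print(z[-1])
```

Raising `k` in this sketch makes the trajectory dwell near ⟨σ_z⟩ = ±1 with rapid jumps between them, which is the qualitative Zeno behaviour discussed above.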

It is known that mixed quantum states are highly entropic states of imperfect knowledge (i.e., incomplete information) about a quantum system, while pure quantum states are states of perfect knowledge (i.e., complete information) with vanishing von Neumann entropy. In this paper, we propose an information geometric theoretical construct to describe and, to a certain extent, understand the complex behavior of evolutions of quantum systems in pure and mixed states. The comparative analysis is probabilistic in nature: it uses a complexity measure that relies on a temporal averaging procedure along with a long-time limit, and it is limited to analyzing expected geodesic evolutions on the underlying manifolds. More specifically, we study the complexity of geodesic paths on the manifolds of single-qubit pure and mixed quantum states equipped with the Fubini-Study metric and the Sjoqvist metric, respectively. We analytically show that the evolution of mixed quantum states in the Bloch ball is more complex than the evolution of pure states on the Bloch sphere. We also verify that the ranking based on our proposed measure of complexity, a quantity that represents the asymptotic temporal behavior of an averaged volume of the region explored on the manifold during the evolution of the systems, agrees with the geodesic length-based ranking. Finally, focusing on geodesic lengths and curvature properties in manifolds of mixed quantum states, we observe a softening of the complexity on the Bures manifold compared to the Sjoqvist manifold.

Spectral and time multiplexing are currently explored to generate large multipartite quantum states of light for quantum technologies. In the continuous-variable approach, the deterministic generation of large entangled states demands the generation of a large number of squeezed modes. Here, we demonstrate the simultaneous generation of 21 squeezed spectral modes at 156 MHz. We exploit the full repetition rate and the ultrafast shaping of a femtosecond light source to combine, for the first time, frequency and time multiplexing in multimode squeezing. This paves the way to the implementation of multipartite entangled states that are both scalable and fully reconfigurable.

We consider a massless Dirac field in $1+1$ dimensions, and compute the Tomita-Takesaki modular conjugation corresponding to the vacuum state and a generic multicomponent spacetime region. We do it by analytic continuation from the modular flow, which was computed recently. We use our result to discuss the validity of Haag duality in this model.

Non-Hermitian quantum systems have recently attracted much attention, both theoretically and experimentally. However, results based on the single-particle picture may not suffice to understand the properties of non-Hermitian many-body systems. How the properties of a quantum many-body system, especially its phase transitions, are affected by non-Hermiticity remains unclear. Here we study a non-Hermitian quantum contact process (QCP) model, whose effective Hamiltonian is derived from the Lindbladian master equation. We show that there is a continuous phase transition induced by non-Hermiticity in the QCP. We also determine the critical exponents $\beta$ of the order parameter and $\gamma$ of the susceptibility, and study the correlations and entanglement near the phase transition. We observe that the order parameter and susceptibility display infinite singularities even for finite-size systems, since non-Hermiticity endows the many-body system with singular behaviour different from that of classical phase transitions. Moreover, our results show that the phase transition has no counterpart in the Hermitian case and belongs to a completely different universality class.

Operator size growth describes the scrambling of operators in quantum dynamics and stands out as an essential physical concept for characterizing quantum chaos. Important as it is, a scheme for directly measuring operator size on a quantum computer has been absent. Here, we propose a quantum algorithm for directly measuring the operator size and its distribution based on Bell measurements. The algorithm is verified with spin chains, and the effects of Trotterization error and quantum noise are analyzed. It is revealed that saturation of operator size growth can be due to quantum chaos itself or a consequence of quantum noise, which makes it difficult to distinguish quantum integrable from chaotic systems on noisy quantum processors. Nevertheless, we find that error mitigation effectively reduces the influence of noise, restoring the distinguishability of quantum chaotic systems. Our work provides a feasible protocol for investigating quantum chaos on noisy quantum computers by measuring operator size growth.
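For intuition, the operator-size distribution itself can be computed classically by brute force for tiny systems (an illustrative sketch with a made-up two-qubit example; the point of the algorithm above is to obtain this distribution on hardware via Bell measurements instead):

```python
import numpy as np
from itertools import product

# Operator size of a Pauli string = number of non-identity sites; the size
# distribution P(s) collects the squared Pauli coefficients of the operator.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

def size_distribution(O, n):
    """P(s): normalized weight of size-s Pauli strings in the decomposition of O."""
    P = np.zeros(n + 1)
    for labels in product("IXYZ", repeat=n):
        M = np.array([[1.0 + 0j]])
        for l in labels:
            M = np.kron(M, paulis[l])
        c = np.trace(M.conj().T @ O) / 2**n          # Pauli coefficient
        P[sum(l != "I" for l in labels)] += abs(c) ** 2
    return P / P.sum()

# Example: X on qubit 0, Heisenberg-evolved under H = Z (x) Z, grows onto
# qubit 1: O(t) = cos(2t) X(x)I - sin(2t) Y(x)Z.
t = np.pi / 8
H = np.kron(Z, Z)
U = np.cos(t) * np.eye(4) - 1j * np.sin(t) * H       # exp(-iHt), since H^2 = I
O_t = U.conj().T @ np.kron(X, I2) @ U                # Heisenberg evolution
P = size_distribution(O_t, 2)
print(P)  # ~[0, 0.5, 0.5] at t = pi/8
```

The brute force costs $4^n$ terms, which is exactly why a direct measurement protocol is needed at scale.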

The quantum dimer and loop models have attracted great attention, partly because of the fundamental importance of the novel phases and phase transitions emerging in these prototypical constrained lattice models, and partly due to their intimate relevance to ongoing experiments on Rydberg atom arrays, in which the blockade mechanism naturally enforces the local constraint. Here we show, by means of the sweeping cluster quantum Monte Carlo method, the complete ground-state phase diagram of the fully packed quantum loop model on the square lattice. We find that between the lattice nematic (LN) phase with strong dimer attraction and the staggered phase (SP) with strong dimer repulsion, there emerges a resonating plaquette (RP) phase with off-diagonal translational symmetry breaking. This novel phase is separated from the LN phase via a first-order transition and from the SP by the famous Rokhsar-Kivelson point. Our renormalization group analysis reveals the different flow directions towards the different symmetry-breaking phases, fully consistent with the order parameter histograms in Monte Carlo simulations. The realization of our phase diagram in Rydberg experiments is proposed.

Quantum metrology aims to use quantum resources to improve the precision of measurement, and quantum criticality has been put forward as a novel and efficient resource. Generally, however, protocols for criticality-based quantum metrology are analyzed in the absence of decoherence. In this paper, we address whether the divergent behavior of the inverted variance when approaching the quantum phase transition (QPT) is realizable in the presence of noise. Taking the quantum Rabi model (QRM) as an example, we obtain an analytical result for the inverted variance. We show that the inverted variance may be convergent in time due to the noise. When approaching the critical point, the maximum inverted variance demonstrates a power-law increase with exponent -1.2, whose absolute value is smaller than that of the noise-free case, namely 2. We also observe a power-law dependence of the maximum inverted variance on the relaxation rate and the temperature. Since the precision of the metrology is very sensitive to the noise, as a remedy we propose performing a squeezing operation on the initial state to improve the precision under decoherence. In addition, we investigate criticality-based metrology under the influence of two-photon relaxation. Contrary to the single-photon case, the quantum dynamics of the inverted variance shows completely different behavior: it does not oscillate with the same frequency with respect to the rescaled time for different dimensionless coupling strengths. Strikingly, although the maximum inverted variance still manifests a power-law dependence on the energy gap, the exponent is positive and depends on the dimensionless coupling strength. This observation implies that criticality may not enhance but rather weaken the precision in the presence of two-photon relaxation, a behavior well described by the non-linearity introduced by the two-photon relaxation.

An inertial sensor design is proposed in this paper to achieve high sensitivity and large dynamic range in the sub-Hz frequency regime. High acceleration sensitivity is obtained by combining optical cavity readout systems with monolithically fabricated mechanical resonators. A high-sensitivity heterodyne interferometer simultaneously monitors the test mass with an extensive dynamic range for low-stiffness resonators. The bandwidth is tuned by optical feedback cooling to the test mass via radiation pressure interaction using an intensity-modulated laser. The transfer gain of the feedback system is analyzed to optimize system parameters towards the minimum cooling temperature that can be achieved. To practically implement the inertial sensor, we propose a cascaded cooling mechanism to improve cooling efficiency while operating at low optical power levels. The overall system layout presents an integrated design that is compact and lightweight.

Quantum simulation is one of the central disciplines demonstrating the power of quantum computing. In recent years, the theoretical framework of quantum superchannels has been developed and widely applied as an extension of quantum channels. In this work, we study the quantum circuit simulation of superchannels. We develop a quantum superchannel simulation algorithm based on convex decomposition into sums of extreme superchannels, which reduces the circuit cost. We demonstrate the algorithm by numerically simulating qubit superchannels with high accuracy, making it applicable to current experimental platforms.

A framework for quantum simulations of real-time weak decays of hadrons and nuclei in a 2-flavor lattice theory in one spatial dimension is presented. A single generation of the Standard Model is found to require 16 qubits per spatial lattice site after mapping to spin operators via the Jordan-Wigner transformation. Both quantum chromodynamics and flavor-changing weak interactions are included in the dynamics, the latter through four-Fermi effective operators. Quantum circuits which implement time evolution in this lattice theory are developed and run on Quantinuum's H1-1 20-qubit trapped ion system to simulate the $\beta$-decay of a single baryon on one lattice site. These simulations include the initial state preparation and are performed for both one and two Trotter time steps. The potential intrinsic error-correction properties of this type of lattice theory are discussed and the leading lattice Hamiltonian required to simulate $0\nu\beta\beta$-decay of nuclei induced by a neutrino Majorana mass term is provided.

Recently, Chen and Movassagh proposed the quantum Merkle tree, a quantum analogue of the well-known classical Merkle tree. It gives a succinct verification protocol for quantum state commitment. Although they only proved security against semi-honest provers, they conjectured its general security.

Using the proposed quantum Merkle tree, they gave a quantum analogue of Kilian's succinct argument for NP, which is based on probabilistically checkable proofs (PCPs). A nice feature of Kilian's argument is that it can be extended to a zero-knowledge succinct argument for NP, if the underlying PCP is zero-knowledge. Hence, a natural question is whether one can also make the quantum succinct argument by Chen and Movassagh zero-knowledge as well.

This work makes progress on this problem. We generalize a recent result of Broadbent and Grilo to show that any local quantum verifier can be made simulable with a minor reduction in completeness and soundness. Roughly speaking, a local quantum verifier is simulable if, in the yes case, the local views of the verifier can be computed without knowing the actual quantum proof; this can be seen as the quantum analogue of classical zero-knowledge PCPs. We hence conjecture that applying the proposed succinct quantum argument of Chen and Movassagh to a simulable local verifier is indeed zero-knowledge.