The scale of quantum mechanical effects in matter is set by Planck's constant, $\hbar$. This represents the quantisation scale for material objects. In this article, we give a simple argument why the quantisation scale for space, and hence for {\it gravity}, cannot be equal to $\hbar$. Indeed, assuming a single quantisation scale for both matter and geometry leads to the `worst prediction in physics', namely, the huge difference between the observed and predicted vacuum energies. Conversely, assuming a different quantum of action for geometry, $\beta \neq \hbar$, allows us to recover the observed density of the Universe. Thus, by measuring its present-day expansion, we may in principle determine, empirically, the scale at which the geometric degrees of freedom must be quantised.

We analyze the difference between ex ante and ex post equilibria in classical games played with the assistance of a nonlocal (quantum or no-signaling) resource. In physics, playing these games is known as performing bipartite Bell-type experiments. By analyzing the Clauser-Horne-Shimony-Holt game, we obtain a constructive procedure for generating two-person Bayesian games with a nonlocal (i.e. no-signaling, and, in many cases, quantum) advantage. Most games of this kind known from the literature can be constructed along this principle, and they share the property that their relevant ex ante equilibria are ex post equilibria as well. We introduce a new type of game, based on the Bell theorem of V\'ertesi and Bene, which does not have the latter property: its ex ante and ex post equilibria differ.
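As a concrete point of reference for the nonlocal advantage mentioned above, the following self-contained numpy sketch (ours, not taken from the paper) evaluates the standard optimal quantum strategy for the CHSH game on the maximally entangled state, recovering the Tsirelson value $2\sqrt{2}$ and the winning probability $\approx 0.854$, versus $0.75$ for the best classical strategy.

```python
import numpy as np

# Illustrative check of the standard CHSH game values (textbook
# construction, not the Vertesi-Bene game discussed in the abstract).
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

def obs(theta):
    """Spin observable cos(theta) Z + sin(theta) X."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def corr(a, b):
    """Correlator <Phi+| A(a) x B(b) |Phi+>, which equals cos(a - b)."""
    return phi @ np.kron(obs(a), obs(b)) @ phi

# Optimal quantum measurement angles for the CHSH expression
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)

p_quantum = 0.5 + S / 8    # CHSH game winning probability
p_classical = 0.75         # best local deterministic strategy

print(S, p_quantum)  # ~2.828, ~0.854
```

The gap between `p_quantum` and `p_classical` is exactly the kind of nonlocal advantage the constructive procedure in the abstract builds Bayesian games around.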

We present a spin-adapted time-dependent coupled cluster singles and doubles model for the molecular response to a sequence of ultrashort laser pulses. The implementation is used to calculate the electronic response to a valence-exciting pump pulse, and a subsequent core-exciting probe pulse. We assess the accuracy of the integration procedures used in solving the dynamic coupled cluster equations, in order to find a compromise between computational cost and accuracy. The transient absorption spectrum of lithium fluoride is calculated for various delays of the probe pulse with respect to the pump pulse. We observe that the transient probe absorption oscillates with the pump-probe delay, an effect that is attributed to the interference of states in the pump-induced superposition.

Quantum information processing often requires the preparation of arbitrary quantum states, such as all the states on the Bloch sphere for two-level systems. While numerical optimization can prepare individual target states, it cannot find general solutions that work for a large class of states in more complicated quantum systems. Here, we demonstrate global quantum control by preparing a continuous set of states with deep reinforcement learning. The protocols are represented using neural networks, which automatically group them into similar types; this grouping could be useful for identifying classes of protocols and extracting physical insights. As an application, we generate arbitrary superposition states for the electron spin in complex multi-level nitrogen-vacancy centers, revealing classes of protocols characterized by specific preparation timescales. Our method could help improve the control of near-term quantum computers, quantum sensing devices and quantum simulations.

Fiber optic communication is the backbone of our modern information society, offering high bandwidth, low loss, weight, size and cost, as well as immunity to electromagnetic interference. Microwave photonics lends these advantages to electronic sensing and communication systems but, unlike the field of nonlinear optics, electro-optic devices have so far required classical modulation fields whose variance is dominated by electronic or thermal noise rather than by quantum fluctuations. Here we present a cavity electro-optic transceiver operating in a millikelvin environment with a mode occupancy as low as 0.025 $\pm$ 0.005 noise photons. Our system is based on a lithium niobate whispering gallery mode resonator, resonantly coupled to a superconducting microwave cavity via the Pockels effect. For the highest continuous wave pump power of 1.48 mW we demonstrate bidirectional single-sideband conversion of X band microwave to C band telecom light with a total (internal) efficiency of 0.03 % (0.7 %) and an added output conversion noise of 5.5 photons. The high bandwidth of 10.7 MHz, combined with the observed very slow heating rate of 1.1 noise photons s$^{-1}$, puts quantum limited pulsed microwave-optics conversion within reach. The presented device is versatile and compatible with superconducting qubits, which might open the way for fast and deterministic entanglement distribution between microwave and optical fields, for optically mediated remote entanglement of superconducting qubits, and for new multiplexed cryogenic circuit control and readout strategies.

Quantum computers can in principle solve certain problems exponentially more quickly than their classical counterparts. We have not yet reached the advent of useful quantum computation, but when we do, it will affect nearly all scientific disciplines. In this review, we examine how current quantum algorithms could revolutionize computational biology and bioinformatics. There are potential benefits across the entire field, from the ability to process vast amounts of information and run machine learning algorithms far more efficiently, to algorithms for quantum simulation that are poised to improve computational calculations in drug discovery, to quantum algorithms for optimization that may advance fields from protein structure prediction to network analysis. However, these exciting prospects are susceptible to "hype", and it is also important to recognize the caveats and challenges in this new technology. Our aim is to introduce the promise and limitations of emerging quantum computing technologies in the areas of computational molecular biology and bioinformatics.

Running quantum programs is fraught with challenges on today's noisy intermediate-scale quantum (NISQ) devices. Many of these challenges originate from error characteristics stemming from rapid decoherence, noise during measurement, qubit connections, crosstalk, the qubits themselves, and transformations of qubit state via gates. Not only are qubits not "created equal", but their noise levels also change over time. IBM is said to calibrate their quantum systems once per day and reports noise levels (errors) at the time of such calibration. This information is subsequently used to map circuits to higher-quality qubits and connections up to the next calibration point.

This work provides evidence that there is room for improvement over this daily calibration cycle. It contributes a technique to measure noise levels (errors) related to qubits immediately before executing one or more sensitive circuits, and shows that just-in-time noise measurements benefit late physical qubit mappings. With this just-in-time recalibrated transpilation, the fidelity of results is improved over IBM's default mappings, which use only the daily calibrations. The framework assesses two major sources of noise, namely readout errors (measurement errors) and two-qubit gate/connection errors. Experiments indicate that the accuracy of circuit results improves by 3-304% on average, and by up to 400%, with on-the-fly circuit mappings based on error measurements taken just prior to application execution.
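The mapping idea described above can be sketched as follows. This is a toy illustration with made-up error numbers and a simplified success-probability score; it is not IBM's calibration data, Qiskit's transpiler cost model, or the authors' actual framework.

```python
# Hedged sketch: given just-in-time measured readout and two-qubit gate
# errors, pick the connected physical qubit pair that maximizes a simple
# estimated success probability. All figures below are invented.

def pair_score(pair, readout_err, cx_err):
    """Estimated success of one two-qubit gate plus measuring both qubits."""
    q0, q1 = pair
    return (1 - cx_err[pair]) * (1 - readout_err[q0]) * (1 - readout_err[q1])

def best_pair(readout_err, cx_err):
    """Choose the connected pair with the highest estimated success."""
    return max(cx_err, key=lambda p: pair_score(p, readout_err, cx_err))

# Toy just-in-time measurements (error fractions, purely illustrative)
readout_err = {0: 0.02, 1: 0.08, 2: 0.03, 3: 0.05}
cx_err = {(0, 1): 0.015, (1, 2): 0.030, (2, 3): 0.012}

print(best_pair(readout_err, cx_err))  # (2, 3)
```

Note that the pair with the best gate error is not always chosen: the score trades gate quality against readout quality, which is the kind of decision daily calibration data may get wrong once noise has drifted.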

We study the charging process of open quantum batteries mediated by a common dissipative environment in two different scenarios. In the first case, we consider a two-qubit system as a quantum charger-battery model, in which the battery can charge properly under non-Markovian dynamics in the strong coupling regime, without any external power or any direct interaction with the charger, i.e., wireless battery charging occurs. In fact, the environment plays a major role in the charging of the battery, whereas this does not happen in the weak coupling regime. In the second scenario, we show the effect of individual and collective spontaneous emission rates on the charging process of quantum batteries by considering a two-qubit system under Markovian dynamics such that each qubit can be charged through an external field. Contrary to previous claims for individual environments, our results demonstrate that the battery can be satisfactorily charged under both non-Markovian and Markovian dynamics. We also present a robust battery by taking into account subradiant states and an intermediate regime. Moreover, we propose an experimental setup to explore the ergotropy in the first scenario.

We investigate the performance of a quantum battery exposed to local Markovian and non-Markovian dephasing noises. The battery is initially prepared as the ground state of a one-dimensional transverse XY model with open boundary condition and is charged (discharged) via interactions with local bosonic reservoirs. We show that in the transient regime, a quantum battery (QB) can store energy faster when it is affected by local phase-flip or bit-flip Markovian noise than when there is no noise in the system. In both the charging and discharging processes, we report an enhancement in work output when all the spins are affected by a non-Markovian Ohmic bath, both in the transient and steady-state regimes, thereby showing a counter-intuitive advantage of decoherence in QBs. Among all the system parameters, we find that if the system is prepared in the paramagnetic phase and is affected by bit-flip noise, we obtain the maximum improvement in both the Markovian and non-Markovian cases. Moreover, we show that the benefit due to noise persists even when the initial state is prepared at a moderate temperature.

In this work we consider two complex scalar fields distinguished by their masses coupled to constant background electric and magnetic fields in the $(3+1)$-dimensional Minkowski spacetime and subsequently investigate a few measures quantifying the quantum correlations between the created particle-antiparticle Schwinger pairs. Since the background magnetic field itself cannot cause the decay of the Minkowski vacuum, our chief motivation here is to investigate the interplay between the effects due to the electric and magnetic fields. We start by computing the entanglement entropy for the vacuum state of a single scalar field. Second, we consider some maximally entangled states for the two-scalar field system and compute the logarithmic negativity and the mutual information. Qualitative differences of these results pertaining to the charge content of the states are pointed out. Based upon our analyses, we make some speculations on the effect of a background magnetic field on the well known phenomenon of degradation of entanglement between states in an accelerated frame, for charged quantum fields.

Noisy, intermediate-scale quantum computers come with intrinsic limitations in terms of the number of qubits (circuit "width") and decoherence time (circuit "depth") they can have. Here, for the first time, we demonstrate a recently introduced method that breaks a circuit into smaller subcircuits or fragments, and thus makes it possible to run circuits that are either too wide or too deep for a given quantum processor. We investigate the behavior of the method on one of IBM's 20-qubit superconducting quantum processors with various numbers of qubits and fragments. We build noise models that capture decoherence, readout error, and gate imperfections for this particular processor. We then carry out noisy simulations of the method in order to account for the observed experimental results. We find an agreement within 20% between the experimental and the simulated success probabilities, and we observe that recombining noisy fragments yields overall results that can outperform the results without fragmentation.
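Circuit-cutting schemes of the kind described above rest on decomposing the state that crosses a cut wire into a sum over Pauli measure-and-prepare terms, so that each fragment can be run separately and the results recombined classically. The following minimal numpy check of that underlying single-qubit identity is our own toy, not the paper's full fragmentation and recombination procedure:

```python
import numpy as np

# Identity behind "wire cutting": any single-qubit state is recovered
# from its Pauli expectation values,
#   rho = (1/2) * sum_P Tr(P rho) P,   P in {I, X, Y, Z}.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())   # state flowing across the cut wire

# Fragment 1 measures Tr(P rho); fragment 2 re-prepares the P terms.
rebuilt = sum(np.trace(P @ rho) * P for P in (I, X, Y, Z)) / 2
print(np.allclose(rebuilt, rho))  # True
```

In a real cut, the traces come from running the upstream fragment with extra measurements and the Pauli terms from running the downstream fragment with extra state preparations, which is why the method trades circuit width or depth for additional circuit executions.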

The character of the evolution of an open quantum system is often encoded in the correlation function of the environment or, equivalently, in the spectral density function of the interaction. When the environment is heterogeneous, e.g. consists of several independent sub-environments with different spectral functions, the evolution can exhibit distinctive features that allow control by adjusting the properties of one of the sub-environments. We investigate the non-Markovian evolution of a two-level system (qubit) under the influence of three independent decoherence channels: two of them are classical in nature and originate from interaction with a stochastic field, while the third is a quantum channel formed by interaction with a bosonic bath. By modifying the spectral densities of the channels, we study their impact on the steady states of the two-level system, the evolution of its density matrix, and the equilibrium emission spectra, noting the inaccuracy of the rotating-wave approximation of the bath channel in comparison with the full interaction.

We argue that in a framework for emergent quantum mechanics, the weak equivalence principle is a consequence of a general property of functions defined over high dimensional configuration spaces. Furthermore, as a consequence of the emergent framework and the properties that we assume for the fundamental dynamics, it is argued that gravitational interaction must be a classical, emergent interaction.

We obtain the first constant-round post-quantum multi-party computation protocol for general classical functionalities in the plain model, with security against malicious corruptions. We assume mildly super-polynomial quantum hardness of learning with errors (LWE), and quantum polynomial hardness of an LWE-based circular security assumption. Along the way, we also construct the following protocols that may be of independent interest.

(1) Constant-round zero-knowledge against parallel quantum verifiers from quantum polynomial assumptions.

Here, we develop a novel parallel no-cloning non-black-box simulation technique. This uses as a starting point the recently introduced no-cloning technique of Bitansky and Shmueli (STOC 2020) and Ananth and La Placa (ePrint 2019), which in turn builds on the classical non-black-box technique of Bitansky, Khurana and Paneth (STOC 2019). Our approach relies on a new technical tool, spooky encryption for relations computable by quantum circuits, which we also construct.

(2) Constant-round post-quantum non-malleable commitments from mildly super-polynomial quantum hardness of LWE.

This is the first construction of post-quantum non-malleable commitments in the plain model, and is obtained by transforming the construction of Khurana and Sahai (FOCS 2017) to obtain post-quantum security.

We achieve quantum security by building a new straight-line non-black-box simulator against parallel verifiers that does not clone the adversary's state. This technique may also be relevant to the classical setting.

This document is meant as a pedagogical introduction to the modern language used to talk about quantum theory, especially in the field of quantum information. It assumes that the reader has taken a first traditional course on quantum mechanics, and is familiar with the concept of Hilbert space and elementary linear algebra. As in the popular textbook on quantum information by Nielsen and Chuang, we introduce the generalised concept of states (density matrices), observables (POVMs) and transformations (channels), but we also characterise these structures from an algebraic standpoint, which provides many useful technical tools, and clarity as to their generality. This approach also makes it manifest that quantum theory is a direct generalisation of probability theory, and provides a unifying formalism for both fields. The focus on finite-dimensional systems allows for a self-contained presentation which avoids many of the technicalities inherent to the more general $C^*$-algebraic approach, while being appropriate for the quantum information literature.

Recently, there has been increased interest in studying quantum entanglement and quantum coherence. Since both of these properties are attributed to the existence of quantum superposition, it would be useful to determine whether some type of correlation between them exists. Hence, the purpose of this paper is to explore this correlation in several systems with different types of anisotropy. The focus will be on XY spin chains with the Dzyaloshinskii-Moriya interaction, and the nature of the mentioned connection will be explored using the quantum renormalization group method.

The read-out of a microwave qubit state is performed using an amplification chain that enlarges the quantum state to a signal detectable with a classical measurement apparatus. However, at what point in this process do we really `measure' the quantum state? In order to investigate whether the `measurement' takes place in the amplification chain, we propose to construct a microwave interferometer with a parametric amplifier added to each of its arms. When the interferometer is fed with single photons, the visibility depends on the gain of the amplifiers and on whether a measurement collapse has taken place during the amplification process. We calculate the interference visibility as given by standard quantum mechanics as a function of gain, insertion loss and temperature, and find a magnitude of $1/3$ in the limit of large gain, without taking losses into account. This number reduces to $0.26$ if the insertion loss of the amplifiers is $2.2$ dB at a temperature of $50$ mK. We show that if the wave function collapses within the interferometer, we will measure a reduced visibility compared to the prediction of standard quantum mechanics once this collapse process sets in.

The clustering property of an equilibrium bipartite correlation is one of the most general thermodynamic properties in non-critical many-body quantum systems. Herein, we consider the thermalization properties of a class of systems exhibiting the clustering property. We investigate two regimes, namely, regimes of high and low density of states corresponding to high and low energy regimes, respectively. We show that the clustering property is connected to several properties of eigenstate thermalization through the density of states. Remarkably, eigenstate thermalization is obtained in the low-energy regime with a sparse density of states, which is typically seen in gapped systems. For the high-energy regime, we demonstrate the ensemble equivalence between microcanonical and canonical ensembles even for a subexponentially small energy shell with respect to the system size, which eventually leads to the weak version of eigenstate thermalization.

Quantum many-body systems have been extensively studied from the perspective of quantum technology, and conversely, critical phenomena in such systems have been characterized by operationally relevant resources like entanglement. In this paper, we investigate the robustness of magic (RoM), the resource in magic-state-injection-based quantum computation schemes, in the context of the transverse field anisotropic XY model. We show that the factorizable ground state in the symmetry-broken configuration is composed of an enormous number of highly magical $H$ states. We find the existence of a point very near the quantum critical point at which the magic contained explicitly in the correlation between two distant qubits attains a sharp maximum. Unlike bipartite entanglement, this persists over very long distances, capturing the presence of long-range correlation near the phase transition. We derive scaling laws and extract the corresponding exponents around criticality. Finally, we study the effect of temperature on two-qubit RoM and show that it reveals a crossover between the dominance of quantum and thermal fluctuations.

The CSL model predicts a progressive breakdown of the quantum superposition principle, with a noise randomly driving the state of the system towards a localized one, thus accounting for the emergence of a classical world within a quantum framework. In the original model the noise is supposed to be white, but since white noises do not exist in nature, it becomes relevant to identify some of its spectral properties. Experimental data set an upper bound on its frequencies, while in this paper we bound it from below. We do so in two ways: by considering a 'minimal' measurement setup, requiring that the collapse is completed within the measurement time; and in a measurement modeling-independent way, by requiring that the fluctuations average to zero before the measurement time.