We have witnessed an impressive advancement in computer performance over the last couple of decades. One would therefore expect the benefits of this technological advancement to trickle down to the domain of computational simulation of multispin magnetic resonance spectra, but that has not quite been the case. Though significant progress has been made, chiefly by Kuprov and collaborators, one cannot help but observe that there is still much to be done. In our view, the difficulties are not to be entirely ascribed to technology but may mostly stem from the inadequacy of the conventional theoretical tools commonly used. We introduce in this paper a set of theoretical tools that can be employed in the description and efficient simulation of multispin magnetic resonance spectra. The so-called Holstein-Primakoff transformation lies at the heart of these, and provides a very close connection to discrete mathematics (from graph theory to number theory). The aim of this paper is to provide a reasonably comprehensive and easy-to-understand introduction to the Holstein-Primakoff (HP) transformation (and the related bosons) for researchers and students working in the field of magnetic resonance. We also focus on how, through the use of the HP transformation, many challenging computing problems encountered in multispin systems can be reformulated as enumerative combinatorics problems. This, one could say, is the HP transformation's primary forte. By way of illustration, our main concern here will be the use of HP bosons to characterize and eigendecompose a class of multispin Hamiltonians often employed in high-resolution magnetic resonance.
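For readers new to the HP transformation, a minimal numerical check can make the construction concrete. The sketch below is our own illustration, not taken from the paper; it assumes the common convention $S_z = s - a^\dagger a$, $S_+ = \sqrt{2s - a^\dagger a}\, a$, builds the operators for spin $s = 1$ in a truncated Fock basis, and verifies the su(2) commutation relations.

```python
import numpy as np

# Illustrative check of the Holstein-Primakoff (HP) transformation for spin s = 1.
# Convention assumed here: S_z = s - n,  S_+ = sqrt(2s - n) a,  with n = a^dag a.
s = 1
dim = 2 * s + 1                # physical spin subspace: n = 0, 1, ..., 2s

# Bosonic annihilation operator truncated to the physical subspace.
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
n = np.diag(np.arange(dim)).astype(float)

Sz = s * np.eye(dim) - n
Sp = np.diag(np.sqrt(2 * s - np.arange(dim))) @ a   # sqrt(2s - n) a
Sm = Sp.conj().T

# The HP bosons must reproduce the su(2) algebra on the physical subspace.
assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)
assert np.allclose(Sz @ Sp - Sp @ Sz, Sp)

print(np.round(Sp, 3))   # matches the standard spin-1 raising operator
```

The same construction extends to any spin $s$ by enlarging `dim`; only the physical subspace $n \le 2s$ carries the spin representation.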

A central problem in biophysics and computational drug design is the accurate modeling of biomolecules. Current molecular dynamics simulation methods can address how a molecule inhibits a cancerous cell signaling pathway, or the role of protein misfolding in neurodegenerative diseases. However, the accuracy of current force fields (interaction potentials) limits the reliability of computer simulations. Since force-field development is fundamentally a quantum chemistry problem, here we discuss developing new force fields using scalable ab initio quantum chemistry calculations on quantum computers. For a list of dipeptides used for local parameterizations, we estimate the required number of qubits to be 1576 to 3808 with the cc-pVTZ(-f) orbital basis and 88 to 276 with active-space reductions. We use the Q# quantum computing chemistry package for our analysis. The estimated count of hundreds of qubits puts pharmaceutical applications of near-term quantum processors in a realistic perspective.

Superconducting circuits consisting of a few low-anharmonicity transmons coupled to readout and bus resonators can perform basic quantum computations. Since the number of qubits in such circuits is limited to a few tens, the qubits can be designed to operate within the dispersive regime, where frequency detunings are much larger than coupling strengths. However, scaling up the number of qubits will bring the circuit out of this regime and invalidate current theories. We develop a formalism that allows us to consistently diagonalize the superconducting-circuit Hamiltonian beyond the dispersive regime. This allows us to study qubit-qubit interactions nonperturbatively; our formalism therefore remains valid and accurate at small, or even negligible, frequency detuning, and thus serves as a theoretical ground for designing qubit characteristics when scaling up the number of qubits in superconducting circuits. We study the most important circuits for single- and two-qubit gates, i.e. a single transmon coupled to a resonator and two transmons sharing a bus resonator. Surprisingly, our formalism allows us to determine circuit characteristics, such as dressed frequencies and Kerr couplings, in closed-form formulas that not only reproduce perturbative results but also extrapolate beyond the dispersive regime and can ultimately reproduce (and even modify) the Jaynes-Cummings results at resonant frequencies.
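As a rough quantitative anchor (our own back-of-the-envelope numbers, not the paper's), the dispersive regime requires $g/\Delta \ll 1$, and standard second-order perturbation theory gives a dispersive shift $\chi \approx g^2\alpha/(\Delta(\Delta+\alpha))$ for a transmon of anharmonicity $\alpha$:

```python
# Illustrative dispersive-regime estimate (sample parameters, not from the paper).
g = 0.100       # qubit-resonator coupling, GHz
delta = 1.500   # qubit-resonator detuning, GHz
alpha = -0.300  # transmon anharmonicity, GHz

ratio = g / delta                       # small parameter of the dispersive expansion
chi_2level = g**2 / delta               # two-level (Jaynes-Cummings) dispersive shift
chi_transmon = g**2 * alpha / (delta * (delta + alpha))  # transmon correction

print(f"g/Delta = {ratio:.3f}")         # ~0.067: dispersive expansion valid
print(f"chi (2-level)  = {1e3 * chi_2level:.2f} MHz")
print(f"chi (transmon) = {1e3 * chi_transmon:.2f} MHz")
```

As the detuning shrinks toward $g$, the expansion parameter approaches unity and such perturbative formulas fail, which is precisely the regime the formalism above is meant to handle.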

We consider the (Rényi) mutual information, $I^{(n)}(A,B) = S^{(n)}_A+S^{(n)}_{B} - S^{(n)}_{A \cup B}$, of distant compact spatial regions $A$ and $B$ in the vacuum state of a free scalar field, where the distance $r$ between $A$ and $B$ is much greater than their sizes $R_{A,B}$. It is known that $I^{(n)}(A,B) \sim C^{(n)}_{AB} \left<0| \phi(r)\phi(0) |0\right>^2$. We obtain a direct expression for $C^{(n)}_{AB}$ for arbitrary regions $A$ and $B$, perform the analytic continuation in $n$, and obtain the mutual information. The direct expression is useful for numerical computation: it allows us to compute $I(A,B)$ directly, without computing $S_A$, $S_B$ and $S_{A \cup B}$ separately, which significantly reduces the amount of computation.
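The defining combination $I = S_A + S_B - S_{A\cup B}$ can be checked on a toy system. The sketch below is our own illustration, unrelated to the field-theoretic computation in the paper; it evaluates the mutual information of a pure entangled two-qubit state, for which $S_{A\cup B} = 0$ and hence $I = 2S_A$.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy (natural log), ignoring zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Pure entangled state |psi> = cos(t)|00> + sin(t)|11>.
t = np.pi / 6
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi)

# Partial traces for a 2x2 bipartition: rho4[a, b, a', b'].
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ijkj->ik', rho4)   # trace over subsystem B
rho_B = np.einsum('jijk->ik', rho4)   # trace over subsystem A

I = entropy(rho_A) + entropy(rho_B) - entropy(rho)
print(I, 2 * entropy(rho_A))   # equal: for a pure global state, I = 2 S_A
```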

We study the protein folding problem on the basis of the quantum approach we proposed recently, considering a model protein chain with nine amino-acid residues. We introduce the concept of distance space and its projections onto an $XY$-plane, together with two characteristic quantities: the compactness of the protein structure and a probability ratio involving the shortest path. Our results not only confirm the fast quantum folding time but also unveil the existence of quantum intelligence hidden behind the choice of protein folding pathways.

A qualitative but formalized representation of microstates is first established quite independently of the quantum mechanical mathematical formalism, exclusively under epistemological-operational-methodological constraints. Then, using this representation as a reference-and-embedding structure, the foundations of a fully intelligible reconstruction of the Hilbert-Dirac formulation of Quantum Mechanics are developed. Inside this reconstruction, the measurement problem, as well as the other major problems raised by the quantum mechanical formalism, dissolves. This is the definitive version of the development expressed in the four previous versions of this work; it closes a long-lasting research effort.

The probabilistic nature of single-photon sources and photon-photon interactions encourages encoding as much quantum information as possible in every photon for the purpose of photonic quantum information processing. Here, by encoding high-dimensional units of information (qudits) in time and frequency degrees of freedom using on-chip sources, we report deterministic two-qudit gates in a single photon with fidelities exceeding 0.90 in the computational basis. Constructing a two-qudit modulo SUM gate, we generate and measure a single-photon state with non-separability between time and frequency qudits. We then employ this SUM operation on two frequency-bin entangled photons, each carrying two 32-dimensional qudits, to realize a four-party high-dimensional Greenberger-Horne-Zeilinger state, occupying a Hilbert space equivalent to that of 20 qubits. Although high-dimensional coding alone is ultimately not scalable for universal quantum computing, our design shows the potential of deterministic optical quantum operations in large encoding spaces for practical and compact quantum information processing protocols.

This paper presents a framework for quantum causal modeling based on interpreting causality as a relation between an observer's probability assignments to hypothetical or counterfactual experiments. The framework is based on the principle of `causal sufficiency': that it should be possible to make inferences about interventions using only the probabilities from a single `reference experiment' plus causal structure in the form of a DAG. This leads to several interesting results: we find that quantum measurements deserve a special status distinct from interventions, and that a special rule is needed for making inferences about what would happen if they were not performed (`un-measurements'). One natural candidate for this rule is found to be an equation of importance to the QBist interpretation of quantum mechanics. We find that the causal structure of quantum systems must have a `layered' structure, and that the model can naturally be made symmetric under reversal of the causal arrows.

Some versions of quantum theory treat wave function collapse as a fundamental physical phenomenon to be described by explicit laws. One motivation is to find a consistent unification of quantum theory and gravity, in which collapse prevents superpositions of space-times from developing. Another is to invoke collapse to explain our perception of definite measurement outcomes. Combining these motivations while avoiding two different collapse postulates seems to require that perceptibly different physical states necessarily create significantly different mass distributions in our organs of perception or brains. Bassi et al. investigated this question in the context of mass density dependent spontaneous collapse models. By analysing the mechanism of visual perception of a few photons in the human eye, they argued that collapse model parameters consistent with known experiment imply that a collapse would take place in the eye within the human perception time of ~100ms, so that a definite state of observing some or no photons would be created from an initial superposition. I reanalyse their arguments, and note a key problem: they treat the relevant processes as though they take place in vacuo, rather than in cytoplasm. This makes a significant difference, since the models imply that superpositions collapse at rates that depend on the difference between the coarse grained mass densities of their components. This increases the required collapse rate, most likely by at least an order of magnitude and plausibly by significantly more. This casts some doubt on the claim that there are collapse model parameters consistent with known experiment that imply collapse times of <~ 100ms within the human eye. A complete analysis would require a very detailed understanding of the physical chemistry and biology of rod cells at microscopic scales.

Bekenstein argued that black holes should have entropy proportional to their areas in order to make black hole physics compatible with the second law of thermodynamics. However, the heuristic picture of Hawking radiation, the creation of pairs of positive- and negative-energy particles, leads to an inconsistency among the first law of black hole mechanics, Bekenstein's argument, and quantum mechanics. In this paper we propose an equation alternative to Bekenstein's from the viewpoint of quantum information, rather than thermodynamics, to resolve this inconsistency without changing Hawking's original proposal. It states that the area of a black hole is proportional to the coherent information, which is minus the conditional entropy, defined only in the quantum regime, from the outside to the positive-energy particles inside it. This hints that negative-energy particles inside a black hole behave as if they have negative entropy. Our result suggests that black holes store pure quantum information, rather than classical information.
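The sign conventions here are easy to check on a toy state: for a maximally entangled pair, the conditional entropy $S(A|B) = S_{AB} - S_B$ is negative, so the coherent information $-S(A|B)$ is positive. A small numerical sketch (ours, purely illustrative):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy (natural log), ignoring zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Bell state (|00> + |11>)/sqrt(2): a stand-in for maximal quantum correlation.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)
rho_B = np.einsum('iaib->ab', rho_AB.reshape(2, 2, 2, 2))  # trace over A

cond_ent = entropy(rho_AB) - entropy(rho_B)   # S(A|B), negative here
coherent_info = -cond_ent
print(coherent_info)   # ln 2: positive only because the state is genuinely quantum
```

Classically, conditional entropy can never be negative, which is why the coherent information singles out the quantum regime, as the abstract emphasizes.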

The paradigm of Schr\"{o}dinger's cat illustrates how quantum states preclude the assignment of definite properties to a macroscopic object (realism). In this work we develop a method to investigate the indefiniteness of cat states using currently available cold atom technology. The method we propose uses the observation of a statistical distribution to demonstrate the macroscopic distinction between dead and alive states, and uses the determination of the interferometric sensitivity (Fisher information) to detect the indefiniteness of the cat's vital status. We show how combining the two observations can provide information about the structure of the quantum state without the need for full quantum state tomography, and propose a measure of the indefiniteness based on this structure. We test this method using a cat state proposed by Gordon and Savage [Phys. Rev. A 59, 4623 (1999)] which is dynamically produced from a coherent state. As a control, we consider a set of states produced using the same dynamical procedure acting on an initial thermal distribution. Numerically simulating our proposed method, we show that as the temperature of this initial state is increased, the produced state undergoes a quantum to classical crossover where the indefiniteness of the cat's vital status is lost, while the macroscopic distinction between dead and alive states of the cat is maintained.
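The role of the Fisher information as a witness of indefiniteness can be illustrated with a textbook example (ours, not the Gordon-Savage state used in the paper): for a pure state probed by a unitary $e^{-i\theta \hat G}$, the quantum Fisher information is $F = 4\,\mathrm{Var}(\hat G)$, and taking $\hat G = \hat x$, a cat state's two macroscopically separated components give a variance, hence an $F$, far exceeding that of a single coherent state.

```python
import numpy as np

# Illustrative (not from the paper): QFI F = 4 Var(G) for pure states,
# with generator G = x = (a + a^dag)/sqrt(2).
N = 40            # Fock-space truncation (ample for alpha = 2)
alpha = 2.0

def coherent(alpha, N):
    """Coherent-state amplitudes c_n = e^{-|a|^2/2} a^n / sqrt(n!), built iteratively."""
    c = np.zeros(N)
    c[0] = np.exp(-abs(alpha)**2 / 2)
    for n in range(1, N):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.T) / np.sqrt(2)

coh = coherent(alpha, N)
cat = coherent(alpha, N) + coherent(-alpha, N)   # even cat, unnormalized
cat /= np.linalg.norm(cat)

def qfi(psi):
    ex = psi @ x @ psi
    ex2 = psi @ x @ x @ psi
    return 4 * (ex2 - ex**2)

print(qfi(coh))   # = 2 for any coherent state (Var x = 1/2)
print(qfi(cat))   # ~ 4(2 alpha^2 + 1/2): macroscopically enhanced sensitivity
```

The enhanced sensitivity comes entirely from the coherence between the two branches; an incoherent mixture of $|\alpha\rangle$ and $|-\alpha\rangle$ has the same "dead/alive" distinguishability but no such enhancement, which is the distinction the proposed method probes.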

Quantum algorithms can deliver asymptotic speedups over their classical counterparts. However, there are few cases where a substantial quantum speedup has been worked out in detail for reasonably-sized problems, when compared with the best classical algorithms and taking into account realistic hardware parameters and overheads for fault-tolerance. All known examples of such speedups correspond to problems related to simulation of quantum systems and cryptography. Here we apply general-purpose quantum algorithms for solving constraint satisfaction problems to two families of prototypical NP-complete problems: boolean satisfiability and graph colouring. We consider two quantum approaches: Grover's algorithm and a quantum algorithm for accelerating backtracking algorithms. We compare the performance of optimised versions of these algorithms, when applied to random problem instances, against leading classical algorithms. Even when considering only problem instances that can be solved within one day, we find that there are potentially large quantum speedups available. In the most optimistic parameter regime we consider, this could be a factor of over $10^5$ relative to a classical desktop computer; in the least optimistic regime, the speedup is reduced to a factor of over $10^3$. However, the number of physical qubits used is extremely large, and improved fault-tolerance methods will likely be needed to make these results practical. In particular, the quantum advantage disappears if one includes the cost of the classical processing power required to perform decoding of the surface code using current techniques.
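The shape of such comparisons can be illustrated with simple query-count arithmetic (our own toy numbers, not the paper's careful resource counts): Grover's algorithm needs roughly $(\pi/4)\,2^{n/2}$ iterations against $2^n$ classical trials, and the raw quadratic advantage is then eroded by a per-operation slowdown from fault-tolerant overhead.

```python
import math

# Toy query-count comparison for unstructured search over 2^n items.
# The 1e9 slowdown factor is an illustrative assumption, not a measured overhead.
def raw_speedup(n):
    classical_ops = 2.0**n
    grover_iters = (math.pi / 4) * 2.0**(n / 2)
    return classical_ops / grover_iters

n = 60
slowdown = 1e9    # assumed cost of one fault-tolerant quantum op vs one classical op

print(f"raw query-count speedup at n = {n}: {raw_speedup(n):.2e}")
print(f"net speedup after overhead:        {raw_speedup(n) / slowdown:.2f}")
```

The point of the exercise is only qualitative: the quadratic advantage grows exponentially in $n$, so for large enough instances it survives even enormous constant-factor overheads, which is why the one-day-runtime cutoff in the abstract matters.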

Numerous scientific and engineering applications require numerically solving systems of equations. Classically solving a general set of polynomial equations requires iterative solvers, while linear equations may be solved either by direct matrix inversion or iteratively with judicious preconditioning. However, the convergence of iterative algorithms is highly variable and depends, in part, on the condition number. We present a direct method for solving general systems of polynomial equations based on quantum annealing, and we validate this method using a system of second-order polynomial equations solved on a commercially available quantum annealer. We then demonstrate applications for linear regression, and discuss in more detail the scaling behavior for general systems of linear equations with respect to problem size, condition number, and search precision. Finally, we define an iterative annealing process and demonstrate its efficacy in solving a linear system to a tolerance of $10^{-8}$.
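The basic encoding can be sketched as follows (our own minimal illustration; the variable names and the brute-force stand-in for the annealer are not from the paper): each unknown is expanded in fixed-point binary, and the residual $\lVert Ax - b\rVert^2$ becomes a QUBO energy whose ground state encodes the solution.

```python
import itertools
import numpy as np

# Minimal sketch: solve A x = b by minimizing ||A x - b||^2 over binary-encoded x.
# Each x_i = q_{i,0} + 2 q_{i,1} takes values 0..3 (2-bit fixed-point encoding).
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

bits_per_var = 2
weights = 2.0 ** np.arange(bits_per_var)     # [1, 2]

best = None
for q in itertools.product([0, 1], repeat=2 * bits_per_var):
    qs = np.array(q).reshape(2, bits_per_var)
    x = qs @ weights                          # decode bits to integer values
    energy = float(np.sum((A @ x - b) ** 2))  # the QUBO objective
    if best is None or energy < best[0]:
        best = (energy, x)

print(best)   # (0.0, array([3., 2.])): x = (3, 2) solves the system exactly
```

On an annealer, the exhaustive loop is replaced by hardware minimization of the same quadratic objective; the abstract's "search precision" corresponds to the number of bits per variable in this encoding.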

In the large-$N$, classical limit, the Bose-Hubbard dimer undergoes a transition to chaos when its tunnelling rate is modulated in time. We use exact and approximate numerical simulations to determine the features of the dynamically evolving state that are correlated with the presence of chaos in the classical limit. We propose the statistical distance between initially similar number distributions as a reliable measure to distinguish regular from chaotic behaviour in the quantum dynamics. Besides being experimentally accessible, number distributions can be efficiently reconstructed numerically from binned phase-space trajectories in a truncated Wigner approximation. Although the evolving Wigner function becomes very irregular in the chaotic regions, the truncated Wigner method is nevertheless able to capture accurately the beyond mean-field dynamics.
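The abstract does not specify which statistical distance is used; as a generic illustration (ours), the Hellinger distance between binned number distributions already separates statistically identical samples from genuinely different ones:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

rng = np.random.default_rng(0)
bins = np.linspace(-6, 8, 40)

def binned(samples):
    h, _ = np.histogram(samples, bins=bins)
    return h / h.sum()

# Stand-ins for number distributions reconstructed from binned trajectories.
p  = binned(rng.normal(0.0, 1.0, 20000))
q1 = binned(rng.normal(0.0, 1.0, 20000))   # statistically identical to p
q2 = binned(rng.normal(2.0, 1.0, 20000))   # clearly different distribution

print(hellinger(p, q1))   # small: sampling noise only
print(hellinger(p, q2))   # large: the distributions genuinely differ
```

In the chaotic regime, two initially similar number distributions drift apart under the dynamics, so a distance of this kind grows; in the regular regime it stays at the sampling-noise floor.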

We show that a one-dimensional chain of trapped ions can be engineered to produce a quantum mechanical system with discrete scale invariance and fractal-like time dependence. By discrete scale invariance we mean a system that replicates itself under a rescaling of distance for some scale factor, and a time fractal is a signal that is invariant under the rescaling of time. These features are reminiscent of the Efimov effect, which has been predicted and observed in bound states of three-body systems. We demonstrate that discrete scale invariance in the trapped ion system can be controlled with two independently tunable parameters. We also discuss the extension to n-body states where the discrete scaling symmetry has an exotic heterogeneous structure. The results we present can be realized using currently available technologies developed for trapped ion quantum systems.

Coupled parametric oscillators were recently employed as simulators of artificial Ising networks, with the potential to solve computationally hard minimization problems. We demonstrate a new dynamical regime within the simplest such network, two coupled parametric oscillators, in which the oscillators never reach a steady state but instead show persistent, full-scale, coherent beats, whose frequency reflects the coupling properties and strength. We present a detailed theoretical and experimental study and show that this new dynamical regime appears over a wide range of parameters near the oscillation threshold and depends on the nature of the coupling (dissipative or energy preserving). Thus, a system of coupled parametric oscillators transcends the Ising description and manifests unique coherent dynamics, which may have important implications for coherent computation machines.

Periodically driven parametric oscillators offer a convenient way to simulate classical Ising spins. When many parametric oscillators are coupled dissipatively, they can be analogous to networks of Ising spins, forming an effective coherent Ising machine (CIM) that efficiently solves computationally hard optimization problems. In the companion letter, we studied experimentally the minimal realization of a CIM, i.e. two coupled parametric oscillators. We found that the presence of an energy-conserving coupling between the oscillators can dramatically change the dynamics, leading to everlasting beats, which transcend the Ising description. Here, we analyze this effect theoretically by solving the equations of motion of two parametric oscillators numerically and, when possible, analytically. Our main tools include: (i) a Floquet analysis of the linear equations, (ii) a multi-scale analysis based on a separation of time scales between the parametric oscillations and the beats, and (iii) the numerical identification of limit cycles and attractors. Using these tools, we fully determine the phase boundaries and critical exponents of the model, as a function of the intensity and the phase of the coupling and of the pump. Our study highlights the universal character of the phase diagram and its independence of the specific type of nonlinearity present in the system. Furthermore, we identify new phases of the model with more than two attractors, possibly describing a larger spin algebra.

In the present work, we introduce a Self-Consistent Density-Functional Embedding technique, which leaves the realm of standard energy-functional approaches in Density Functional Theory and targets directly the density-to-potential mapping that lies at its heart. Inspired by the Density Matrix Embedding Theory, we project the full system onto a set of small interacting fragments that can be solved accurately. Based on the rigorous relation of density and potential in Density Functional Theory, we then invert the fragment densities to local potentials. Combining these results in a continuous manner provides an update for the Kohn-Sham potential of the full system, which is then used to update the projection. The scheme proposed here converges to an accurate approximation for the density and the Kohn-Sham potential of the full system. Convergence to exact results can be achieved by increasing the fragment size. We find, however, that already for small embedded fragments accurate results are obtained. We benchmark our approach for molecular bond stretching in one and two dimensions and demonstrate that it reproduces the known steps and peaks that are present in the exact exchange-correlation potential with remarkable accuracy.

In this paper we investigate the relationship between the efficiency of a cyclic quantum heat engine and the Hilbert space dimension of the thermal baths. By means of a general inequality, we show that the Carnot efficiency can be obtained only when both the hot and cold baths are infinitely large. By introducing a specific model in which the baths are constituted of ensembles of finite-dimensional particles, we further demonstrate how the engine's power and efficiency depend on the dimension of the working substance and of the bath particles.
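A purely classical toy calculation (ours, not the paper's quantum model) already shows the flavor of the result: if the baths have finite heat capacity $C$, their temperatures drift as heat flows, and a reversible engine run until the temperatures equalize at $T_f = \sqrt{T_h T_c}$ achieves efficiency $1 - \sqrt{T_c/T_h}$, strictly below the Carnot value $1 - T_c/T_h$.

```python
import math

# Toy classical illustration: finite baths of equal heat capacity C (set C = 1).
Th, Tc = 400.0, 300.0

Tf = math.sqrt(Th * Tc)          # final common temperature (reversible operation)
Q_hot = Th - Tf                  # heat drawn from the hot bath (units of C)
work = (Th - Tf) - (Tf - Tc)     # energy balance: W = Q_hot - Q_cold
eta_finite = work / Q_hot
eta_carnot = 1 - Tc / Th

print(f"eta (finite baths) = {eta_finite:.4f}")   # 1 - sqrt(Tc/Th) ~ 0.1340
print(f"eta (Carnot)       = {eta_carnot:.4f}")   # 0.2500
```

The quantum statement in the paper is sharper, bounding efficiency by bath dimension rather than heat capacity, but the intuition is the same: a finite bath cannot hold its temperature fixed, so Carnot efficiency requires infinite baths.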

The existence of a spectral gap above the ground state has far-reaching consequences for the low-energy physics of a quantum many-body system. A recent work of Movassagh [R. Movassagh, PRL 119 (2017), 220504] shows that a spatially random local quantum Hamiltonian is generically gapless. Here we observe that a gap is more common for translation-invariant quantum spin chains, more specifically, that these are gapped with a positive probability if the interaction is of small rank. This is in line with a previous analysis of the spin-$1/2$ case by Bravyi and Gosset. The Hamiltonians are constructed by selecting a single projection of sufficiently small rank at random, and then translating it across the entire chain. By the rank assumption, the resulting Hamiltonians are automatically frustration-free and this fact plays a key role in our analysis.