The basic theoretical foundation for the modelling of phonon-assisted absorption spectra in direct-bandgap semiconductors, introduced by Elliott 60 years ago using second-order perturbation theory, predicts a square-root-shaped dependence close to the absorption edge. A careful analysis of the experiments reveals that for the yellow S excitons in Cu$_2$O the lineshape does not follow this square-root dependence. A reexamination of the theory shows that the basic assumptions of constant matrix elements and constant energy denominators are invalid for semiconductors with dominant exciton effects such as Cu$_2$O, where the phonon-assisted absorption proceeds via intermediate exciton states. The overlap between these and the final exciton states strongly determines the dependence of the absorption on the photon energy. To describe the experimentally observed line shape of the indirect absorption of the yellow S exciton states, we find it necessary to assume a momentum-dependent deformation potential for the optical phonons.

The Schr\"odinger equation for a Bose gas with repulsive contact interactions in one-dimensional space can be solved analytically with the help of the Bethe ansatz if periodic boundary conditions are imposed. It was shown that in such a system there exist many-body eigenstates directly corresponding to dark-soliton solutions of the mean-field equation. The system remains integrable if one switches from periodic boundary conditions to an infinite square well potential. The corresponding eigenstates were constructed by M. Gaudin. We analyze the weak-interaction limit of Gaudin's solutions and identify a parametrization of the eigenstates closely connected with single and multiple dark solitons. Numerical simulations of the detection of particle positions reveal dark solitons in the weak-interaction regime and their quantum nature in the presence of strong interactions.

Quantum teleportation, the process by which Alice can transfer an unknown quantum state to Bob by using pre-shared entanglement and classical communication, is one of the cornerstones of quantum information. The standard benchmark for certifying quantum teleportation consists in surpassing the maximum average fidelity between the teleported and the target states that can be achieved classically. According to this figure of merit, not all entangled states are useful for teleportation. Here we propose a new benchmark that uses the full information available in a teleportation experiment and prove that all entangled states can implement a quantum channel which cannot be reproduced classically. We introduce the notion of a nonclassical teleportation witness to certify whether a teleportation experiment is genuinely quantum and discuss how to quantify this phenomenon. Our work provides new techniques for studying teleportation that can be immediately applied to certify the quality of quantum technologies.

The qubit depolarizing channel with noise parameter $\eta$ transmits an input qubit perfectly with probability $1-\eta$, and outputs the completely mixed state with probability $\eta$. We show that its complementary channel has positive quantum capacity for all $\eta>0$. Thus, we find that there exists a single-parameter family of channels with the peculiar property of having positive quantum capacity even as their outputs approach a fixed state independent of the input. Comparisons with other related channels, and implications for the difficulty of studying the quantum capacity of the depolarizing channel, are discussed.
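As a minimal illustration of the channel's definition stated above, $\Lambda(\rho) = (1-\eta)\,\rho + \eta\, I/2$ (the definition only, not the complementary-channel capacity result), here is a pure-Python sketch:

```python
# Sketch of the qubit depolarizing channel: Lambda(rho) = (1 - eta) * rho + eta * I/2.
# Density matrices are represented as 2x2 nested lists of numbers.

def depolarize(rho, eta):
    """Apply the depolarizing channel with noise parameter eta to a 2x2 density matrix."""
    identity_over_2 = [[0.5, 0.0], [0.0, 0.5]]
    return [[(1 - eta) * rho[i][j] + eta * identity_over_2[i][j]
             for j in range(2)] for i in range(2)]

# As eta -> 1, any input is driven toward the completely mixed state I/2;
# at eta = 0 the input passes through unchanged.
pure_zero = [[1.0, 0.0], [0.0, 0.0]]  # |0><0|
print(depolarize(pure_zero, 1.0))     # completely mixed state
print(depolarize(pure_zero, 0.0))     # input returned unchanged
```

The sketch makes the "fixed output state" property concrete: at $\eta = 1$ every input maps to $I/2$, which is exactly the regime in which the positive capacity of the complementary channel is surprising.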

In this paper we argue that the dynamics of the classical Newtonian N-body system can be described in terms of the Schr\"odinger-Poisson equations in the large-$N$ limit. This result is based on the stochastic quantization introduced by Nelson and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as $\hbar \sim M^{5/3} G^{1/2} (N/\langle\rho\rangle)^{1/6}$, where $G$ is the gravitational constant, $N$ and $M$ are the number and the mass of the bodies, and $\langle\rho\rangle$ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schr\"odinger method as a numerical double of the N-body simulations of dark-matter dynamics at large cosmological scales.
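The scaling above is simple enough to evaluate numerically. The sketch below uses placeholder values for $M$, $N$, and $\langle\rho\rangle$ (they are not parameters from the paper) and only illustrates how the estimate scales:

```python
# Numerical sketch of the Calogero scaling hbar_eff ~ M^(5/3) G^(1/2) (N/rho_avg)^(1/6).
# M, N and rho_avg below are placeholder inputs, not values from the paper,
# and the overall O(1) prefactor is set to 1.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def hbar_eff(M, N, rho_avg):
    """Effective Planck constant estimate (up to an O(1) prefactor)."""
    return M ** (5.0 / 3.0) * G ** 0.5 * (N / rho_avg) ** (1.0 / 6.0)

# Scaling behaviour: doubling N multiplies the estimate by 2^(1/6) ~ 1.12,
# so the effective hbar depends only weakly on the particle number.
base = hbar_eff(M=1.0, N=1_000_000, rho_avg=1.0)
doubled = hbar_eff(M=1.0, N=2_000_000, rho_avg=1.0)
print(doubled / base)
```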

A new idea for the quantization of dynamical systems, as well as of spacetime itself, using a stochastic metric is proposed. The quantum mechanics of a mass point is constructed on a spacetime manifold using a stochastic metric. The quantum theory in local Minkowski space can be recognized as a classical theory on the stochastic Lorentz-metric space. A stochastic calculus on the spacetime manifold is performed using white-noise functional analysis. A path-integral quantization is introduced as a stochastic integration of a function of the action integral, and it is shown that path integrals on the stochastic metric space are mathematically well defined for a large variety of potential functions. The Newton--Nelson equation of motion can also be obtained from the Newtonian equation of motion on the stochastic metric space. It is also shown that the commutation relation required under canonical quantization is consistent with the stochastic quantization introduced in this report. The quantum effects of general relativity are also analyzed through natural use of the stochastic metrics. Some examples of quantum effects on the universe are discussed.

We develop an alternative boson sampling model operating on single-photon states followed by linear interferometry and Gaussian measurements. The hardness proof for simulating such continuous-variable measurements is established in two main steps, making use of the symmetry of quantum evolution under time reversal. Namely, we first construct a twofold version of scattershot boson sampling in which, as opposed to the original proposal, both legs of a collection of two-mode squeezed vacuum states undergo parallel linear-optical transformations. This twofold scattershot model yields, as a corollary, an instance of boson sampling from Gaussian states where photon counting is hard to simulate. Then, a time-reversed setup is used to exhibit a boson sampling model in which the simulation of Gaussian measurements -- namely the outcome of eight-port homodyne detection -- is proven to be computationally hard. These results illustrate how the symmetry of quantum evolution under time reversal may serve as a tool for analyzing the computational complexity of novel physically-motivated computational problems.

We derive a unified quantum theory of coherent and incoherent energy transfer between two atoms (donor and acceptor) valid in arbitrary Markovian nanophotonic environments. Our theory predicts a fundamental bound $\eta_{\max} = \frac{\gamma_a}{\gamma_d + \gamma_a}$ on the energy transfer efficiency arising from the spontaneous emission rates $\gamma_d$ and $\gamma_a$ of the donor and acceptor. We propose the control of the acceptor spontaneous emission rate as a new design principle for enhancing energy transfer efficiency, and we propose an experiment using mirrors to enhance the efficiency bound by exploiting the dipole orientations of the donor and acceptor. Of fundamental interest, we show that while quantum coherence implies the ultimate efficiency bound has been reached, reaching the ultimate efficiency does not require quantum coherence. Our work paves the way towards nanophotonic analogues of the efficiency-enhancing environments known in quantum biological systems.
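The bound is elementary to evaluate; a minimal sketch with illustrative rate values (not values from the paper):

```python
# Sketch of the efficiency bound eta_max = gamma_a / (gamma_d + gamma_a) quoted above.
# The rate values used below are illustrative placeholders.

def eta_max(gamma_d, gamma_a):
    """Upper bound on donor-to-acceptor energy transfer efficiency."""
    return gamma_a / (gamma_d + gamma_a)

# Enhancing the acceptor emission rate relative to the donor raises the bound,
# which is the design principle proposed in the text:
print(eta_max(gamma_d=1.0, gamma_a=1.0))  # 0.5
print(eta_max(gamma_d=1.0, gamma_a=9.0))  # 0.9
```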

The connection between contextuality and graph theory has led to many developments in the field. In particular, the sets of probability distributions in many contextuality scenarios can be described using well-known convex sets from graph theory, leading to a beautiful geometric characterization of such sets. This geometry can also be exploited in the definition of contextuality quantifiers based on geometric distances, which is important for the resource theory of contextuality, developed after the recognition of contextuality as a potential resource for quantum computation. In this paper we review the geometric aspects of contextuality and use them to define several quantifiers, which have the advantage of being applicable to the exclusivity approach to contextuality, where previously defined quantifiers do not fit.

The Barenco gate ($\mathbb{B}$) is a two-qubit quantum gate with which alone universal quantum computation can be accomplished. Each $\mathbb{B}$ is characterized by three angles ($\alpha$, $\theta$, and $\phi$), though it acts on a two-qubit Hilbert space. Here we design $\mathbb{B}$ via a non-collinear interaction $V|r_1r_2\rangle\langle r_1r_3|+$H.c., where $|r_i\rangle$ is a state that can be excited from a qubit state and $V$ is adjustable. We present two protocols for $\mathbb{B}$. The first (second) protocol consists of two (six) pulses and one (two) wait period(s), where the former cause rotations between the qubit states and excited states, and the latter induce the gate transformation via the non-collinear interaction. In the first protocol, the variable $\phi$ can be tuned by varying the phases of external controls, and the other two variables $\alpha$ and $\theta$, tunable via the wait duration, depend linearly on each other. In the second protocol, $\alpha$, $\theta$, and $\phi$ can be varied by changing the interaction amplitudes and wait durations, and the latter two depend non-linearly on $\alpha$. Both protocols can also give rise to another universal gate when $\{\alpha,\phi\}=\{1/4,1/2\}\pi$ by choosing appropriate parameters. Implementation of these universal gates is analyzed based on the van der Waals interaction of neutral Rydberg atoms.
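For concreteness, here is a sketch of one common parametrization of the Barenco gate (the form from Barenco's 1995 paper is assumed; the protocols above may use different conventions), together with a unitarity check:

```python
# Sketch of the Barenco gate B(alpha, theta, phi), assuming the 1995 Barenco
# parametrization: identity on |00>, |01>, and a 2x2 unitary on the |10>, |11>
# subspace. This is an assumed convention, not necessarily the paper's.
import cmath

def barenco(alpha, theta, phi):
    """Return the 4x4 Barenco gate matrix as nested lists of complex numbers."""
    u00 = cmath.exp(1j * alpha) * cmath.cos(theta)
    u01 = -1j * cmath.exp(1j * (alpha - phi)) * cmath.sin(theta)
    u10 = -1j * cmath.exp(1j * (alpha + phi)) * cmath.sin(theta)
    u11 = cmath.exp(1j * alpha) * cmath.cos(theta)
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, u00, u01],
            [0, 0, u10, u11]]

def is_unitary(m, tol=1e-12):
    """Check m @ m^dagger == identity entrywise, within tol."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            s = sum(m[i][k] * m[j][k].conjugate() for k in range(n))
            if abs(s - (1 if i == j else 0)) > tol:
                return False
    return True

print(is_unitary(barenco(0.25 * cmath.pi, 0.3, 0.5 * cmath.pi)))  # True
```

The three angles enter exactly as in the abstract: $\alpha$ as an overall phase of the conditional block, $\theta$ as a rotation angle, and $\phi$ as a relative phase between the off-diagonal elements.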

This paper defines a complexity between states in quantum field theory by introducing a Finsler structure based on creation and annihilation operators. Two simple models are computed as examples and to clarify the differences between complexity and other concepts such as the complexity of formation and entanglement entropy. When applied to thermofield double (TFD) states, the results show that the complexity between a TFD state and the corresponding vacuum state is finite and proportional to $T^{d-1}$ in $d$-dimensional conformal field theory. In particular, a proof is given that the fidelity susceptibility of a TFD state is equivalent to the complexity between it and the corresponding vacuum state, which explains why they may correspond to the same object in holographic duality. Some implications for holographic conjectures of complexity are also discussed.

According to the Hughston-Jozsa-Wootters (HJW) theorem (also known as the Uhlmann theorem), if two sets of quantum states have the same density matrix, then it is possible to construct a composite system with two measurements on one side, each of which collapses the other side into an element of one of the two sets, respectively. Here we show that in infinite-dimensional systems, unlike what the HJW theorem predicts, there exist two sets of states for which the corresponding measurements are no longer distinct; instead, they become identical. As a consequence, steering between these two sets of states is impossible. Moreover, for finite but high-dimensional systems, our result reveals a kind of chaos effect: a subtle change in the measurement result on one side of a composite system can lead to a drastic difference in the resultant steered states on the other side.

We study a driven harmonic oscillator operating an Otto cycle between two thermal baths of finite size. By making extensive use of the tools of Gaussian quantum mechanics, we directly simulate the dynamics of the engine as a whole, without the need for any approximations. This allows us to understand the non-equilibrium thermodynamics of the engine not only from the perspective of the working medium, but also as it is seen from the thermal baths' standpoint. For sufficiently large baths, our engine is capable of running a number of ideal cycles, delivering finite power while operating very close to maximal efficiency. Thereafter, once the engine has traversed the baths, the perturbations created by the interaction abruptly deteriorate its performance. We additionally study the correlations generated in the system, and relate the buildup of working-medium-bath and bath-bath correlations to the degradation of the engine's performance over the course of many cycles.

Graph states have been used for quantum error correction by Schlingemann et al. [Physical Review A 65.1 (2001): 012308]. Hypergraph states [Physical Review A 87.2 (2013): 022311] are generalizations of graph states and have been used in quantum algorithms. We demonstrate, for the first time, how hypergraph states can be used for quantum error correction. We also point out that they are more efficient than graph states in the sense that, to correct an equal number of errors on the same graph topology, suitably defined hypergraph states require fewer gate operations than the corresponding graph states.

We analyze a multi-qubit circuit QED system in the regime where the qubit-photon coupling dominates over the system's bare energy scales. Under such conditions a manifold of low-energy states with a high degree of entanglement emerges. Here we describe a time-dependent protocol for extracting these quantum correlations and converting them into well-defined multi-partite entangled states of non-interacting qubits. Based on a combination of various ultrastrong-coupling effects the protocol can be operated in a fast and robust manner, while still being consistent with experimental constraints on switching times and typical energy scales encountered in superconducting circuits. Therefore, our scheme can serve as a probe for otherwise inaccessible correlations in strongly-coupled circuit QED systems. It also shows how such correlations can potentially be exploited as a resource for entanglement-based applications.

In this paper, we present a quantum heuristic approach to solving the subset-sum problem on quantum computers. Under certain assumptions described in the paper, we show that the approach is able to yield the exact solution in polynomial time and may provide an exponential speed-up over classical algorithms. We give a numerical example and discuss the complexity of the approach and its further application to the knapsack problem.
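For comparison, here is a classical pseudo-polynomial baseline for subset sum (standard dynamic programming, not the quantum heuristic of the paper):

```python
# Classical pseudo-polynomial baseline for subset sum (dynamic programming over
# reachable sums). A reference point for the speed-up discussion, not the
# quantum approach described in the text.

def subset_sum(weights, target):
    """Return a subset of `weights` summing to `target`, or None if none exists."""
    # reachable[s] = (index of last element used to reach sum s, previous sum)
    reachable = {0: None}
    for idx, w in enumerate(weights):
        # iterate over a snapshot so each element is used at most once
        for s in list(reachable):
            if s + w not in reachable:
                reachable[s + w] = (idx, s)
    if target not in reachable:
        return None
    subset, s = [], target
    while s != 0:
        idx, prev = reachable[s]
        subset.append(weights[idx])
        s = prev
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # prints a subset summing to 9
```

The runtime is $O(n \cdot T)$ for $n$ weights and target $T$, i.e., exponential in the bit-length of $T$, which is the gap any exponential quantum speed-up claim is measured against.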

We study the dynamics of the biased sub-Ohmic spin-boson model by means of a time-dependent variational matrix product state (TDVMPS) algorithm. The evolution of both the system and the environment is obtained in the weak- and the strong-coupling regimes, respectively characterized by damped spin oscillations and by a nonequilibrium process where the spin freezes near its initial state, which are explicitly shown to arise from a variety of reactive environmental quantum dynamics. We also explore the rich phenomenology of the intermediate-coupling case, a nonperturbative regime where the system shows a complex dynamical behavior, combining features of both the weakly and the strongly coupled case in a sequential, time-retarded fashion. Our work demonstrates the potential of TDVMPS methods for exploring otherwise elusive, nonperturbative regimes of complex open quantum systems, and points to the possibilities of exploiting the qualitative, real-time modification of quantum properties induced by nonequilibrium bath dynamics in ultrafast transient processes.

We show that it is possible to use a quantum walk to find a path from one marked vertex to another. In the specific case of $M$ stars connected in a chain, one can find the path from the first star to the last one in $O(M\sqrt{N})$ steps, where $N$ is the number of spokes of each star. First, we provide an analytical result showing that, by starting in a phase-modulated, highly superposed initial state, we can find the path in $O(M\sqrt{N}\log M)$ steps. Next, we improve this efficiency by showing that the path can also be recovered by a series of successive searches: starting at the last known position and searching for the next connection in $O(\sqrt{N})$ steps leads to an overall efficiency of $O(M\sqrt{N})$. For this result we use the analytical solution that can be obtained for a ring of stars of double the length of the chain.
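The improvement amounts to removing the $\log M$ factor. A small arithmetic sketch of the two step counts, with all hidden constants set to 1 for illustration (they are not given in the abstract):

```python
# Comparison of the two step counts quoted in the text:
# O(M*sqrt(N)*log M) for the superposed-initial-state approach versus
# O(M*sqrt(N)) for the successive-search approach. Constants are set to 1.
import math

def steps_superposed(M, N):
    return M * math.sqrt(N) * math.log(M)

def steps_successive(M, N):
    return M * math.sqrt(N)

# The ratio grows like log M, so the successive-search strategy
# wins by a larger margin the longer the chain is:
for M in (10, 100, 1000):
    print(M, steps_superposed(M, 100) / steps_successive(M, 100))
```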

We present a table-top quantum estimation protocol to measure the gravitational acceleration $g$ using an optomechanical cavity. In particular, we exploit the non-linear quantum light-matter interaction between an optical field and a massive mirror acting as a mechanical oscillator. The gravitational field influences the system dynamics, affecting the phase of the cavity field during the interaction. By reading out the phase carried by the radiation leaking from the cavity, we provide an estimate of the gravitational acceleration through interference measurements. Contrary to previous studies, by adopting a fully quantum description we derive the ultimate bound on the estimability of the gravitational acceleration and verify the optimality of homodyne detection. Notably, thanks to the light-matter decoupling at the measurement time, no initial cooling of the mechanical oscillator is required in principle.

We propose an information-theoretic framework to quantify multipartite correlations in classical and quantum systems, answering questions such as: what is the amount of seven-partite correlations in a given state of ten particles? We identify measures of genuine multipartite correlations, i.e. statistical dependencies which cannot be ascribed to bipartite correlations, satisfying a set of desirable properties. Inspired by ideas developed in complexity science, we then introduce the concept of weaving to classify states which display different correlation patterns, but cannot be distinguished by correlation measures. The weaving of a state is defined as the weighted sum of correlations of every order. Weaving measures are good descriptors of the complexity of correlation structures in multipartite systems.