Fine control of the dynamics of a quantum system is the key to performing quantum information processing and coherent manipulation of atomic and molecular systems. In this paper we propose a control protocol based on a tangent-pulse driven model and demonstrate that it achieves a desirable combination of properties: it is both fast and accurate for population transfer. In contrast to other existing strategies, a remarkable feature of the present scheme is that the high speed of the nonadiabatic evolution not only avoids unwanted transitions but also suppresses the error caused by truncation of the driving pulse.

In quantum mechanics, we define the measuring system $M$ in a selective measurement by two conditions. First, defining the measured system $S$ as the system on which the non-selective part of the measurement acts, $M$ is independent of $S$ as a quantum system, in the sense that any time-dependent process in the total system $S+M$ is divisible into parts for $S$ and $M$. Second, if we could separate $S$ and $M$ from each other without changing the unitary equivalence class of the state of $S$ from that obtained by tracing out $M$, the eigenstate selection in the selective measurement could not be realized. In order for such a system $M$ to exist, we show that in one selective measurement of an observable of a quantum system $S_0$ of particles in $S$, there must be a negative entropy transfer from $M$ to $S$ that can be directly transformed into an amount of Helmholtz free energy $k_BT$, where $T$ is the thermodynamic temperature of the system $S$. Equivalently, an extra amount of work, $k_BT$, is required to be done by the system $M$.

Amplitude amplification is one of the primary tools in building algorithms for quantum computers. This technique generalizes key ideas of the Grover search algorithm. Potentially useful modifications include changing the phases in the rotation operations and replacing the intermediate Hadamard transform with an arbitrary unitary one. In addition, an arbitrary initial distribution of the amplitudes may be prepared. We examine trade-off relations between measures of quantum coherence and the success probability in amplitude amplification processes. As measures of coherence, we consider the geometric coherence and the relative entropy of coherence. In terms of the relative entropy of coherence, the complementarity relations with the success probability turn out to be the most transparent. The general relations presented are illustrated within several model scenarios of amplitude amplification processes.
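The coherence/success-probability trade-off can be illustrated numerically. The sketch below runs standard Grover iterations (the special case of amplitude amplification) on a small register and tracks both the success probability and the relative entropy of coherence, which for a pure state in the computational basis reduces to the Shannon entropy of the measurement probabilities. The example is illustrative only and is not drawn from the paper's model scenarios; the function name and parameters are hypothetical.

```python
import numpy as np

def grover_iteration_demo(n_qubits=3, marked=5, steps=4):
    """Track success probability and relative entropy of coherence
    over Grover iterations (illustrative small example)."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))     # uniform initial state
    oracle = np.eye(N)
    oracle[marked, marked] = -1          # phase-flip the marked item
    # inversion about the mean, built from the fixed uniform state
    diffusion = 2 * np.outer(psi, psi) - np.eye(N)
    history = []
    for _ in range(steps):
        psi = diffusion @ (oracle @ psi)
        p = np.abs(psi) ** 2
        p_succ = p[marked]
        # relative entropy of coherence of a pure state equals the
        # Shannon entropy of |psi_i|^2 in the incoherent basis
        c_rel = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        history.append((p_succ, c_rel))
    return history
```

Running this shows the complementarity qualitatively: as the success probability climbs toward one, the state concentrates on the marked item and its coherence drops.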

In the Entropic Dynamics framework the dynamics is driven by maximizing entropy subject to appropriate constraints. In this work we bring Entropic Dynamics one step closer to full equivalence with quantum theory by identifying constraints that lead to wave functions that remain single-valued even for multi-valued phases. The key is to recognize the intimate relation between quantum phases, gauge symmetry, and charge quantization.

Quantum state preparation in high-dimensional systems is an essential requirement for many quantum-technology applications. The engineering of an arbitrary quantum state is, however, typically strongly dependent on the experimental platform chosen for implementation, and a general framework is still missing. Here we show that coined quantum walks on a line, which represent a framework general enough to encompass a variety of different platforms, can be used for quantum state engineering of arbitrary superpositions of the walker's sites. We achieve this goal by identifying a set of conditions that fully characterize the reachable states in the space comprising walker and coin, and providing a method to efficiently compute the corresponding set of coin parameters. We assess the feasibility of our proposal by identifying a linear optics experiment based on photonic orbital angular momentum technology.
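The coined-quantum-walk framework underlying the proposal above can be sketched in a few lines of numpy. The example below implements a standard discrete-time walk on a line (coin toss followed by a coin-conditioned shift); it does not reproduce the paper's state-engineering conditions or the coin-parameter computation, only the basic dynamics on which they act. All names and parameter choices are illustrative.

```python
import numpy as np

def coined_walk(steps=10, theta=np.pi / 4):
    """Discrete-time coined quantum walk on a line.
    Positions run over -steps..steps; the coin is a 2-level system."""
    n_pos = 2 * steps + 1
    state = np.zeros((n_pos, 2), dtype=complex)   # state[position, coin]
    # symmetric initial coin state localized at the origin
    state[steps, 0] = 1 / np.sqrt(2)
    state[steps, 1] = 1j / np.sqrt(2)
    coin = np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])
    for _ in range(steps):
        state = state @ coin.T                    # coin toss at every site
        shifted = np.zeros_like(state)
        shifted[1:, 0] = state[:-1, 0]            # coin 0 moves right
        shifted[:-1, 1] = state[1:, 1]            # coin 1 moves left
        state = shifted
    # walker's site distribution, tracing out the coin
    return np.sum(np.abs(state) ** 2, axis=1)
```

With the Hadamard-type coin (`theta = pi/4`) and this symmetric initial coin state, the resulting site distribution is symmetric and spreads ballistically, the feature that state-engineering schemes exploit.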

The need for a measure of coherence that does not change under a change of basis has long been recognized. The resource-theoretic framework proposed by Baumgratz \textit{et al.}, Phys. Rev. Lett. 113, 140401 (2014), for quantifying coherence in high-dimensional quantum systems is manifestly basis-dependent and has implications incongruent with the conventional understanding of coherence. In contrast, within conventional optical coherence theory, the coherence of a 2-dimensional system can be quantified in terms of a basis-independent quantity called the degree of polarization $P_2$. The quantity $P_2$ has five different known physical interpretations, namely: (i) it is the Frobenius distance between the state and the identity matrix, (ii) it is the norm of the Bloch vector representing the state, (iii) it is the visibility of an interference experiment, (iv) it is the maximum pairwise coherence of the state over all orthonormal bases, and (v) it is determined by the weight of the pure part of the state. Recently, Yao \textit{et al.}, Sci. Rep. 6, 32010 (2016), have constructed a basis-independent measure of coherence $P_{N}$ for $N$-dimensional quantum states by generalizing the Frobenius-distance interpretation of $P_2$. In this paper, we demonstrate that all the remaining interpretations of $P_{2}$ generalize to $P_N$ as well. Our results theoretically establish the suitability of $P_{N}$ as an intrinsic measure of coherence. This measure can be used for quantifying coherence in high-dimensional quantum states in the orbital-angular-momentum and photon-number bases.
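The equivalence of interpretations (i) and (ii) for a qubit is easy to check numerically. The sketch below evaluates $P_2$ three ways: as the Bloch-vector norm, as a rescaled Frobenius distance to the maximally mixed state, and via the purity ($P_2=\sqrt{2\,\mathrm{Tr}\rho^2-1}$); the normalization constants used here are one common convention and may differ from the paper's.

```python
import numpy as np

def degree_of_polarization(rho):
    """Three equivalent expressions for P_2 of a qubit state rho
    (2x2 density matrix); normalization conventions are one common choice."""
    # (ii) norm of the Bloch vector, r_i = Tr(rho sigma_i)
    pauli = [np.array([[0, 1], [1, 0]]),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]])]
    bloch = np.array([np.trace(rho @ s).real for s in pauli])
    p_bloch = np.linalg.norm(bloch)
    # (i) rescaled Frobenius distance to the maximally mixed state I/2
    p_frob = np.sqrt(2) * np.linalg.norm(rho - np.eye(2) / 2, 'fro')
    # purity form: P_2 = sqrt(2 Tr(rho^2) - 1)
    p_purity = np.sqrt(max(2 * np.trace(rho @ rho).real - 1, 0))
    return p_bloch, p_frob, p_purity
```

For any valid qubit density matrix the three values coincide, which is the basis-independence the abstract emphasizes.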

Superconducting qubits are sensitive to a variety of loss mechanisms which include dielectric loss from interfaces. The calculation of participation near the key interfaces of planar designs can be accomplished through an analytical description of the electric field density based on conformal mapping. In this way, a two-dimensional approximation to coplanar waveguide and capacitor designs produces values of the participation as a function of depth from the top metallization layer as well as the volume participation within a given thickness from this surface by reducing the problem to a surface integration over the region of interest. These quantities are compared to finite element method numerical solutions, which validate the values at large distances from the coplanar metallization but diverge near the edges of the metallization features due to the singular nature of the electric fields. A simple approximation to the electric field energy at shallow depths (relative to the waveguide width) is also presented that closely replicates the numerical results based on conformal mapping and those reported in prior literature. These techniques are applied to the calculation of surface participation within a transmon qubit design, where the effects due to shunting capacitors can be easily integrated with those associated with metallization comprising the local environment of the qubit junction.

Quantum algorithms have demonstrated promising speed-ups over classical algorithms in the context of computational learning theory, even in the presence of noise. In this work, we give an overview of recent quantum speed-ups, revisit the Bernstein-Vazirani algorithm in a new learning-problem extension over an arbitrary cyclic group, and discuss applications in cryptography, such as the Learning with Errors problem.

We turn to post-quantum cryptography and investigate attacks in which an adversary is given quantum access to a classical encryption scheme. In particular, we consider new notions of security under non-adaptive quantum chosen-ciphertext attacks and propose symmetric-key encryption schemes based on quantum-secure pseudorandom functions that fulfil our definitions. In order to prove security, we introduce novel relabeling techniques and show that, in an oracle model with an arbitrary advice state, no quantum algorithm making superposition queries can reliably distinguish between the class of functions that are randomly relabeled at a small subset of the domain.

Finally, we discuss current progress in quantum computing technology, particularly with a focus on implementations of quantum algorithms on the ion-trap architecture, and shed light on the relevance and effectiveness of common noise models adopted in computational learning theory.

In a general, multi-mode scattering setup, we show how the permutation symmetry of a many-particle input state determines those scattering unitaries which exhibit strictly suppressed many-particle transition events. We formulate purely algebraic suppression laws that identify these events and show that the many-particle interference at their origin is robust under weak disorder and imperfect indistinguishability of the interfering particles. Finally, we demonstrate that all suppression laws so far described in the literature are embedded in the general framework that we here introduce.

Several distinct classes of unitary mode transformations have been known to exhibit the strict suppression of a large set of transmission events, as a consequence of totally destructive many-particle interference. In another work [Dittel et al., Phys. Rev. Lett. 120, 240404 (2018)] we unite these cases by identifying a general class of unitary matrices which exhibit such interferences. Here, we provide a detailed theoretical analysis that substantially expands on all aspects of this generalisation: We prove the suppression laws put forward in our other paper, establish how they interrelate with forbidden single-particle transitions, show how all suppression laws hitherto known can be retrieved from our general formalism, and discuss striking differences between bosons and fermions. Furthermore, beyond many-particle Fock states on input, we consider arbitrary pure initial states and derive suppression laws which stem from the wave function's permutation symmetry alone. Finally, we identify conditions for totally destructive interference to persist when the involved particles become partially distinguishable.

Machine learning is actively being explored for its potential to design, validate, and even hybridize with near-term quantum devices. A central question is whether neural networks can provide a tractable representation of a given quantum state of interest. When true, stochastic neural networks can be employed for many unsupervised tasks, including generative modeling and state tomography. However, to be applicable for real experiments such methods must be able to encode quantum mixed states. Here, we parametrize a density matrix based on a restricted Boltzmann machine that is capable of purifying a mixed state through auxiliary degrees of freedom embedded in the latent space of its hidden units. We implement the algorithm numerically and use it to perform tomography on some typical states of entangled photons, achieving fidelities competitive with standard techniques.

For a discrimination problem $\Phi_\eta$ consisting of $N$ linearly independent pure quantum states $\Phi=\{|\phi_i\rangle\}$ and the corresponding occurrence probabilities $\eta=\{\eta_i\}$, we associate, up to a permutation over the probabilities $\{\eta_i\}$, a unique pair of density matrices $\boldsymbol{\rho_{_{T}}}$ and $\boldsymbol{\eta_{{p}}}$ defined on the $N$-dimensional Hilbert space $\mathcal{H}_N$. The first one, $\boldsymbol{\rho_{_{T}}}$, provides a representation of a generic full-rank density matrix in terms of the parameters of the discrimination problem, i.e. the mutual overlaps $\gamma_{ij}=\langle{\phi_i}|{\phi_j}\rangle$ and the occurrence probabilities $\{\eta_i\}$. The second one, on the other hand, is a diagonal density matrix $\boldsymbol{\eta_p}$ whose diagonal entries are the probabilities $\{\eta_i\}$, with the ordering induced by the permutation $p$ of the probabilities. When the set $\Phi$ can be discriminated unambiguously with probability one, then $\boldsymbol{\rho_{_{T}}} \rightarrow \boldsymbol{\eta_{{p}}}$. If, on the other hand, the set loses its linear independence and can no longer be discriminated, the distinguishability of the pair, measured by the fidelity $F(\boldsymbol{\rho_{_{T}}}, \boldsymbol{\eta_{{p}}})$, becomes minimal. This enables one to associate with each discrimination problem $\Phi_{\eta}$ a unique distinguishability problem between the two states $\boldsymbol{\rho_{_{T}}}$ and $\boldsymbol{\eta_{{p}}}$, and to define the maximum fidelity between them (maximized over all permutations of the probabilities $\{\eta_i\}$) as the extent to which the set is discriminable. Calculating this quantity does not require any optimization, and we study its behaviour with some examples.
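The central quantity above is the Uhlmann fidelity $F(\rho,\sigma)=\big(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\big)^2$. The explicit construction of $\boldsymbol{\rho_{_T}}$ from the overlaps $\gamma_{ij}$ is not reproduced here; the sketch below is only a generic fidelity helper (using an eigendecomposition-based matrix square root for Hermitian positive semidefinite inputs) that could be applied to any such pair of density matrices.

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = _sqrtm_psd(rho)
    return np.trace(_sqrtm_psd(s @ sigma @ s)).real ** 2
```

For commuting (e.g. diagonal) states this reduces to the classical expression $F=\big(\sum_i\sqrt{p_i q_i}\big)^2$, which is a convenient sanity check.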

This paper presents a novel way of simulating a Markov process on a quantum computer. The main purpose of the paper is to show a particular application of quantum computing in the field of stochastic-process analysis. On a quantum computer the process can be placed in superposition, with the random variables of the Markov chain represented by entangled qubit states, which offers the opportunity of having all possible scenarios available simultaneously.
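To fix ideas, a classical Markov chain and an amplitude encoding of its distribution can be sketched as below. This is not the paper's circuit construction: it only evolves the distribution classically and then encodes the probabilities as amplitudes $\sqrt{p_i}$, the standard starting point for quantum representations of stochastic processes. The function name and scheme are hypothetical.

```python
import numpy as np

def markov_to_amplitudes(P, p0, steps):
    """Evolve a row-stochastic Markov chain classically, then encode the
    resulting distribution as quantum amplitudes sqrt(p_i)
    (illustrative sketch, not the paper's construction)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = P.T @ p            # p'_j = sum_i P[i, j] p_i, with P[i, j] = Pr(i -> j)
    amps = np.sqrt(p)          # amplitude encoding of the distribution
    return p, amps
```

Measuring the encoded state in the computational basis reproduces the chain's distribution, so the squared amplitudes always sum to one.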

This is a collection of notes on spectral form factors of standard ensembles in random matrix theory, written for practical use in the current study of late-time quantum chaos. More precisely, we consider the Gaussian Unitary Ensemble (GUE), Gaussian Orthogonal Ensemble (GOE), Gaussian Symplectic Ensemble (GSE), Wishart-Laguerre Unitary Ensemble (LUE), Wishart-Laguerre Orthogonal Ensemble (LOE), and Wishart-Laguerre Symplectic Ensemble (LSE). These results and their physical applications cover a three-fold classification of late-time quantum chaos in terms of spectral form factors.
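The basic object in these notes, the spectral form factor $g(t)=\langle|\mathrm{Tr}\,e^{-iHt}|^2\rangle$, is straightforward to estimate numerically for the GUE. The sketch below samples GUE matrices, diagonalizes them, and averages; the matrix size, sample count, and overall normalization of the ensemble are illustrative choices, not those of the notes.

```python
import numpy as np

def gue_sff(n=30, samples=200, times=None, seed=0):
    """Ensemble-averaged spectral form factor g(t) = <|Tr exp(-iHt)|^2>
    for GUE matrices (normalization is an illustrative choice)."""
    if times is None:
        times = np.linspace(0, 10, 50)
    rng = np.random.default_rng(seed)
    sff = np.zeros_like(times)
    for _ in range(samples):
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (a + a.conj().T) / 2               # Hermitian GUE draw
        eigs = np.linalg.eigvalsh(h)
        phases = np.exp(-1j * np.outer(times, eigs))
        sff += np.abs(phases.sum(axis=1)) ** 2
    return sff / samples
```

At $t=0$ the form factor equals $n^2$ exactly, and the early-time decay toward the dip-ramp-plateau structure discussed in the notes is visible already at modest sample sizes.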

This note is devoted to the investigation of Susskind's proposal concerning the correspondence between the operator growth in chaotic theories and the radial momentum of a particle falling into an AdS black hole. We study this proposal by considering the simple example of an operator with a global charge, described by a charged particle falling into the Reissner-Nordstrom-AdS black hole. Different charges of the particle lead to qualitatively different behavior of the particle momentum and, consequently, of the operator-size growth. This holographic result is supported by several examples of melonic models at finite chemical potential, in which the suppression of chaos has been observed.

Spontaneous breaking of continuous time translation symmetry into a discrete one is related to time crystal formation. While the phenomenon is not possible in the ground state of a time-independent many-body system, it can occur in an excited eigenstate. Here, we concentrate on bosons on a ring with attractive contact interactions and analyze a quantum quench from the time crystal regime to the non-interacting regime. We show that dynamical quantum phase transitions can be observed where the return probability of the system to the initial state before the quench reveals a non-analytical behavior in time. The problem we consider constitutes an example of the dynamical quantum phase transitions in a system where both time and space continuous translation symmetries are broken.

We theoretically study entanglement and operator growth in a spin system coupled to an environment, which is modeled by classical dephasing noise. Since classical noise retains the unitarity of the time evolution, it does not destroy quantum information. As a consequence, the main effect of the noise is to introduce a new time scale proportional to the noise strength, as we obtain from a perturbative treatment. We numerically simulate the noisy spin dynamics and show that entanglement growth and its fluctuations are described by the Kardar-Parisi-Zhang equation. Moreover, we find that the wavefront of the out-of-time-ordered correlator (OTOC), which is a measure of operator growth, propagates linearly with the butterfly velocity and broadens diffusively, with a diffusion constant larger than that of spin transport. A comparison between the entanglement and butterfly velocities reveals that both are strongly suppressed by noise, consistent with our perturbative calculation, with the former being smaller than the latter. In our study, we focus on Markovian white noise but also discuss the role of non-Markovian noise.

Thermodynamic irreversibility is well characterized by the entropy production arising from non-equilibrium quantum processes. We show that the entropy production of a quantum system undergoing open-system dynamics can be formally split into a term that only depends on population unbalances, and one that is underpinned by quantum coherences. The population unbalances are found to contribute to both an entropy flux and an entropy production rate. The decoherence, on the other hand, contributes only to the entropy production rate. This allows us to identify a genuine quantum contribution to the entropy production in non-equilibrium quantum processes. We make use of such a division to address the open-system dynamics of a spin $J$ particle, which we describe in phase space through a spin-coherent representation.

We study the effect of disorder on the work exchange associated with quantum Hamiltonian processes by considering an Ising spin chain in which the coupling strengths between spins are randomly drawn from either a Normal or a Gamma distribution. The chain is subjected to a quench of the external transverse field, which induces this exchange of work. In particular, we study the irreversible work incurred by a quench as a function of the initial temperature, field strength, and magnitude of the disorder. While the presence of weak disorder generally increases the irreversible work generated, disorder of sufficient strength can instead reduce it, giving rise to a disorder-induced lubrication effect. This reduction of irreversible work depends on the nature of the distribution considered, and can arise either from acquiring the behavior of an effectively smaller quench, for Normal-distributed spin couplings, or that of effectively single-spin dynamics, in the case of Gamma-distributed couplings.

I evaluate a quantum upper bound on the Fisher information for estimating the moments of a subdiffraction object in incoherent optical imaging. The result matches the performance of a spatial-mode-demultiplexing measurement scheme in terms of order of magnitude.