Recently, Kavan Modi \emph{et al.} showed that masking quantum information is impossible in the bipartite scenario [Phys. Rev. Lett. \textbf{120}, 230501 (2018)], adding another item to the list of no-go theorems. In this paper, we present new schemes, different from error correction codes, which show that quantum states can be masked when more participants are allowed in the masking process. Moreover, using a pair of mutually orthogonal Latin squares of dimension $d$, we show that all $d$-level quantum states can be masked into tripartite quantum systems whose local dimensions are $d$ or $d+1$. This highlights some differences between the no-masking theorem and the no-cloning and no-deleting theorems.
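The combinatorial ingredient of the tripartite scheme can be illustrated with a short sketch, assuming the standard construction $L_a(i,j) = ai + j \bmod d$ of mutually orthogonal Latin squares for prime $d$; the masking map itself is not reproduced here, and all function names are illustrative.

```python
def latin_square(a, d):
    """L_a[i][j] = (a*i + j) mod d, a Latin square whenever gcd(a, d) = 1."""
    return [[(a * i + j) % d for j in range(d)] for i in range(d)]

def mutually_orthogonal(L1, L2):
    """Two Latin squares are orthogonal iff superimposing them yields
    every ordered pair of symbols exactly once."""
    d = len(L1)
    pairs = {(L1[i][j], L2[i][j]) for i in range(d) for j in range(d)}
    return len(pairs) == d * d

d = 5  # any prime dimension works for this construction
A, B = latin_square(1, d), latin_square(2, d)
assert mutually_orthogonal(A, B)
```

For prime $d$ the squares $L_1$ and $L_2$ are always orthogonal, which is the pair needed by the construction described above.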

We measure and quantify non-Markovian effects in IBM's Quantum Experience. Specifically, we analyze the temporal correlations in a sequence of gates by characterizing the performance of a gate conditioned on the gate that preceded it. With this method, we estimate (i) the size of fluctuations in the performance of a gate, i.e., errors due to non-Markovianity; (ii) the length of the memory; and (iii) the total size of the memory. Our results strongly indicate the presence of non-trivial non-Markovian effects in almost all gates in the universal set. However, based on our findings, we discuss the potential for cleaner computation by adequately accounting for the non-Markovian nature of the machine.

Only a finite number of rounds of measurements can be performed in protocols with local operations and classical communication (LOCC). In this paper, we propose a set of product states that would require infinitely many rounds of measurements to be distinguished perfectly by LOCC. Therefore, we can conclude that such sets of states are locally indistinguishable. More precisely, given any multipartite LOCC-indistinguishable set in which no local party can start with a nontrivial measurement, appending two arbitrarily chosen nonorthogonal states to these states yields another LOCC-indistinguishable set, for which some parties can nevertheless perform nontrivial measurements. Hence, these sets are quite different from those constructed before. This result broadens the knowledge of nonlocality without entanglement to a certain extent.

We consider a second-order differential equation $$ -y''(z)-(iz)^{N+2}y(z)=\lambda y(z), \quad z\in \Gamma $$ with an eigenvalue parameter $\lambda \in \mathbb{C}$. In $\mathcal{PT}$ quantum mechanics, $z$ runs through a complex contour $\Gamma\subset \mathbb{C}$, which is in general neither the real line nor a real half-line. Via a parametrization we map the problem back to the real line and obtain two differential equations on $[0,\infty)$ and on $(-\infty,0]$. They are coupled at zero by boundary conditions, and their potentials are not real-valued. The main result is a classification of this problem along the well-known limit-point/limit-circle scheme for complex potentials introduced by A.R.\ Sims 60 years ago. Moreover, we associate operators to the two half-line problems and to the full-axis problem and study their spectra.

Fault-tolerant quantum computing requires quantum gates with high fidelity. Incoherent errors reduce the fidelities of quantum gates when the operation time is too long. Optimal control techniques can be used to decrease the operation time in theory, but generally do not take into account realistic uncertainty in the system parameters. We apply robust optimal control techniques to demonstrate that it is feasible to reduce the operation time of the cross-resonance gate in superconducting systems to under 100\,ns with two-qubit gate fidelities of $\mathcal{F}>0.99$, so that the gate fidelity is not coherence limited. This is achieved while ensuring robustness to up to 10\% uncertainty in the system parameters, with a parameterization chosen to aid experimental feasibility. We find that the highest-fidelity gates can be achieved in the shortest time for transmon qubits compared with a two-level flux qubit system. This suggests that the third level of the transmon may be useful for achieving shorter cross-resonance gate times with high fidelity. The results further indicate a speed limit for experimentally feasible pulses once robustness is included, as well as the maximum amount of uncertainty allowable to achieve fidelities with $\mathcal{F}>0.999$.

We present a path analysis of the conditions under which the outcomes of previous observations affect the results of measurements yet to be made. It is shown that this effect, also known as "signalling in time", occurs whenever the earlier measurements are set to destroy interference between two or more virtual paths. We also demonstrate that Feynman's negative "probabilities" provide a more reliable witness of "signalling in time" than the Leggett-Garg inequalities, although both methods are frequently subject to failure.

Ultralow-field nuclear magnetic resonance (NMR) provides a new regime for many applications ranging from materials science to fundamental physics. However, the experimentally observed spectra show asymmetric amplitudes, differing greatly from those predicted by the standard theory. The physical origin of this asymmetry, as well as how to suppress it, has remained unclear. Here we provide a comprehensive model to explain the asymmetric spectral amplitudes, observe further, previously unreported asymmetric spectra, and find a way to eliminate the asymmetry. Moreover, contrary to the traditional view of asymmetric phenomena as a nuisance, we show that more information can be gained from the asymmetric spectroscopy, e.g., the light shift of atomic vapors and the sign of the Land\'{e} $g$ factor of NMR systems.

We introduce a new architecture-agnostic methodology for mapping abstract quantum circuits to realistic quantum computing devices with restricted qubit connectivity, as implemented by Cambridge Quantum Computing's tket compiler. We present empirical results showing the effectiveness of this method in terms of reducing two-qubit gate depth and two-qubit gate count, compared to other implementations.

Reliable resource estimation and benchmarking of quantum algorithms is a critical component of the development cycle of viable quantum applications for quantum computers of all sizes. Determining resource bottlenecks in algorithms, especially when resource-intensive error correction protocols are required, will be crucial to reducing the cost of implementing viable algorithms on actual quantum hardware.

A data bus for reducing the qubit counts within quantum computations protected by surface codes is introduced. For general computations, an automated trade-off analysis (the software tool and source code are open sourced and available online) is performed to determine to what degree qubit counts are reduced by the data bus: is the time penalty worth the qubit count reductions? We provide two examples where the qubit counts are convincingly reduced: 1) the interaction of two surface code patches on NISQ machines with 28 and 68 qubits, and 2) very large-scale circuits with a structure similar to state-of-the-art quantum chemistry circuits. The data bus has the potential to transform all layers of the quantum computing stack (e.g., as envisioned by Google, IBM, Rigetti, and Intel), because it simplifies quantum computation layouts and hardware architectures and lowers qubit counts at the expense of a reasonable time penalty.

In this work we study the unitary time-evolutions of quantum systems defined on infinite-dimensional separable time-dependent Hilbert spaces. Two possible cases are considered: a quantum system defined on a stochastic interval and another one defined on a Hilbert space with stochastic integration measure (stochastic time-dependent scalar product). The formulations of the two problems and a comparison with the general theory of open quantum systems are discussed. Possible physical applications of the situations considered are analyzed.

An even number of fermions can behave in a bosonic way. The simplest scenario involves two fermions which can form a single boson. But four fermions can either behave as two bipartite bosons or further assemble into a single four-partite bosonic molecule. In general, for 2N fermions there are many possible arrangements into composite bosons. The question is: what determines which fermionic arrangement is going to be realized in a given situation and can such arrangement be considered truly bosonic? This work aims to find the answer to the above question. We propose an entanglement-based method to assess bosonic quality of fermionic arrangements and apply it to study how the ground state of the extended one-dimensional Hubbard model changes as the strength of intra-particle interactions increases.

We present a method to extract $M$-partite bosonic correlations from an $N$-partite maximally symmetric state ($M < N$) with the help of successive applications of single-boson subtractions. We also propose an experimental photonic setup to implement it that can be realized with present technologies.

We investigate entanglement breaking times of Markovian evolutions in discrete and continuous time. In continuous time, we characterize which Markovian evolutions are eventually entanglement breaking, that is, evolutions for which there is a finite time after which any initial entanglement has been destroyed by the noisy evolution. In the discrete-time framework, we consider the entanglement breaking index, that is, the number of times a quantum channel has to be composed with itself before it becomes entanglement breaking. The PPT-squared conjecture states that every PPT quantum channel has an entanglement breaking index of at most 2; we prove that every faithful PPT quantum channel has a finite entanglement breaking index. We also provide a method to obtain concrete bounds on this index for any faithful quantum channel. To obtain these estimates, we introduce a notion of robustness of separability, which we use to obtain bounds on the radius of the largest separable ball around faithful product states. We also extend the framework of Poincar\'e inequalities for nonprimitive semigroups to the discrete setting to quantify the convergence of quantum semigroups in discrete time, which might be of independent interest.
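The discrete-time notion can be made concrete with a toy example, assuming only standard facts about the qubit depolarizing channel $D_p(\rho) = p\rho + (1-p)I/2$: compositions satisfy $D_p \circ D_q = D_{pq}$, and for two qubits, positivity of the partial transpose of the Choi matrix is equivalent to separability, hence to entanglement breaking. Function names below are illustrative, not the paper's.

```python
import numpy as np

def choi_depolarizing(p):
    """Choi matrix of D_p: p*|Phi+><Phi+| + (1-p)*I/4."""
    phi = np.zeros((4, 1))
    phi[0] = phi[3] = 1 / np.sqrt(2)
    return p * (phi @ phi.T) + (1 - p) * np.eye(4) / 4

def is_entanglement_breaking(J):
    """For a 2x2 Choi matrix, EB iff its partial transpose is positive."""
    JT = J.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(JT)) >= -1e-12

def eb_index(p, max_n=50):
    """Smallest n such that the n-fold composition D_p^n = D_{p^n} is EB."""
    for n in range(1, max_n + 1):
        if is_entanglement_breaking(choi_depolarizing(p ** n)):
            return n
    return None
```

Since $D_p$ is entanglement breaking iff $p \le 1/3$, `eb_index(p)` simply returns the smallest $n$ with $p^n \le 1/3$, a closed-form instance of the general index discussed above.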

Matsumoto and Amano (2008) showed that every single-qubit Clifford+T operator can be uniquely written in a particular form, which we call the Matsumoto-Amano normal form. In this mostly expository paper, we give a detailed and streamlined presentation of Matsumoto and Amano's results, simplifying some proofs along the way. We also point out some corollaries to Matsumoto and Amano's work, including an intrinsic characterization of the Clifford+T subgroup of SO(3), which also yields an efficient T-optimal exact single-qubit synthesis algorithm. Interestingly, this also gives an alternative proof of Kliuchnikov, Maslov, and Mosca's exact synthesis result for the Clifford+T subgroup of U(2).

We design forward and backward fault-tolerant conversion circuits, which convert between the Steane code and the 15-qubit Reed-Muller quantum code so as to provide a universal transversal gate set. In our method, only 7 of the total 14 code stabilizers need to be measured, and we further enhance the circuit by simplifying some stabilizers; thus, we need only measure eight weight-4 stabilizers for one round of forward conversion and seven weight-4 stabilizers for one round of backward conversion. For conversion, we treat random single-qubit errors and their influence on the syndromes of gauge operators, and our novel single-step process enables more efficient fault-tolerant conversion between these two codes. We make our method quite general by showing how to convert between any two adjacent Reed-Muller quantum codes $\overline{\textsf{RM}}(1,m)$ and $\overline{\textsf{RM}}\left(1,m+1\right)$, for which we need only measure stabilizers whose number scales linearly with $m$, rather than exponentially with $m$ as in previous work. We provide the explicit mathematical expression for the necessary stabilizers and the concomitant resources required.

Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know that blind delegation protocols can achieve information-theoretic security. In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is in a number of ways constrained. In the first part of our paper we provide some indication that ITS-BQC protocols for delegating $\sf BQP$ computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol with $O(n^d)$ bits of classical communication implies that $\mathsf{BQP} \subset \mathsf{MA/O(n^d)}$. We conjecture that this containment is unlikely by providing an oracle relative to which $\mathsf{BQP} \not\subset \mathsf{MA/O(n^d)}$. We then show that if an ITS-BQC protocol exists with polynomial classical communication and which allows the client to delegate quantum sampling problems, then there exist non-uniform circuits of size $2^{n - \Omega(n/\log n)}$, making polynomially-sized queries to an $\sf NP^{NP}$ oracle, for computing the permanent of an $n \times n$ matrix. The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol, namely $\mathsf{QCMA/qpoly \cap coQCMA/qpoly}$. Then, we show that having such a protocol for delegating $\mathsf{NP}$-hard functions implies $\mathsf{coNP^{NP^{NP}}} \subseteq \mathsf{NP^{NP^{PromiseQMA}}}$.

We investigate the size scaling of the entanglement entropy (EE) in nonequilibrium steady states (NESSs) of a one-dimensional open quantum system with a random potential. It models a mesoscopic conductor, composed of a long quantum wire (QWR) with impurities and two electron reservoirs at zero temperature. The EE at equilibrium obeys the logarithmic law. However, in NESSs far from equilibrium the EE grows anomalously fast, obeying the `quasi volume law,' even though the conductor is driven by the zero-temperature reservoirs. This anomalous behavior arises from both the far-from-equilibrium condition and multiple scatterings by impurities.

For a bipartite entangled state shared by two observers, Alice and Bob, Alice can affect the post-measured states left to Bob by choosing different measurements on her half. Alice can convince Bob that she has this ability if and only if the unnormalized post-measured states cannot be described by a local hidden state (LHS) model. In this case, the state is termed steerable from Alice to Bob. By converting the problem of constructing LHS models for two-qubit Bell diagonal states into the corresponding problem for Werner states, we obtain the optimal models given by Jevtic \emph{et al.} [J. Opt. Soc. Am. B 32, A40 (2015)], which were developed using the steering ellipsoid formalism. This conversion also enables us to derive a sufficient criterion for the unsteerability of any two-qubit state.

We present the mapping of a class of simplified air traffic management (ATM) problems (strategic conflict resolution) to quadratic unconstrained binary optimization (QUBO) problems. The mapping is performed through an original representation of the conflict-resolution problem in terms of a conflict graph, where nodes of the graph represent flights and edges represent a potential conflict between flights. The representation allows a natural decomposition of a real-world instance related to wind-optimal trajectories over the Atlantic Ocean into smaller subproblems that can be discretized and are amenable to being programmed on quantum annealers. In this study, we test the new programming techniques and benchmark the hardness of the instances using both classical solvers and the D-Wave 2X and D-Wave 2000Q quantum chips. The preliminary results show that, for reasonable modeling choices, the most challenging subproblems that are programmable on the current devices are solved to optimality with 99\% probability within a second of annealing time.
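A minimal sketch of such a conflict-graph-to-QUBO encoding, with illustrative variable names and penalty weights rather than the paper's actual model: each flight chooses one delay slot via one-hot binary variables, slot choices carry a delay cost, and conflicting flights sharing a slot are penalized.

```python
import itertools

FLIGHTS, SLOTS = 3, 2
EDGES = [(0, 1), (1, 2)]  # conflict-graph edges between flights
P = 4.0                   # penalty weight (illustrative choice)

def var(f, t):
    """Flatten (flight, slot) into a single binary-variable index."""
    return f * SLOTS + t

Q = {}  # upper-triangular QUBO coefficients, keyed by variable pairs
def add(i, j, w):
    key = (min(i, j), max(i, j))
    Q[key] = Q.get(key, 0.0) + w

for f in range(FLIGHTS):
    for t in range(SLOTS):
        add(var(f, t), var(f, t), t)       # delay cost: slot t costs t
        add(var(f, t), var(f, t), -P)      # one-hot: P*(sum_t x - 1)^2,
    for t1, t2 in itertools.combinations(range(SLOTS), 2):
        add(var(f, t1), var(f, t2), 2 * P)  # (constant offset dropped)

for (f, g) in EDGES:
    for t in range(SLOTS):
        add(var(f, t), var(g, t), P)       # conflicting flights, same slot

def energy(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Brute-force ground state of the toy instance (fine at this size;
# an annealer would take over for the real subproblems).
best = min(itertools.product([0, 1], repeat=FLIGHTS * SLOTS), key=energy)
```

In the minimizer, flights 0 and 2 (which do not conflict) take the free slot while flight 1 is pushed to the delayed slot, exactly the behavior the penalty terms are meant to enforce.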