Algebraic number theory relates SIC-POVMs in dimension $d>3$ to those in dimension $d(d-2)$. We define a SIC in dimension $d(d-2)$ to be aligned to a SIC in dimension $d$ if and only if the squares of the overlap phases in dimension $d$ appear as a subset of the overlap phases in dimension $d(d-2)$ in a specified way. We give 19 (mostly numerical) examples of aligned SICs. We conjecture that given any SIC in dimension $d$ there exists an aligned SIC in dimension $d(d-2)$. In all our examples the aligned SIC has lower-dimensional equiangular tight frames embedded in it. If $d$ is odd, so that a natural tensor product structure exists, we prove that the individual vectors in the aligned SIC have a very special entanglement structure, and the existence of the embedded tight frames follows as a theorem. If $d-2$ is an odd prime number, we prove that a complete set of mutually unbiased bases can be obtained by reducing an aligned SIC to this dimension.
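For orientation, a SIC (symmetric informationally complete POVM) in dimension $d$ is a set of $d^2$ unit vectors with constant pairwise overlap; the "overlap phases" referred to above are the unimodular numbers carried by these overlaps:
\[
  \bigl|\langle \psi_j | \psi_k \rangle\bigr|^2 \;=\; \frac{1}{d+1},
  \qquad j \neq k, \quad j,k = 1,\dots,d^2 ,
\]
so that
\[
  e^{i\theta_{jk}} \;=\; \sqrt{d+1}\,\langle \psi_j | \psi_k \rangle .
\]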

Coherence and entanglement are fundamental properties of quantum systems, promising to power near-future quantum computers, sensors and simulators. Yet their experimental detection is challenging, usually requiring full reconstruction of the system state. We show that one can extract quantitative bounds on the relative entropy of coherence and the coherent information, quantifiers of coherence and entanglement respectively, from a limited number of purity measurements. The scheme is readily implementable with current technology to verify quantum computations in large-scale registers, without carrying out expensive state tomography.
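For concreteness, the relative entropy of coherence has the standard closed form $C(\rho) = S(\Delta(\rho)) - S(\rho)$, where $\Delta$ dephases $\rho$ in a fixed basis and $S$ is the von Neumann entropy. The sketch below evaluates this quantity and the purity $\operatorname{Tr}\rho^2$ for a maximally coherent qubit state; it illustrates the definitions only, not the paper's purity-based bound.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], ignoring (numerically) zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    """C(rho) = S(diag(rho)) - S(rho) in the chosen (computational) basis."""
    dephased = np.diag(np.diag(rho))
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

def purity(rho):
    return float(np.real(np.trace(rho @ rho)))

# Pure state |+> = (|0>+|1>)/sqrt(2): maximally coherent qubit state.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(purity(plus))                         # 1.0 (pure state)
print(relative_entropy_of_coherence(plus))  # 1.0 (one bit of coherence)
```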

Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully 'guess' the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography, and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the 'best' code upon which to base a memory does vary according to the nature and severity of the noise, certain trends nevertheless emerge.

We consider a paradigmatic quantum harmonic Otto engine operating in finite time. We investigate its performance when shortcut-to-adiabaticity techniques are used to speed up its cycle. We compute efficiency and power by taking the energetic cost of the shortcut driving explicitly into account. We analyze in detail three different shortcut methods, counterdiabatic driving, local counterdiabatic driving and inverse engineering. We demonstrate that all three lead to a simultaneous increase of efficiency and power for fast cycles, thus outperforming traditional heat engines.
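As a point of reference (the quasistatic benchmark, not the paper's finite-time result), the ideal adiabatic quantum harmonic Otto cycle with oscillator frequencies $\omega_1 < \omega_2$ in the two isentropic strokes has efficiency
\[
  \eta_{\mathrm{ad}} \;=\; 1 - \frac{\omega_1}{\omega_2} ,
\]
and finite-time driving ordinarily degrades performance below this value through nonadiabatic excitations; shortcut-to-adiabaticity protocols suppress those excitations at the price of the driving energy accounted for above.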

We apply the Wigner formalism of quantum optics to study the role of the zero-point field fluctuations in entanglement swapping produced via parametric down-conversion. It is shown that the generation of mode entanglement between two initially non-interacting photons is related to the quadruple correlation properties of the electromagnetic field, through the stochastic properties of the vacuum. The relationship between the process of transferring entanglement and the different zero-point inputs at the nonlinear crystal and the Bell-state analyser is emphasized.

In recent years, much effort has been devoted to the construction of a proper measure of quantum non-Markovianity. However, the proposed measures have been shown to disagree with one another in different situations. In this work, we utilize the theory of $k$-positive maps to generalize a hierarchy of $k$-divisibility and develop a powerful tool, called the $k$-divisibility phase diagram, which can provide further insight into the nature of quantum non-Markovianity. By exploring the phase diagram with several paradigms, we can explain the origin of the discrepancy between two frequently used measures and find the condition under which the two measures coincide.

In a finite dimensional Hilbert space, each normalized vector (state) can be chosen as a member of an orthonormal basis of the space. We give a proof of this statement in a manner that seems to be more comprehensible for physics students than the formal abstract one.
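The standard constructive route to this statement is Gram-Schmidt: orthonormalize the given vector followed by the standard basis vectors, discarding any candidate that becomes (near-)linearly dependent. The NumPy sketch below is illustrative and is not the proof given in the paper.

```python
import numpy as np

def extend_to_orthonormal_basis(v, tol=1e-10):
    """Given a normalized vector v in C^n, return an n x n unitary whose
    first column is v, via Gram-Schmidt on v followed by the standard basis."""
    v = np.asarray(v, dtype=complex)
    n = v.size
    basis = [v / np.linalg.norm(v)]
    for k in range(n):
        e = np.zeros(n, dtype=complex)
        e[k] = 1.0
        # Subtract projections onto the vectors already collected.
        w = e - sum(np.vdot(b, e) * b for b in basis)
        if np.linalg.norm(w) > tol:   # skip (near-)dependent candidates
            basis.append(w / np.linalg.norm(w))
        if len(basis) == n:
            break
    return np.column_stack(basis)

v = np.array([1, 1j, 1]) / np.sqrt(3)
U = extend_to_orthonormal_basis(v)
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: columns are orthonormal
print(np.allclose(U[:, 0], v))                 # True: v is the first basis vector
```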

The principle of non-violation of "information causality" has been proposed as one of the foundational properties of nature\cite{nature}. The main goal of this paper is to explore the gap between quantum mechanical correlations and those allowed by "information causality" in the context of local randomness, using Cabello's nonlocality argument. This is interesting because the gap differs slightly from that found in the context of Hardy's similar nonlocality argument\cite{gazi}.

We treat the eigenvalue problem posed by self-similar potentials, i.e. homogeneous functions under a particular affine transformation, by means of symmetry techniques. We find that the eigenfunctions of such problems are localized, even when the potential does not rise to infinity in every direction. It is shown that the logarithm of the energy displays levels contained in families that are analogous to Wannier-Stark ladders. The position of each ladder is proved to be determined by the specific details of the potential and not by its transformation properties. This is done by direct computation of matrix elements. The results are compared with numerical solutions of the Schr\"odinger equation.

The ability to manipulate the spectral-temporal waveform of optical pulses has enabled a wide range of applications from ultrafast spectroscopy to high-speed communications. Extending these concepts to quantum light has the potential to enable breakthroughs in optical quantum science and technology. However, the filtering and amplification often employed in classical pulse-shaping techniques are incompatible with non-classical light. Controlling the pulsed mode structure of quantum light requires efficient means to achieve deterministic, unitary manipulation that preserves fragile quantum coherences. Here we demonstrate an electro-optic method for modifying the spectrum of non-classical light by employing a time lens. In particular, we show highly efficient, wavelength-preserving six-fold compression of the single-photon spectral intensity bandwidth, enabling a more than two-fold increase of single-photon flux into a spectrally narrowband absorber. These results pave the way towards spectral-temporal photonic quantum information processing and facilitate the interfacing of different physical platforms where quantum information can be stored or manipulated.

Bell inequalities have traditionally been used to demonstrate that quantum theory is nonlocal, in the sense that there exist correlations generated from composite quantum states that cannot be explained by means of local hidden variables. With the advent of device-independent quantum information protocols, Bell inequalities have gained an additional role as certificates of relevant quantum properties. In this work we consider the problem of designing Bell inequalities that are tailored to detect maximally entangled states. We introduce a class of Bell inequalities valid for an arbitrary number of measurements and results, derive analytically their tight classical, non-signalling and quantum bounds and prove that the latter is attained by maximally entangled states. Our inequalities can therefore find an application in device-independent protocols requiring maximally entangled states.
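The simplest instance of such a tailored inequality is CHSH (two measurements, two results per party), whose tight classical, quantum and no-signalling bounds are $2$, $2\sqrt{2}$ and $4$, with the quantum (Tsirelson) bound attained by a maximally entangled state. A quick numerical check of that attainment:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # maximally entangled

def obs(theta):
    """Spin observable at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(a, b):
    """Correlator <A(a) x B(b)> in the state |phi+>."""
    return np.real(phi_plus.conj() @ np.kron(obs(a), obs(b)) @ phi_plus)

# Optimal CHSH angles for |phi+>.
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print(S)  # ~2.828 = 2*sqrt(2); classical bound is 2, no-signalling bound is 4
```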

In an ideal linear amplifier, the output signal is linearly related to the input signal with an additive noise that is independent of the input. The decoherence of a quantum-mechanical state due to amplification is usually assumed to be due to the addition of noise. Here we show that entanglement between the input signal and the amplifying medium can produce an exponentially large amount of decoherence in an ideal optical amplifier even when the gain is arbitrarily close to unity and the added noise is arbitrarily small. These effects suggest that the usual input/output relationship of a linear amplifier does not provide a complete description of its performance.
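The usual input/output relationship referred to here is the standard one for a phase-insensitive linear amplifier of gain $G$, where $\hat b$ is an auxiliary mode of the amplifying medium whose conjugate term supplies the added noise:
\[
  \hat a_{\mathrm{out}} \;=\; \sqrt{G}\,\hat a_{\mathrm{in}} \;+\; \sqrt{G-1}\,\hat b^{\dagger} ,
\]
so that as $G \to 1$ the added-noise term $\sqrt{G-1}\,\hat b^{\dagger}$ becomes arbitrarily small, which is the regime discussed above.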

We present a fiber-integrated spectrometer for single-photon pulses outside the telecommunications wavelength range, based upon frequency-to-time mapping implemented by chromatic group delay dispersion (GDD) combined with precise temporally resolved single-photon counting. A chirped fiber Bragg grating provides low-loss GDD, mapping the frequency distribution of an input pulse onto the temporal envelope of the output pulse. Time-resolved detection with fast single-photon-counting modules enables monitoring of a wavelength range from 825 nm to 835 nm with nearly uniform efficiency at 55 pm resolution (24 GHz at 830 nm). To demonstrate the versatility of this technique, spectral interference of heralded single photons and the joint spectral intensity distribution of a photon-pair source are measured. This approach to single-photon-level spectral measurements provides a route to realize applications of time-frequency quantum optics at visible and near-infrared wavelengths, where multiple spectral channels must be simultaneously monitored.
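As a consistency check on the quoted resolution, the narrow-bandwidth conversion $\Delta\nu = c\,\Delta\lambda/\lambda^2$ reproduces the stated 24 GHz figure:

```python
# Convert the quoted 55 pm spectral resolution at 830 nm into frequency units,
# using dnu = c * dlambda / lambda**2 (valid for narrow bandwidths).
c = 299_792_458.0   # speed of light, m/s
lam = 830e-9        # centre wavelength, m
dlam = 55e-12       # spectral resolution, m
dnu = c * dlam / lam**2
print(f"{dnu / 1e9:.1f} GHz")  # ~23.9 GHz, matching the quoted 24 GHz
```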

We discuss - in what is intended to be a pedagogical fashion - a criterion, which is a lower bound on a certain ratio, for when a stock (or a similar instrument) is not a good investment in the long term, which can happen even if the expected return is positive. The root cause is that prices are positive and have skewed, long-tailed distributions, which, coupled with volatility, results in a long-run asymmetry. This relates to bubbles in stock prices, which we discuss using a simple binomial tree model, without resorting to the stochastic calculus machinery. We illustrate empirical properties of this ratio. Log of market cap and sectors appear to be relevant explanatory variables for this ratio, while the price-to-book ratio (or its log) is not. We also discuss a short-term effect of volatility, to wit, the analog of Heisenberg's uncertainty principle in finance and a simple derivation thereof using a binary tree.
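A minimal numerical sketch of the long-run asymmetry (a toy one-period binomial model with illustrative numbers, not the paper's ratio or criterion): with a positive expected return per period, the typical (median) compounded outcome can still shrink toward zero whenever the expected log-return is negative.

```python
import numpy as np

# Toy binomial step: the stock doubles or loses 60% with equal probability.
u, d, p = 2.0, 0.4, 0.5
exp_return = p * u + (1 - p) * d                  # 1.2: +20% expected return
log_drift = p * np.log(u) + (1 - p) * np.log(d)   # < 0: typical path shrinks

print(exp_return)   # 1.2
print(log_drift)    # ~ -0.112

# Over n periods the *median* outcome is exp(n * log_drift) -> 0, even though
# the *mean* outcome grows like 1.2**n: the asymmetry of a positive,
# long-tailed price distribution under volatility.
n = 50
print(1.2 ** n)               # mean grows without bound
print(np.exp(n * log_drift))  # median is tiny
```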

Single-photon detection with high efficiency, high timing resolution, low dark counts and high photon detection rates is crucial for a wide range of optical measurements. Although efficient detectors have been reported before, combining all performance parameters in a single device remains a challenge. Here, we show a broadband NbTiN superconducting nanowire detector with an efficiency exceeding 92%, a photon detection rate of over 150 MHz and a dark count rate below 130 Hz, operated in a Gifford-McMahon cryostat. Furthermore, with careful optimization of the detector design and readout electronics, we reach an ultra-low system timing jitter of 14.80 ps (13.95 ps decoupled) while maintaining high detection efficiencies.

The framework of entropic dynamics (ED) allows one to derive quantum mechanics as an application of entropic inference. In this work we derive the classical limit of quantum mechanics in the context of ED. Our goal is to find conditions so that the center of mass (CM) of a system of N particles behaves as a classical particle. What is of interest is that Planck's constant remains finite at all steps in the calculation and that the classical motion is obtained as the result of a central limit theorem. More explicitly, we show that if the system is sufficiently large, and if the CM is initially uncorrelated with other degrees of freedom, then the CM follows a smooth trajectory and obeys the classical Hamilton-Jacobi equation with a vanishing quantum potential.

Cooling a qubit into a pure initial state is crucial for realizing fault-tolerant quantum information processing. Here we envisage a star-topology arrangement of reset and computation qubits for this purpose. The reset qubits cool or purify the computation qubit by transferring its entropy to a heat-bath with the help of a heat-bath algorithmic cooling procedure. By combining standard NMR methods with powerful quantum control techniques, we cool the central qubits of two large star-topology systems, with 13 and 37 spins respectively. We obtain polarization enhancements by a factor of over 24, and an associated reduction in the spin temperature from 298 K down to 12 K. Exploiting the enhanced polarization of the computation qubit, we prepare combination coherences of orders up to 15. By benchmarking the decay of these coherences we investigate the underlying noise process. Further, we also cool a pair of computation qubits and subsequently prepare them in an effective pure state.
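As a quick consistency check on the quoted numbers (assuming the high-temperature limit, where spin polarization scales as $1/T$), a 24-fold polarization enhancement maps room temperature onto roughly the quoted spin temperature:

```python
# High-temperature limit: polarization ~ 1/T, so a k-fold polarization
# enhancement corresponds to an effective spin temperature T / k.
T_room = 298.0       # K, the initial spin temperature quoted above
enhancement = 24.0   # reported polarization gain
T_eff = T_room / enhancement
print(T_eff)  # ~12.4 K, consistent with the quoted 12 K
```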

The minimal memory required to model a given stochastic process - known as the statistical complexity - is a widely adopted quantifier of structure in complexity science. Here, we ask if quantum mechanics can fundamentally change the qualitative behaviour of this measure. We study this question in the context of the classical Ising spin chain. In this system, the statistical complexity is known to grow monotonically with temperature. We evaluate the spin chain's quantum mechanical statistical complexity by explicitly constructing its provably simplest quantum model, and demonstrate that this measure exhibits drastically different behaviour: it rises to a maximum at some finite temperature then tends back towards zero for higher temperatures. This demonstrates how complexity, as captured by the amount of memory required to model a process, can exhibit radically different behaviour when quantum processing is allowed.

We present a quantum algorithm for fitting a linear regression model to a given data set using the least squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. Thus, by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time $\operatorname{poly}(\operatorname{log}(N), d, \kappa, 1/\epsilon)$, where $N$ is the size of the data set, $d$ is the number of adjustable parameters, $\kappa$ is the condition number of the design matrix, and $\epsilon$ is the desired precision in the output. We also show that the polynomial dependence on $d$ and $\kappa$ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
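For comparison, the classical least-squares problem the algorithm addresses can be sketched with NumPy (illustrative data; the quantum advantage concerns the dependence on the data-set size $N$):

```python
import numpy as np

# Classical least-squares baseline: fit y ~ X beta by minimizing ||X beta - y||.
rng = np.random.default_rng(0)
N, d = 200, 3                    # N data points, d adjustable parameters
X = rng.normal(size=(N, d))      # dense, well-conditioned design matrix
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=N)  # small observation noise

beta_hat, residuals, rank, svals = np.linalg.lstsq(X, y, rcond=None)
kappa = svals[0] / svals[-1]     # condition number of the design matrix
print(beta_hat)                  # close to beta_true
print(kappa)
```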

The Stern-Gerlach experiment has played an important role in our understanding of quantum behavior. We propose and analyze a modified version of this experiment where the magnetic field of the detector is in a quantum superposition, which may be experimentally realized using a superconducting flux qubit. We show that if incident spin-$1/2$ particles couple with the two-state magnetic field, a discrete target distribution results that resembles the distribution in the classical Stern-Gerlach experiment. As an application of the general result, we compute the distribution for a square waveform of the incident fermion. This experimental setup allows us to establish: (1) the quantization of the intrinsic angular momentum of a spin-$1/2$ particle, and (2) a correlation between EPR pairs leading to nonlocality, without necessarily collapsing the particle's spin wavefunction.