In this note we address the exact solutions of a time-dependent Hamiltonian composed of an oscillator-like interaction with both a frequency and a mass term that depend on time. This is achieved by constructing the appropriate point transformation that deforms the Schr\"odinger equation of a stationary oscillator into that of the time-dependent model. The solutions of the latter can thus be seen as deformations of the well-known solutions of the stationary oscillator, so an orthogonal set of solutions can be determined in a straightforward way. This is possible since the inner product structure is preserved by the point transformation. Moreover, any invariant operator of the stationary oscillator is transformed into an invariant of the time-dependent model. This property yields a straightforward way to determine constants of motion without requiring the use of an ansatz.

Multiparticle entanglement is of great significance for quantum metrology and quantum information processing. We here present an efficient scheme to generate stable multiparticle entanglement in a solid-state setup, where an array of silicon-vacancy (SiV) centers is embedded in a quasi-one-dimensional acoustic diamond waveguide. In this scheme, the continuum of phonon modes induces a controllable dissipative coupling among the SiV centers. We show that, by an appropriate choice of the distance between the SiV centers, the dipole-dipole interactions can be switched off due to destructive interference, thus realizing a Dicke superradiance model. This gives rise to an entangled steady state of the SiV centers with high fidelity. The protocol provides a feasible setup for the generation of multiparticle entanglement in a solid-state system.

The development of the first generation of commercial quantum computers is based on superconducting qubits and trapped ions. Other technologies such as semiconductor quantum dots, neutral atoms, and photons could in principle provide an alternative and achieve comparable results in the medium term. It is relevant to evaluate whether one or more of them is potentially more effective at addressing scalability to millions of qubits in the long term, in view of creating a universal quantum computer. We review an all-electrical silicon spin qubit, namely the double quantum dot hybrid qubit, a quantum technology that relies on both solid theoretical grounding on one side and the massive fabrication technology of nanometric-scale devices by the existing silicon supply chain on the other.

We discuss the possibility of enhancing the sensitivity of optical interferometric devices by increasing their enclosed area using an external field gradient that acts differently on the two arms of the interferometer. The use of combined electric and magnetic fields cancels the nonlinear terms that dephase the interferometer. This is possible using well-defined Rydberg states (typically with $n \sim 20$), a magnetic field of a few tesla, and an electric field gradient of $\sim 10$ V/cm$^2$. However, this allows only for interaction times on the order of tens of $\mu$s, leading to a reachable accuracy only 1 or 2 orders of magnitude higher than standard light-pulse atom interferometers. Furthermore, the control of the fields, states, and 3D trajectories puts severe limits on the reachable accuracy. This idea is therefore not suitable for precision measurements but might eventually be used for gravity or neutrality studies in antimatter.

Atoms trapped in a red-detuned retro-reflected Laguerre-Gaussian beam undergo orbital motion within rings whose centers lie on the axis of the laser beam. We determine the wave functions, energies, and degeneracies of such quantum rotors (QRs) and elucidate the microwave transitions between their energy levels. We then show how such QR atoms can be used as high-accuracy rotation sensors when the rings are singly occupied.
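As a hedged illustration of the kind of spectrum such ring-trapped atoms exhibit, the sketch below computes the textbook energy levels and degeneracies of an ideal planar rigid rotor (a single atom of mass $m$ orbiting at fixed radius $r$). The function name and the example parameters are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def planar_rotor_levels(mass, radius, m_max):
    """Ideal planar rigid rotor: E_m = hbar^2 m^2 / (2 I), with moment of
    inertia I = mass * radius**2.  Levels with m != 0 are doubly degenerate,
    since +m and -m describe opposite senses of rotation."""
    I = mass * radius**2
    m = np.arange(m_max + 1)
    energies = (HBAR * m) ** 2 / (2 * I)
    degeneracies = np.where(m == 0, 1, 2)
    return m, energies, degeneracies

# Illustrative numbers only: a Rb-87 atom on a 1-micron-radius ring.
m, E, g = planar_rotor_levels(mass=1.443e-25, radius=1e-6, m_max=3)
```

Note the quadratic spacing $E_{m+1}-E_m \propto 2m+1$, which makes the transitions between neighboring levels spectrally distinguishable.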

The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function $f$ with finite domain, there is a distribution $\mu$ over the domain of $f$ such that computing $f$ to error $\epsilon$ against inputs from $\mu$ is just as hard as computing $f$ to error $\epsilon$ on worst-case inputs. Notably, however, the distribution $\mu$ depends on the target error level $\epsilon$: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels.

In this work, we introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo's hardcore lemma.

Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms.

Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as "forecasting algorithms" evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.

We prove two new results about the randomized query complexity of composed functions. First, we show that the randomized composition conjecture is false: there are families of partial Boolean functions $f$ and $g$ such that $R(f\circ g)\ll R(f) R(g)$. In fact, we show that the left hand side can be polynomially smaller than the right hand side (though in our construction, both sides are polylogarithmic in the input size of $f$).

Second, we show that for all $f$ and $g$, $R(f\circ g)=\Omega(\mathop{noisyR}(f)\cdot R(g))$, where $\mathop{noisyR}(f)$ is a measure describing the cost of computing $f$ on noisy oracle inputs. We show that this composition theorem is the strongest possible of its type: for any measure $M(\cdot)$ satisfying $R(f\circ g)=\Omega(M(f)R(g))$ for all $f$ and $g$, it must hold that $\mathop{noisyR}(f)=\Omega(M(f))$ for all $f$. We also give a clean characterization of the measure $\mathop{noisyR}(f)$: it satisfies $\mathop{noisyR}(f)=\Theta(R(f\circ \mathop{gapmaj}_n)/R(\mathop{gapmaj}_n))$, where $n$ is the input size of $f$ and $\mathop{gapmaj}_n$ is the $\sqrt{n}$-gap majority function on $n$ bits.
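For concreteness, here is a small sketch of the $\sqrt{n}$-gap majority promise problem under one common convention (the promise that the counts of ones and zeros differ by at least $\sqrt{n}$); the exact convention used in the paper may differ.

```python
import math

def gap_majority(bits):
    """sqrt(n)-gap majority on n bits: output the majority value under the
    promise |#ones - #zeros| >= sqrt(n).  None marks inputs outside the
    partial function's domain (promise violated)."""
    n = len(bits)
    ones = sum(bits)
    gap = abs(2 * ones - n)  # equals |#ones - #zeros|
    if gap < math.sqrt(n):
        return None
    return 1 if 2 * ones > n else 0

print(gap_majority([1] * 13 + [0] * 3))  # gap 10 >= sqrt(16): majority is 1
print(gap_majority([1] * 8 + [0] * 8))   # gap 0: promise violated
```

The promise gap is what makes the function easy enough that dividing by $R(\mathop{gapmaj}_n)$ yields a meaningful normalized measure.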

The quantum speed limit is a fundamental concept in quantum mechanics, which aims at finding the minimum time scale or the maximum dynamical speed for some fixed target. In a large number of studies in this field, the construction of valid bounds for the evolution time is the core mission, yet the physics behind these bounds, and fundamental questions such as which states can actually fulfill the target, are ignored. Understanding the physics behind the bounds is at least as important as constructing attainable bounds. Here we provide an operational approach to the definition of the quantum speed limit, which uses the set of states that can fulfill the target to define the speed limit. Its performance in various scenarios is investigated. For time-independent Hamiltonians, it is inversely proportional to the difference between the highest and lowest energies. The fact that its attainability does not require a zero ground-state energy suggests that it can be used as an indicator of quantum phase transitions. For time-dependent Hamiltonians, it is shown that, contrary to the results given by existing bounds, the true speed limit should be independent of time. Moreover, in the case of spontaneous emission, we find the counterintuitive phenomenon that a low purity can benefit the reduction of the quantum speed limit.
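As a numerical illustration of the time-independent case, the sketch below computes the spectral range $E_{\max}-E_{\min}$ of a small Hamiltonian and a timescale proportional to its inverse. The prefactor $\pi\hbar$ is chosen here purely for illustration; it is not the constant fixed by the paper's operational definition.

```python
import numpy as np

def spectral_range(H):
    """E_max - E_min for a Hermitian matrix H (eigvalsh returns ascending)."""
    evals = np.linalg.eigvalsh(H)
    return evals[-1] - evals[0]

def qsl_timescale(H, hbar=1.0):
    """A timescale inversely proportional to the spectral range; the
    prefactor pi*hbar is an illustrative choice, not the paper's constant."""
    return np.pi * hbar / spectral_range(H)

# Single-qubit example: Pauli-X has eigenvalues +-1, so the range is 2.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
t1 = qsl_timescale(sx)
t2 = qsl_timescale(3.0 * sx)  # scaling H by 3 shrinks the timescale threefold
```

Note that shifting the Hamiltonian by a constant, $H \to H + c\,\mathbb{1}$, changes the ground-state energy but leaves the spectral range (and hence this timescale) unchanged, consistent with attainability not requiring a zero ground-state energy.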

We study Landau-Zener-St\"uckelberg (LZS) interferometry in a cQED architecture under the effects of dissipation. To be specific, we consider a superconducting qubit driven by a dc+ac signal and coupled to a transmission line resonator, but our results are valid for general qubit-resonator devices. To take the environment into account, we assume that the resonator is coupled to an ohmic quantum bath. The Floquet-Born-Markov master equation is numerically solved to obtain the dynamics of the system for arbitrary drive amplitudes and different time scales. We unveil important differences in the resonant patterns between the strong-coupling and ultrastrong-coupling regimes of the qubit-resonator interaction, which are mainly due to the magnitude of the photonic gaps in the energy spectrum of the system. We identify in the LZS patterns the contributions of the qubit gap and the photonic gaps, showing that for large driving amplitudes the patterns present a weaving structure due to the combined intercrossing of the different gap contributions.

The NV-NMR spectrometer is a promising candidate for the detection of NMR signals at the nanoscale. Field inhomogeneities, however, are a major source of noise that limits spectral resolution in state-of-the-art NV-NMR experiments and constitutes a major bottleneck in the development of nanoscale NMR. Here we propose a route by which this limitation could be circumvented in NV-NMR spectrometer experiments, by utilising the nanometric scale and the quantumness of the detector.

We study coherent perfect absorption (CPA) theoretically based on a weakly coupled atom-cavity system with an optically pumped second-order nonlinear crystal (SOC) embedded in the cavity. Our system does not require a strong coupling, which is often needed for CPA in previous studies but is challenging to implement experimentally in some systems. The role of the SOC is to introduce a tunable effective decay rate of the cavity, which can lead to CPA in the weak coupling regime. The proposed system exhibits bistable behaviors, with bistable patterns switchable between conventional and unconventional shapes. By varying the properties of the SOC, the operation point of CPA can be tuned to be inside or outside the bistable regime. It can also be located at the upper or the lower stable branch or even the unstable branch of the bistable hysteresis loop. It is however robust against the parameters of the SOC for any fixed effective decay rate. Our system can potentially be applied to realize optical devices such as optical switches in the weakly coupled regime.

The article proposes the implementation of a universal system of quantum gates on asynchronous excitations of two-level atoms in optical cavities. The entangling operator of the CSign type is implemented approximately, without beam splitters, based on the incommensurability of the periods of Rabi oscillations in a cavity with single and double excitations.

We propose a scheme for the detection of qubit-environment entanglement at time $\tau$ which requires only operations and measurements on the qubit, all within reach of the current experimental state of the art. The scheme works for any type of interaction that leads to pure dephasing of the qubit, as long as the initial qubit state is pure. It becomes particularly simple when one of the qubit states is neutral with respect to the environment, as in the case of the most common choice of the NV-center spin qubit, or for excitonic charge qubits, when the environment is initially at thermal equilibrium.

Quantum computing devices in the NISQ era share common features and challenges, such as limited connectivity between qubits. Since two-qubit gates are allowed only on a limited set of qubit pairs, quantum compilers must transform the original quantum programs to fit the hardware constraints. Previous works on qubit mapping assume that different gates have the same execution duration, which limits their ability to exploit parallelism in the program. To address this drawback, we propose a Multi-architecture Adaptive Quantum Abstract Machine (maQAM) and a COntext-sensitive and Duration-Aware Remapping algorithm (CODAR). The CODAR remapper is aware of gate duration differences and program context, enabling it to extract more parallelism from programs and speed up quantum programs by 1.23x on average in simulation across different architectures, while maintaining the fidelity of circuits when running on the Origin Quantum noisy simulator.

The problem of compiling general quantum algorithms for implementation on near-term quantum processors has been introduced to the AI community. Previous work demonstrated that temporal planning is an attractive approach for part of this compilation task, specifically, the routing of circuits that implement the Quantum Alternating Operator Ansatz (QAOA) applied to the MaxCut problem on a quantum processor architecture. In this paper, we extend the earlier work to route circuits that implement QAOA for Graph Coloring problems. QAOA for coloring requires the execution of more, and more complex, operations on the chip, which makes routing a more challenging problem. We evaluate the approach on state-of-the-art hardware architectures from leading quantum computing companies. Additionally, we apply a planning approach to qubit initialization. Our empirical evaluation shows that temporal planning compares well to reasonable analytic upper bounds, and that solving qubit initialization with a classical planner generally helps temporal planners find shorter-makespan compilations for QAOA for Graph Coloring. These advances suggest that temporal planning can be an effective approach for more complex quantum computing algorithms and architectures.

We review some of the recent efforts in devising and engineering bosonic qubits for superconducting devices, with emphasis on the Gottesman-Kitaev-Preskill (GKP) qubit. We present some new results on decoding repeated GKP error correction using finitely-squeezed GKP ancilla qubits, exhibiting differences with previously studied stochastic error models. We discuss circuit-QED ways to realize CZ gates between GKP qubits and different scenarios for using GKP and regular qubits as building blocks in a scalable superconducting surface code architecture.

We report the experimental implementation of the Dicke model in the semiclassical approximation, which describes a large number of two-level atoms interacting with a single-mode electromagnetic field in a perfectly reflecting cavity. This is managed by making use of two non-linearly coupled active, synthetic LC circuits, implemented by means of analog electrical components. The simplicity and versatility of our platform allows us not only to experimentally explore the coexistence of regular and chaotic trajectories in the Dicke model but also to directly observe the so-called ground-state and excited-state ``quantum'' phase transitions. In this analysis, the trajectories in phase space, Lyapunov exponents and the recently introduced Out-of-Time-Order-Correlator (OTOC) are used to identify the different operating regimes of our electronic device. Exhaustive numerical simulations are performed to show the quantitative and qualitative agreement between theory and experiment.

Scattering theory is a standard tool for the description of transport phenomena in mesoscopic systems. Here, we provide a detailed derivation of this method for nano-scale conductors that are driven by oscillating electric or magnetic fields. Our approach is based on an extension of the conventional Lippmann-Schwinger formalism to systems with a periodically time dependent Hamiltonian. As a key result, we obtain a systematic perturbation scheme for the Floquet scattering amplitudes that describe the transition of a transport carrier through a periodically driven sample. Within a general multi-terminal setup, we derive microscopic expressions for the mean values and time-integrated correlation functions, or zero-frequency noise, of matter and energy currents, thus unifying the results of earlier studies. We show that this framework is inherently consistent with the first and the second law of thermodynamics and prove that the mean rate of entropy production vanishes only if all currents in the system are zero. As an application, we derive a generalized Green-Kubo relation, which makes it possible to express the response of any mean currents to small variations of temperature and chemical potential gradients in terms of time integrated correlation functions between properly chosen currents. Finally, we discuss potential topics for future studies and further reaching applications of the Floquet scattering approach to quantum transport in stochastic and quantum thermodynamics.

Hybrid codes simultaneously encode both quantum and classical information, allowing for the transmission of both across a quantum channel. We construct a family of nonbinary error-detecting hybrid stabilizer codes over the residue class rings $\mathbb{Z}_{q}$ that can detect one error while also encoding a single classical bit, inspired by constructions of nonbinary non-additive codes.

We show that exponential sums (ES) of the form \begin{equation*} S(f, N)= \sum_{k=0}^{N-1} \sqrt{w_k} e^{2 \pi i f(k)}, \end{equation*} can be efficiently evaluated with a quantum computer (QC). Here $N$ can be exponentially large, the $w_k$ are real numbers such that the sum $S_w(M)=\sum_{k=0}^{M-1} w_k$ can be calculated in closed form for any $M$, $S_w(N)=1$, and $f(x)$ is a real function that is assumed to be easily implementable on a QC. As an application of the technique, we show that the Riemann zeta (RZ) function, $\zeta(\sigma+ i t)$, in the critical strip, $\{0 \le \sigma <1, t \in \mathbb{R} \}$, can be obtained in $\mathrm{polyLog}(t)$ time. In another setting, we show that the RZ function can be obtained with a scaling $t^{1/D}$, where $D \ge 2$ is any integer. These methods provide a vast improvement over the best known classical algorithms, the best of which is known to scale as $t^{4/13}$. We also present alternative methods to find $\lvert S(f,N) \rvert$ on a QC directly. This method relies on finding the magnitude $A=\lvert \sum_{k=0}^{N-1} a_k \rvert$ of an $n$-qubit quantum state with the $a_k$ as amplitudes in the computational basis. We present two different ways to obtain $A$. Finally, a brief discussion of phase/amplitude estimation methods is presented.
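For reference, the brute-force classical evaluation of $S(f,N)$ is a single $O(N)$ loop; this is the baseline that becomes intractable at exponentially large $N$. The uniform weights and quadratic phase below are illustrative choices, not the paper's.

```python
import cmath
import math

def exponential_sum(f, w, N):
    """Direct O(N) evaluation of S(f, N) = sum_k sqrt(w_k) e^{2 pi i f(k)}."""
    return sum(math.sqrt(w(k)) * cmath.exp(2j * cmath.pi * f(k))
               for k in range(N))

# Illustration: uniform weights w_k = 1/N (so S_w(N) = 1) and the quadratic
# phase f(k) = k^2/N, which makes S a normalized quadratic Gauss sum.
N = 64
S = exponential_sum(lambda k: k * k / N, lambda k: 1.0 / N, N)
```

For $N$ divisible by 4 the classical Gauss sum identity gives $\lvert S\rvert=\sqrt{2}$ independently of $N$, a convenient sanity check for any implementation of the sum.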