Main idea: There is a problem that constant-depth quantum circuits solve (in 1D without noise, or in 3D with noise) but that classical circuits need non-constant depth to solve.
The 1D problem uses the magic square game on 2 sets of n qubit pairs, where qubit pair j from set 1 and pair k from set 2 are the inputs. A classical constant-depth circuit cannot succeed with probability much above 8/9.
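The classical 8/9 bound for a single round of the magic square game can be checked by brute force over deterministic strategies (a minimal sketch; the variable names are mine):

```python
from itertools import product

# Alice fills a row with three +/-1 entries of product +1; Bob fills a
# column with entries of product -1. They win if they agree on the
# shared cell. Enumerate all parity-legal deterministic strategies.
rows = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == +1]
cols = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == -1]

best = 0
for alice in product(rows, repeat=3):        # one legal triple per row
    for bob in product(cols, repeat=3):      # one legal triple per column
        wins = sum(alice[r][c] == bob[c][r]
                   for r in range(3) for c in range(3))
        best = max(best, wins)
print(best, "of 9")  # -> 8 of 9: no classical strategy wins all inputs
```

The parity constraints make it impossible to agree on all 9 cells, so the best deterministic (and hence best randomized) classical strategy wins 8/9 of uniformly random inputs.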
A sub-theory is a subset of operations, closed under joint measurements.
Main idea: If a sub-theory is contextual, it needs O(n²) space to be classically simulated (vs. O(n) for some noncontextual sub-theories). For the (contextual) qubit stabilizer sub-theory, this quadratic scaling matches the Gottesman-Knill algorithm.
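For scale: a CHP-style Gottesman-Knill simulator stores a binary tableau of 2n stabilizer/destabilizer generators, each with 2n X/Z bits plus a sign bit, which is where the quadratic memory comes from (a sketch; `tableau_bits` is my name):

```python
# CHP-style stabilizer tableau: 2n generator rows, each holding
# 2n X/Z bits plus 1 sign bit -> 2n * (2n + 1) bits total.
def tableau_bits(n):
    return 2 * n * (2 * n + 1)

for n in (10, 100, 1000):
    print(n, tableau_bits(n))  # grows like 4 n^2
```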
Things to clarify:
Where else does contextuality imply simulation hardness?
Noncontextuality is a generalization of classical-ness.
Noncontextuality implies that a hidden-variable theory can explain the results of measurements.
If every pair of observables has to commute or anticommute (as with Pauli measurements), then it is easy to simulate quasi-quantized, “noncontextual” Pauli Hamiltonians! So one can simulate more than just the diagonal terms.
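Whether two Pauli strings commute or anticommute is a cheap site-by-site check, which is part of what makes these sub-theories easy to track classically (a sketch; `commutes` is my name):

```python
# Two Pauli strings anticommute iff they disagree on an odd number of
# sites where both act with different non-identity Paulis.
def commutes(p, q):
    clashes = sum(1 for a, b in zip(p, q)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

print(commutes("XX", "ZZ"))  # True  (two clashing sites)
print(commutes("XI", "ZI"))  # False (one clashing site)
```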
Main idea: qDRIFT randomly applies unitaries e^{i t H_j}, choosing H_j with probability c_j / ∑_j c_j.
With enough gates, it will approximate e^{i t H} (wow!)
It’s good compared to Trotter/Suzuki since the dependence on the number of terms is much better (not directly on L, just on λ = ∑_j c_j).
Stochastically drifts towards target unitary.
There is an interesting quantum channel argument, showing that the Taylor expansion of e^{i t H} ρ e^{−i t H} is close to the probabilistic mixing of gates in qDRIFT (choosing H_j with probability c_j / ∑_j c_j, many times).
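A single-qubit qDRIFT trajectory, following the recipe above (a sketch; the two-term Hamiltonian, coefficients, seed, and step count are made-up example values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy H = 0.8 X + 0.5 Z. qDRIFT samples H_j with probability c_j / lam
# (lam = sum of c_j) and applies e^{i (lam t / N) H_j} at each of N steps.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
terms, coeffs = [X, Z], np.array([0.8, 0.5])
lam, t, N = coeffs.sum(), 1.0, 2000

def expi(A):
    # e^{iA} for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(1j * w)) @ V.conj().T

U = np.eye(2, dtype=complex)
for _ in range(N):
    j = rng.choice(len(terms), p=coeffs / lam)
    U = expi((lam * t / N) * terms[j]) @ U

exact = expi(t * (coeffs[0] * X + coeffs[1] * Z))
print(np.linalg.norm(U - exact))  # small for large N (stochastic)
```

A single sampled product fluctuates around the target; it is the average over trajectories (the channel) that the Taylor-expansion argument controls.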
Is there a higher-order qDRIFT?
What about H = H_a + H_b, using qDRIFT on each one? (Perhaps we can partition the terms so that [H_a, H_b] ≈ 0?)
What if you have one term H_A with a big coefficient and a lot of little corrections? Can you run qDRIFT on just the little corrections, i.e. simulate H_A + H_qDRIFT?
Things to clarify:
What is “compilation”?
Why does qDRIFT perform so much better than Trotter, even numerically?
How does this compare to time dependent perturbation theory?
With an ensemble of random unitaries, you only need a few measurements to generate a “classical shadow” (a best guess at the state, like least squares). This can approximate many linear observables without knowing in advance what you’re looking for.
But you have to bound the shadow norm of operators.
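A single-qubit version with random Pauli-basis measurements (a sketch; the state, seed, and sample count are made up; the real protocol handles many qubits and needs the shadow-norm bound above to control variance):

```python
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),     # X
          np.array([[0, -1j], [1j, 0]], dtype=complex),  # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]    # Z

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # state |0><0|, so <Z> = 1

est, M = 0.0, 5000
for _ in range(M):
    P = paulis[rng.integers(3)]                  # pick a random Pauli basis
    w, V = np.linalg.eigh(P)                     # measure in P's eigenbasis
    probs = np.clip([abs(V[:, k].conj() @ rho @ V[:, k]) for k in range(2)],
                    0, 1)
    k = rng.choice(2, p=probs / probs.sum())     # sample an outcome
    # snapshot = inverted measurement channel applied to the outcome state
    snap = 3 * np.outer(V[:, k], V[:, k].conj()) - I2
    est += np.real(np.trace(paulis[2] @ snap)) / M   # estimate <Z>
print(est)  # close to the true value 1.0
```

Each snapshot `3|b><b| - I` is an unbiased (but noisy) estimate of ρ, so averaging Tr(O · snapshot) estimates any linear observable O from the same data.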