You can find all of my papers on Google Scholar or on arXiv.
Here is a wordcloud of my research.
I’m interested in technological tools that make research easier. If you have a favorite tool, please write to me!
Writing papers: I use Overleaf. But in the brainstorming phase, Overleaf feels way too formal, so I use HackMD. It’s still shareable, and you can write in Markdown and LaTeX. I switch over to Overleaf once the project has enough steam.
Reading literature: I occasionally check SciRate (and as of 2022, I’m the project maintainer). Another graduate student at UChicago set up bots that message on Slack with all the newest CS theory papers and also all papers with “quantum” in the title. I skim through the titles of those most days. I also have Google Scholar updates which send TONS of email, so I set up scholar-alert-digest to convert it into a single page (and it archives my emails). Write to me if you want help setting this up.
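If you want to try scholar-alert-digest yourself, the rough shape of the setup is below. This is only a sketch from memory: it assumes you have Go installed, and the Gmail credential setup and exact flags are described in the project’s README.

```
# rough sketch, assuming Go is installed; see the scholar-alert-digest README
# for the Gmail OAuth credential setup and the exact flags
go install github.com/bzz/scholar-alert-digest@latest

# then point it at the Gmail label that collects your Scholar alert emails;
# it renders them as a single Markdown/HTML digest page
scholar-alert-digest
```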
Diagrams: My favorite is draw.io (which was renamed to diagrams.net). I like its simplicity and the “Sketch” feature.
Presentations: I have two recommendations here:
Google Slides. They are very simple to make, and you know exactly what they will look like. I use the default templates. I use this scratchpad to write LaTeX equations and take screenshots.
I really like the “Slide Mode” in HackMD. You can get a feel for it here and see an example here. I love being able to use Markdown and LaTeX; to edit from any computer and share the link easily; and to not worry much about formatting. I used it for a talk in 2023 and loved it.
August 2023 note: One downside with HackMD slides is actually exporting them to PDF (vs. using the online slide viewer). To do this, I use decktape. You can install it with npm install -g decktape. Then run: decktape generic 'https://hackmd.io/@kmarw/SJxRLR3DO2?controls=false&fragments=false' ~/output.pdf --key=ArrowDown --key=ArrowRight. The fragments=false parameter exports each slide only once, in its final state, with all slide animations/fragments already run. (Note: sometimes the -- symbol doesn’t copy-paste correctly, so type it yourself.)
December 2023 note: decktape is also good for exporting Google Slides to PDF with slide animations/fragments! The standard export to PDF doesn’t do this. Get the Share link, and change /edit to /present.
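For example, a command along these lines should work. This is just a sketch that reuses the same decktape generic options as the HackMD example above; <DOC_ID> is a placeholder for the document ID from your Share link.

```
# sketch: export a Google Slides deck (in /present mode) to PDF, including fragments;
# <DOC_ID> is a placeholder taken from the Share link
decktape generic 'https://docs.google.com/presentation/d/<DOC_ID>/present' ~/slides.pdf --key=ArrowDown --key=ArrowRight
```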
For recording videos, I record the screen with OBS and use a (cheap, small) microphone that plugs into my computer.
Posters: So far, I’ve made posters as Google Slides, exported them to PDF, converted the PDF to PNGs online, and used ImageMagick to arrange them into a grid: montage -density 2400 -tile 3x0 -geometry +0+0 -bordercolor "#000000" -border 10 *.png ../poster.png
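If you’d rather skip the online PDF-to-PNG step, something like the following should work locally. This is a sketch: it assumes pdftoppm (from poppler-utils) is installed, and the 300 DPI setting and file names are just placeholders.

```
# sketch: rasterize each PDF page to a PNG locally, then tile them with ImageMagick
pdftoppm -png -r 300 poster-slides.pdf page   # writes page-1.png, page-2.png, ... (numbering may be zero-padded)
montage -density 2400 -tile 3x0 -geometry +0+0 -bordercolor "#000000" -border 10 page-*.png ../poster.png
```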
See also my projects related to research.
I’d love to see these solved! (updated December 2024)
Computer science as a research area (especially in theory) is concerned with very different problems than the ones today’s computer technology industry worries about. However, compared to mathematicians, computer scientists seem to be much more interested in the relevance of their work to the world. It’s convenient to believe that one’s own research is societally important, but I think this is often false. This is because many theorists model the world with unspoken assumptions that do not match reality:
To a practitioner, an “efficient” algorithm is one they can run on off-the-shelf hardware in a reasonable amount of time (let’s say, within a few minutes). To a theorist, however, “efficient” means something asymptotic: there exist constants c and k such that, on any input of size n, the algorithm runs in time c ⋅ n^k. But theorists rarely pay attention to what the constants c and k are. For example, an algorithm won’t run practically if k is 100: even an input of size n = 10 would take 10^100 steps, and there are only about 10^80 atoms in the universe!
Some recent work places a lot of importance on finding the right value of k; this is often called “fine-grained complexity”. For example, advances in the theory of algorithms (circa ~2021) have found “almost-linear-time” algorithms for many problems, i.e. k ≈ 1. However, these algorithms have astronomical values of c that make them completely impractical.
Other major offenders include sum-of-squares and low-degree polynomial algorithms of degree larger than 4, and recent algorithms for matrix multiplication (I attended a talk that estimated c > 10^50000).
Some questions, like “is there a path from point A to point B?”, have a YES or NO answer (decision problems). But many computational questions are about finding the best answer (search problems), like “what is the path from point A to point B?”. It turns out that many such questions have search-to-decision reductions: we can find the answer by asking many YES or NO questions. For example, we could remove point A from our graph and ask whether any neighbor C of A has a path to point B. If some neighbor C does, we add the edge A–C to our path and repeat the process from C until we reach point B. This gives some evidence that just studying the complexity of decision problems is enough to understand how computation works.
However, not every problem has a known search-to-decision reduction. For example, we can “efficiently” tell whether a number is prime or not, but we don’t know any efficient algorithms to find the prime factors of a number. (We are so confident about this that we have built cybersecurity protocols off of this fact!) Moreover, in the quantum setting, we may ask questions about quantum states and unitaries, which may not have a reduction to decision problems. In fact, some questions about unitaries may not have a reduction to search problems; this is known as the unitary synthesis problem. To combat this, recent work studies “state complexity” and “unitary complexity” directly, but so far (~2025) we have not been able to say much.
You may have heard of “complexity theory”. In this area, we define a set of problems (primarily decision problems) that can be solved in a certain way, and give it a name (like P or NP). After defining many such sets, we try to compare their “complexity”: i.e., which set has “harder” problems? These sets are called complexity classes. It is extremely difficult to show that one set is larger or smaller than another (for example, we still do not know if P = NP, or even if P = PSPACE!). And some complexity classes (like RE) include questions so mind-bogglingly complex (like the halting problem) that they are really probing the theory of computability, i.e. the kinds of questions that can even be answered with a computer at all. Complexity theory has been a spectacular failure in the sense that very few of the defined complexity classes are useful to practitioners. The burden for researchers is only compounded when we include new “types” of complexity theory, e.g. for “relations” (search problems), quantum states, and quantum unitaries. In this type of research, it is often a struggle to prove anything about a complexity class at all, let alone anything meaningful.
To a practitioner, the question of whether a problem is hard is simple: a problem is hard to solve when their particular use case is hard to solve. To a theorist, the figure of merit has long been worst-case analysis: a problem is hard to solve when at least one particular instance of the problem is hard to solve. It does not take a genius to realize that a problem could be easy for my use case, but hard in some contrived setting. But here we are. Most complexity classes are characterized by the hardest problems in the class, and by that I mean the hardest instances of those problems. For example, any problem that can encode SAT (“is there an input that satisfies all the given constraints?”) is NP-hard, regardless of whether that problem is hard for a practitioner’s use case, or even “typically” hard to solve. In defense of early complexity theorists, it’s hard to imagine what else they could have done without predicting the exact uses of computers 50 years into the future. It seems that the deep connection between computer science and cryptography also encourages the worst-case mindset, since cryptographers worry about adversarial attacks (i.e. hacking). Most people using a computer algorithm don’t need to assume that their data is deliberately trying to mess with them.
There are some alternative approaches to complexity, such as average-case or semirandom models. Here, the problem instances come from a distribution, and we want to know whether they can be solved “most of the time”. In fact, random SAT is sometimes easy to solve, even though SAT is the canonical “hard” (NP-hard) problem. Of course, there is a lot of flexibility in choosing these distributions, and there is no guarantee they will exactly match a practitioner’s use case. Still, I see these approaches as noble attempts to understand why some tasks (like training a neural net) can be done in practice, even if they are NP-hard in the worst case. There can also be interesting mathematics, or concepts from statistical physics, behind this inquiry. For example, one recent (~2025) work gives evidence that it is in fact hard to find the global optimum of a random optimization problem, but that one can easily find local optima that are almost as good.
There is good work that is mathematically beautiful, and there is good work that matters to today’s world. But to me, the worst type of research is theoretically convoluted and practically motivated, yet mathematically immature and irrelevant to the real world. I think theoretical computer science is rife with papers that fall in this “awkward middle”. Including some of my own! Hopefully making these assumptions explicit will reduce (at least my own) hubris when trying to argue about the practical relevance of this work.