Kunal Marwaha

notes on Imre Lakatos and passionate rationalism


Here is a set of incomplete notes and reflections on Brendan Larvor’s “Lakatos: An Introduction”.

thought worlds

Imre Lakatos was a fallibilist philosopher of science, mixing Hegelian dialectic with Popperian ideas on rationalism and falsification. Fallibilism holds that no claim to knowledge is certain.

In his view, there are different thought worlds where concepts make sense. Each world can suffer either internal refutations or shifts in meaning. He calls each world a dominant theory or “theory of XXX”.

Worlds aren’t always falsified; sometimes they are simply abandoned (consider catastrophe theory, or the theory of solids). Sometimes a world is abandoned because it is a subset of another world: the new world explains everything the old one did, and more.

Lakatos applies this to the progression of concepts in mathematics. In his view, a word’s meaning depends on the world it’s used in.

We can redefine concepts in existing worlds to keep their meaning in a new world.

It’s not clear whether some worlds are more “perfect” than others. I’m also not clear on what makes the progression of thought worlds “rational”. Does the meaning of any word change “rationally”?

How does mathematics progress? What leads people to find and select “better” or more “natural” proofs? Perhaps it’s an “art”, even to mathematicians themselves. Can philosophy understand how this works?

Logic, to me, is an overused word, but here it refers to the formal or informal system underneath our proofs. What happens when we play with this system?

enter Kuhn

Thomas Kuhn is most famous for The Structure of Scientific Revolutions, in which an immature science can suddenly shift into orthodoxy after a compelling and suggestive work (like Newton’s Opticks). To Kuhn, this process is not exactly random, but still more psychological than logical.

Many people took offense to this: how could science not be scientific? Perhaps we come to new understanding via theories that unify previous theories (or worlds, or paradigms).

But what defines “suggestive” work? If the field isn’t mature, how could you test anything? Maybe this isn’t science but “idea exploration”. To me, medicine 300 years ago was full of crazy ideas. Now, we have a “paradigm” to think about medicine and the ideas to explore within it.

Perhaps the process by which we choose new scientific paradigms resembles Lakatos’ “worlds” or “theories”. Some fall into disuse as we explore new ideas that explain old results.

If you trained a neural net to explain the world, would you be doing science? The results are falsifiable in some sense (you can check against reality) but it’s easy to overfit to the specific problem, even if you have explanatory power.
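The overfitting worry can be made concrete with a toy sketch (my own illustration, with made-up data — nothing here comes from Larvor or Lakatos). An exact polynomial interpolant “explains” every observation perfectly, yet predicts wildly between the observations:

```python
# Toy illustration of "explanatory power" without generalization.
# A high-degree polynomial fit matches every training point exactly,
# but predicts poorly off the training grid.

def lagrange_fit(xs, ys):
    """Return a function that interpolates all (x, y) points exactly."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Training data: roughly y = x, with a little noise (invented numbers).
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0.0, 1.1, 1.9, 3.2, 3.8, 5.1, 6.0]

model = lagrange_fit(xs, ys)

# Perfect "explanation" of every observation...
print(max(abs(model(x) - y) for x, y in zip(xs, ys)))  # essentially 0

# ...but a poor prediction between observations: the underlying trend
# suggests roughly 0.5 here, and the model misses it badly.
print(model(0.5))
```

A simpler model (a straight line) would “explain” the data less exactly but predict far better — the tension the paragraph above is pointing at.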

If a concept has a new paradigm, the others aren’t wrong. They’re just abandoned.

Maybe worlds can be abandoned for simplicity. We don’t always talk about Peano axioms when using algebra, or algebraic geometry when using functions.

We may also use different worlds in different contexts. As in “More is Different” by P. W. Anderson, we can assume ammonia molecules have a dipole. We don’t need to know that ammonia molecules oscillate rapidly, neutralizing this dipole to satisfy quantum-mechanical constraints. Representing an ammonia molecule with polarity “makes sense” when modeling molecular interactions.

(how) does science progress?

Extra complexity in a scientific model isn’t necessarily better (as Occam’s Razor would suggest). But we prefer models that explain everything we know and explain more things that we can test. Do we arrive at these models via Hegelian dialectic, Lakatosian heuristic, or just Kuhnian orthodoxy?

Kuhn doesn’t believe there is a complete way to measure whether science has progressed, as the question “What is scientific?” itself gets updated (teleological, corpuscular, etc).

More importantly, Kuhn is not convinced that stereotypical “normal scientists” are falsifying anything at all. Maybe they’re just solving puzzles within their paradigm. The question here is whether scientists are just blind to the orthodoxy of their paradigm. Karl Popper in particular thought it was dangerous to model scientists as so Orwellian and uncritical.

Maybe scientists falsify “within their lane” as they search for solutions to smaller problems (mostly from increased precision; i.e. we know more).

Students trust many broad theories in classes (e.g. special relativity); further studies and experiments try to falsify smaller claims (like a comet’s motion) that depend on those trusted theories.

Scientists don’t falsify every theory they come across. Each problem has gotten narrower, relying on a larger network of trusted ideas or theories. Maybe scientists inspect one tiny part at a time, never inspecting the whole paradigm at once.

While scientists trust some paradigms, they are critical at a micro-level, falsifying and deliberating about a small set of problems. And orthodoxies change (Kuhn’s “paradigm shift”). For example, we no longer search for or trust teleological explanations for particle motion, but we do expect physical explanations.

There can be two thought worlds that are both useful, neither falsified, and together contradictory. My go-to example is particle theory and wave theory. Claims are consistent and empirically supported within each framework, but it’s not clear if one is more “correct”. You can do science in either paradigm.

This explains why many scientists crave theories of unification.

research programmes and layered trust

It can’t be that we are only searching for explanatory power. If we train neural nets (or other machines) to discover these explanations, but can’t understand why they work: Is that science? It “predicts” the world, but “because Deep Thought said so” doesn’t feel scientific…

Maybe our theories (or “paradigms”) aren’t so disparate but increasingly rely on each other for support. For example, most of us trust the quantum mechanics underpinning chemistry, which in turn underpins cell science. If an underlying framework is falsified (or “abandoned”), we need some replacement framework to do the “explaining”. In a way, paradigms that rely on quantum mechanics give support to quantum mechanics as a paradigm.

In the physical realm, science is the framework that explains phenomena we communally sense (gravity, fluid motion, etc). For example, Einsteinian relativity has to collapse to Newtonian mechanics: we already have observations of the latter.

This makes (at least) scientific explanations anthropocentric: Our theories make sense to us but don’t necessarily make sense to all other life forms.

Maybe there isn’t a knowable answer to “How the universe works”. There are too many unanswerable questions (telos, origin stories) that prevent us from knowing all the answers. Science is an organized approach to learning what we can about the universe. We may not find truth but we may approximate truth more and more precisely.

Hegel, Kant, and Popper influenced Lakatos’s views on the philosophy of science. After Kuhn’s book, Popperian thought in particular was in decline; “falsification” theory didn’t really match up with the historical progression of scientific theories. Lakatos sought to blend the two views (perhaps in a dialectical sense).

Lakatos re-describes a scientific paradigm as a layered research programme. The outer layers can alter with new evidence, but the inner layers retain the self-consistent, near-axiomatic worldviews. As ideas are falsified, the research programme adjusts if possible.

While Kuhn and Paul Feyerabend (in Against Method) claimed that schools of thought progress, die out, or get abandoned in a tumultuous, non-rational way, Lakatos thought there were concrete underlying processes for these events. Larvor describes both heuristic progress (explaining new things) and empirical progress (finding new results) as ways a theory can rationally progress. Lakatos then describes the theory of research programmes using the theory itself.

At least, here’s a Popperian view that agrees with history: Results are often falsified experimentally, but considered “experimental error” in low doses.

I like that research programmes have unfalsifiable “hard cores”. It makes explicit the hunches and axioms that a scientist will hold onto with more grip than the evidence suggests. Most people don’t drop their hypotheses after a single falsifying result.

Did we just get lucky as humans that our explanations continually improve? If we had to start over… would we end up with the same theories?

I think we trust contemporary “science” more because the paradigms are often “unified” (we call it “science”, not “sciences”…). Results in one science paradigm (e.g. condensed matter physics) almost always depend on results of other sciences (e.g. particle physics). You’re not believing in one paradigm, you’re believing in many. In some sense, a Kuhnian “maturing” of a paradigm is integration into the rest of the scientific network.

We’ve created a supercluster of paradigms and schools of thought, that we today trust as a singular “science”. This can include competing schools of thought (as mentioned before) but we can trust most of the supercluster’s results. Things like the healing arts, political sciences, psychoanalysis, and psychiatry don’t have strong connections back into the scientific supercluster: one should trust these paradigms less. I understand why people imagine scientists conspiring with each other…

Contemporary science’s unified view may be incorrect, but look how precisely many things are explained! That’s reason enough to trust our theories over older (often authority-enforced) ones.

Does it matter how the disparate paradigms merge into shared, allied research programmes? Scientific progress may be rational or random, but I only care that it’s explanatory. I think we will never know the Truth without serious assumptions (time and space are smooth, we can successfully generalize our small view of the Universe to the whole thing, etc.), but we are getting closer to Truth.

If other research programmes merge into the supercluster we may then call them “science”. For now they remain simply pseudosciences, ideas, and schools of thought. But how does a paradigm merge in? For example, is cognitive science trying to merge parts of psychology and philosophy with the rest of science?

I don’t know if we have a privileged or rational reason to trust contemporary science, except by appeal to experience, sensory evidence, and use in inventions (medicine, fertilizer, cars, etc). Parts could be wrong (or not explanatory) but I expect humans to update if we find a better explanation.

Of course we don’t know if any of it is True. I’m not sure if there is a universal, transhuman rationality. We have only weak methods to understand and predict the workings of the Universe.

anarchic rationalism?

Lakatos, although a rationalist, still has streaks of anarchy in his methodology:

Lakatos, in particular, believes in a Platonic “rationality” in science and mathematics that cannot be co-opted by the state. This is key for democracy-loving philosophers like Popper.

To Lakatos, there are no self-evident truths…

Contemporary science isn’t clearly “better” than other traditions, but it does explain more things humans have encountered. (I would not include all things labeled “science” in this category, i.e. most of the social sciences.)

But how do we evaluate origin stories? We shouldn’t just check the auxiliary (outer-layer) hypotheses of research programmes. We must also use the “heuristic” of the science to judge its ideas.

When a programme is out of favor (“degenerating”), it may come back at any point. Can we ever fully rule out the research programme of creationism?

Somehow, we have to honestly keep score between programmes in a way that’s not just a popularity contest. This will always have some smear of elitism. Only an insider will have a “feeling for relevance”; they would know best when the science is useful, perpetuating, and progressive. But this can always be fooled.

How do you compare across connected research programmes? Are there ways to formalize or avoid immature “sciences”? How do you evaluate the claim of “much more evidence” (e.g. evolution vs creation)? How do you reconcile “no common ground” (i.e. natives who don’t trust “white science”)?

Perhaps each programme is explanatory. We decide how to allocate our trust. Cultural traditions can be research programmes, too. They aren’t necessarily good or bad: to me, the power is in prediction. How much “knowledge-that” comes from the theory?

If a theory can explain a lot for its size, and it’s well-verified, then we should trust it over other things. This evaluation can’t have strict rules, and is unavoidably elitist.

A fallibilist wouldn’t be a true believer in anything, even the science itself. To a fallibilist, nothing would pass Descartes’ evil demon test. Yet, some methods collect more information about how the world works: I follow these.

I wouldn’t toss traditional or time-tested theories. Look for ways to embed them in other scientific processes. Compare them with a contemporary research programme, and discard them only if the programme is better. (For example, no need to worry about 2012.)

Even scientific processes can be co-opted by states and other authoritarian forces. The United States can still influence scientific progress via defense research. Scientific results cannot be controlled by authority, but authority can control scientific progress.

I think most knowledge in science and mathematics is “knowledge-that” (which is never objective).

If the knowledge-producing framework can explain a lot (algebra; number theory; QED; Ising models; algebraic geometry; topology; etc), keep it around!

Compare across research programmes: The way they advance may be varied and new, but follow the vehicles that produce and predict knowledge!

inconsistencies and legacy

Lakatos certainly had haters. His theories are partially motivated by the politics of his time.

But the dialectical, curious philosopher of mathematics has more ties to the French, continental philosophies than we realize. Lakatos was not an irrationalist, but he was looking for Truth implicit in the practice.

The story of mathematical progress can be painted in broad strokes, but the details are complicated and heterogeneous.

Thought worlds and research programmes help us broadly understand “what is science” or “how does math develop”. But Lakatos leaves us with no clear, universal criteria. Partly as a result, his work is largely a degenerating research programme. But that just means it’s out of fashion, not wrong.

A plethora of models (and degenerated research programmes) just gives us more origin stories to choose from. All models are wrong, all models are useful for some purpose, and some you might like…

It’s not clear what we should trust. Does science accurately depict Truth? Does knowledge evolve coherently? Is there even a Truth, or a description of Truth? Can we measure which research programme captures more information about the world?

If the goal is understanding, there is no clear method to understand more. Science may not be the only place to gain understanding. But with certain claims, it should be considered more trustworthy. Yet it develops strangely: dependent on the programme, and not always “forward”. Its mission can be co-opted for evil – not via bad science, but via bad implications.

Maybe science’s success is determined by the civilization’s success. When you can use new knowledge and models to make your society win wars, the science is held in higher regard, too.

I’m not clear why science provides more and more accurate depictions of our species’ shared reality. Science is fallible, but it’s winning the battle for new understanding. There may be ways to justify its successes, but perhaps it’s no more rational than our desires for explanation.

Corrections? Thoughts? Send me a message.