What definition of pseudoscience would capture economics without capturing medicine, ecology, or meteorology?
Everyone’s just using models here, and the way we incorporate statistical observations to define the limits of the models’ scope, and refine the models over time, or reject the models entirely, applies to economists, meteorologists, seismologists, and many branches of actual human medicine.
Popper would define pseudoscience as predictions that can’t be falsified, but surely that can’t apply to the idea of the weatherman predicting rain and being wrong, right?
Kuhn came along and argued that science is about solving problems within paradigms, and sometimes rejecting paradigms in scientific revolutions (geocentrism versus heliocentrism, Newtonian physics versus Einstein’s relativity), but his framework wasn’t a particularly robust test for separating out pseudoscience.
Lakatos went further in explaining how model-breaking observations get handled within the structure of how science performs its work (limiting the scope of the model, expanding the model’s complexity to fit the new observations, proposing specific exception handlers), but he also observed the difference between the hard core of a discipline, where attempts at refutation are not tolerated, and the auxiliary hypotheses, where scientists are free to test their ideas for falsifiability.
But when you use these ideas to try to understand how science works, I don’t think economics really stands out as less scientific than cancer research or climatology or other statistically driven scientific disciplines.
To quote the other commenter here, basic foundational “observations” in economics aren’t based on the scientific method.
In what way? And how does that differ from how medicine measures pain?
There are several autonomic physical responses to pain that can be measured objectively.
It’s not just faces on a numbered chart.
Namely, the scientific method relies on inductive reasoning and foundational economics relies heavily on deductive reasoning.
The difference isn’t the data itself, it’s what they do with it. Medicine takes subjective, self-reported pain scales and plugs them directly into rigorous, double-blind, randomized controlled trials where they isolate variables to test a strictly falsifiable hypothesis.
Foundational economics, on the other hand, takes subjective concepts like “utility” or “rational self-interest” and uses them as unfalsifiable, deductive assumptions to guess how massive, open systems work.
Basically, you can put a new painkiller in a placebo-controlled trial to test whether it reduces that subjective pain, but you can’t put a macroeconomy in a petri dish to run a controlled, repeatable experiment on supply and demand.
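To sketch what that placebo-controlled comparison boils down to (with entirely made-up pain scores, not real trial data):

```python
# Rough sketch of a trial comparison: all numbers here are invented.
# The falsifiable claim under test: "the drug does not lower reported pain."
import random
import statistics

random.seed(0)

# Simulated self-reported pain scores (0-10 scale) for two blinded arms.
placebo = [random.gauss(6.0, 1.5) for _ in range(100)]
drug = [random.gauss(4.5, 1.5) for _ in range(100)]

def welch_t(a, b):
    """Welch's t-statistic for a difference in means between two samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# A t-statistic far from zero is evidence against the null hypothesis,
# i.e., "the drug does nothing" has been put at genuine risk of falsification.
t = welch_t(placebo, drug)
print(round(t, 2))
```

The point of the randomization and blinding is that the only systematic difference between the two lists is the drug itself, so a large t-statistic can't easily be blamed on a confounder.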
This difference invites a lot of woo.
Plenty of medical science doesn’t lend itself well to double-blind studies. In vivo infection models can’t ethically be tested with double-blind studies and can only be observed. Lots of medicine advances through observational studies, too, like almost anything relating to nutrition, lifestyle, or trauma. There’s no double-blind study on how survivable car accidents are.
Plus, double-blind studies themselves don’t necessarily have any kind of explanatory power (see the entire field of anesthesia, where we know how much of each anesthetic it generally takes to put people under, but not the underlying mechanism that does it). Or, for that matter, Tylenol, whose mechanism of action remains a mystery.
That’s just it, though. Outliers are treated fundamentally differently between them: as bugs in economics, but as features in medicine.
If a “universal” drug fails for a specific group, medicine views that outlier as a falsification that proves the rule is incomplete. They use the exception to fix the theory.
Foundational economics does the opposite: it treats axioms like “rational actors” as holy scripture, so when people don’t behave like the math says they should, the economists just dismiss them as “irrational” and keep the model exactly the same.
Even if we don’t know the mechanism behind Tylenol, we can still falsify whether or not it works. You can’t falsify a “rational actor” because the moment someone does something weird, you just move the goalposts. Medicine is trying to map the territory; foundational economics insists the map is right and the territory is just acting up. It’s barely based in reality.
Outliers are treated fundamentally differently between them: as bugs in economics, but as features in medicine.
I don’t understand what you mean by this.
Let’s take a simple example: the outlier who smokes a lot of cigarettes but outlives the person who doesn’t smoke. Does this break the model in which smoking harms health and increases all-cause mortality (which we know through epidemiological observation of deaths, which is not in any sense a double-blind test)? Where does this observation fit into medicine?
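For what it’s worth, the long-lived smoker can be simulated directly. With invented lifespans, a clear population-level effect and thousands of individual “exceptions” coexist without contradiction:

```python
# Invented lifespans: the claim "smoking increases all-cause mortality"
# is a claim about distributions, so individual outliers are expected.
import random
import statistics

random.seed(1)

# Made-up assumption: smoking shifts mean lifespan down ~8 years,
# with heavily overlapping distributions between the two groups.
smokers = [random.gauss(70, 10) for _ in range(10_000)]
nonsmokers = [random.gauss(78, 10) for _ in range(10_000)]

avg_nonsmoker = statistics.mean(nonsmokers)
mean_gap = avg_nonsmoker - statistics.mean(smokers)

# Smokers who outlive the average nonsmoker -- the "outlier" in question.
outliers = sum(1 for s in smokers if s > avg_nonsmoker)

print(round(mean_gap, 1))  # the population-level effect is plainly there
print(outliers)            # ...alongside thousands of long-lived smokers
```

On these (invented) numbers, roughly a fifth of smokers outlive the average nonsmoker, and none of them falsify the distributional claim.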
Or take the example of a regression discontinuity in economics. A jurisdiction passes a law increasing the minimum wage above the market-clearing wage in that area, and it shares a border with another jurisdiction that has a similar market-clearing wage. Can we observe the differences on both sides of that border to see whether the minimum wage increase leads to an increase in unemployment? Yes, and it’s just applied math at that point.
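A toy version of that border design (all numbers invented): bake a one-point employment drop into the treated side, then recover it by comparing towns just across the border from each other:

```python
# Sketch of a border/regression-discontinuity comparison with fake data.
import random

random.seed(2)

TRUE_JUMP = -1.0  # invented effect of the law on the employment rate

def employment_rate(d):
    """Fake employment rate for a town at distance d (km) from the border.
    Negative d = the side that raised its minimum wage."""
    smooth_geography = 90 + 0.02 * d      # gradual regional trend
    policy = TRUE_JUMP if d < 0 else 0.0  # discontinuity at the border
    return smooth_geography + policy + random.gauss(0, 0.5)

towns = [random.uniform(-50, 50) for _ in range(2_000)]
data = [(d, employment_rate(d)) for d in towns]

# Compare only towns within 5 km of the border on each side.
treated = [e for d, e in data if -5 <= d < 0]
control = [e for d, e in data if 0 <= d <= 5]
estimated_jump = sum(treated) / len(treated) - sum(control) / len(control)

print(round(estimated_jump, 2))  # recovers roughly the built-in -1.0 jump
```

The identifying assumption is that everything except the policy varies smoothly across the border, so any jump right at the discontinuity is attributed to the law.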
Where does behavioral economics fit into your idea that economics expects a rational actor? Economists have measured differences in behavior across many situations, and those measurements are important parts of modern economic theory. So why do you assume those models have been discarded in favor of some sort of doctrinal insistence that humans behave in a particular way?
And if you’re describing the reluctance of practitioners to abandon the core ideas of their models, or the core paradigms of their disciplines, I’d observe that you’re largely correct but wrong to assume it doesn’t happen in things that you’d probably call science, from medicine to meteorology to epidemiology. Things get overturned slowly, and sometimes these paradigm shifts meet a lot of resistance for an entire generation: phlogiston proponents slowly coming around on oxygen, cosmologists saying “fine I guess dark energy exists.”
The critiques you lob at economics are valid. I just think you underappreciate how much they apply to hard science, too.
A weatherman predicting rain has made a falsifiable prediction, how does that relate to Popper?
When a weatherman’s prediction is falsified, the model itself is not disproven. The fact that the practitioners of that discipline stick with it even when a prediction is falsified starts to look like the pseudoscience side of Popper’s falsifiability criterion.
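To make that concrete: a single busted forecast doesn’t touch the model, because forecasters score models in aggregate, e.g. with a Brier score against a naive baseline, rather than prediction by prediction. A sketch with invented forecasts:

```python
# Invented ten-day forecast record. One "miss" (day 5) doesn't sink the
# model; persistently losing to a naive baseline would.
def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

probs  = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.9, 0.1, 0.6, 0.2]  # P(rain)
rained = [1,   1,   0,   0,   0,   0,   1,   0,   1,   0]
#                              ^ day 5: 70% chance of rain, stayed dry

model_score = brier(probs, rained)
baseline_score = brier([0.4] * 10, rained)  # always predict the base rate

print(model_score < baseline_score)  # True: the model wins in aggregate
```

Whether that aggregate-scoring practice counts as rescuing the model or as testing it is, of course, exactly the dispute here.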