
Cargo Cult Science Essay Competitions

The first principle [of science] is that you must not fool yourself — and you are the easiest person to fool. –– Richard Feynman, from his 1974 commencement address at Caltech

In 1974, Richard Feynman gave the commencement address at Caltech, in which he cautioned the audience to understand and be wary of the difference between real science and what he called “Cargo Cult” science. The lecture was a warning to students that science is a rigorous field that must remain grounded in hard rules of evidence and proof. Feynman went on to explain to the students that science is extremely hard to get right, that tiny details matter, and that it is always a struggle to make sure that personal bias and motivated reasoning are excluded from the process.

It’s not the good will, the high intelligence, or the expertise of scientists that makes science work as the best tool for discovering the nature of the universe. Science works for one simple reason: It relies on evidence and proof. It requires hypothesis, prediction, and confirmation of theories through careful experiment and empirical results. It requires excruciating attention to detail, and a willingness to abandon an idea when an experiment shows it to be false. Failure to follow the uncompromising rules of science opens the door to bias, group-think, politically-motivated reasoning, and other failures.

Science is the belief in the ignorance of experts. — Richard Feynman

As an example of how unconscious bias can influence even the hardest of sciences, Feynman recounted the story of the Millikan Oil Drop Experiment. The purpose of the experiment was to determine the value of the charge of an electron. This was a rather difficult thing to measure with the technology of the time, and Millikan got a result that was just slightly too low due to experimental error — he used the wrong value for the viscosity of air in his calculations. This was the result that was published.

Now, a slightly incorrect result is not a scandal — it’s why we insist on replication. Even the best scientists can get it wrong once in a while. This is why the standard protocol is to publish all data and methods so that other scientists can attempt to replicate the results. Millikan duly published his methods along with the slightly incorrect result, and others began doing oil drop experiments themselves.

As others published their own findings, an interesting pattern emerged: The first published results after Millikan’s were also low – just not quite as low. And the next generation of results were again too low, but slightly higher than the last. This pattern continued for some time until the experiments converged on the true number.

Why did this happen? There was nothing about the experiment that should lead to a consistently low answer. If it were just a hard measurement to make, you would expect experimental results to be randomly distributed around the real value. What Feynman realized was that psychological bias was at work: Millikan was a great scientist, and no one truly expected him to be wrong. So when other scientists found their results were significantly different from his, they would assume that they had made some fundamental error and throw the results out. But when randomness in the measurement produced a result closer to Millikan’s, they assumed it was a better one. They were filtering the data until the result reached a value that was at least close enough to Millikan’s that the error was ‘acceptable’. And then when that result was added to the body of knowledge, it made the next generation of researchers a little more willing to settle on a slightly higher, but still too low, result.
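The mechanism is easy to see in a toy simulation. The sketch below uses made-up numbers, not Millikan's actual data: every individual measurement is unbiased, but each generation quietly discards results that land too far from the previously published consensus, and the published value creeps toward the truth instead of scattering around it.

```python
# A toy model of the filtering Feynman described (made-up numbers, not
# Millikan's data). Every individual measurement is unbiased, but results
# that stray too far from the previously published value get "re-checked"
# and quietly discarded.
import random

random.seed(0)

TRUE_VALUE = 1.602   # stand-in for the true charge, in arbitrary units
published = 1.591    # stand-in for Millikan's slightly-too-low published value
NOISE = 0.010        # honest experimental scatter of each measurement
TOLERANCE = 0.004    # results farther than this from the consensus are thrown out

for generation in range(1, 9):
    kept = []
    while len(kept) < 20:                          # each generation runs many drops
        measurement = random.gauss(TRUE_VALUE, NOISE)
        if abs(measurement - published) <= TOLERANCE:
            kept.append(measurement)               # only "reasonable" results survive
    published = sum(kept) / len(kept)              # the new consensus anchors the next generation
    print(f"generation {generation}: published value = {published:.4f}")

# The published value creeps upward toward TRUE_VALUE over several
# generations instead of landing on it immediately.
```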

Note that no one was motivated by money, or politics, or by anything other than a desire to be able to replicate a great man’s work. They all wanted to do the best job they could and find the true result. They were good scientists. But even the subtle selection bias caused by Millikan’s stature was enough to distort the science for some time.

The key thing to note about this episode is that eventually they did find the real value, but not by relying on the consensus of experts or the gravitas and authority of a great scientist. No, the science was pulled back to reality only because of the discipline of constant testing and because the scientific question was falsifiable and experimentally determinable.

Failure to live up to these standards, to apply the rigor of controlled double-blind tests, to make predictions and then test them, or to otherwise concretely test the truth of a proposition, means you’re not practising science, no matter how much data you have, how many letters you have after your signature, or how much money is wrapped up in your scientific-looking laboratory. At best, you are practising cargo-cult science, or as Friedrich Hayek called it in his Nobel lecture, ‘scientism’ – adopting the trappings of science to bolster an argument while ignoring or glossing over the rigorous discipline at the heart of true science.

This brings us back to cargo cults. What is a cargo cult, and why is it a good metaphor for certain types of science today? To see why, let’s step back in time to World War II, and in particular the war against Japan.

The Pacific Cargo Cults

During World War II, the Allies set up forward bases in remote areas of the South Pacific. Some of these bases were installed on islands populated by locals who had never seen modern technology and who knew nothing of the strange people coming to their islands. They watched as men landed on their island in strange steel boats and then began to cut down jungle and flatten the ground. To the islanders, it may have looked like an elaborate religious ritual.

In due time, after the ground was flat and lights had been installed along its length, men with strange disks over their ears spoke into a little box in front of their mouths, uttering incantations. Amazingly, after each incantation a metal bird would descend from the sky and land on the magic line of flat ground. These birds brought great wealth to the people – food they had never seen before, tools, and medicines. Clearly the new God had great power.

Years after the war ended and the strange metal birds stopped coming, modern people returned to these islands and were astonished by what they saw: ‘runways’ cut from the jungle by hand, huts with bamboo poles for antennas, locals wearing pieces of carved wood around their ears and speaking into wooden ‘microphones’, imploring the great cargo god of the sky to bring back the metal birds.

Ceremony for the new Tuvaluan Stimulus Program

Understand, these were not stupid people. They were good empiricists. They painstakingly watched and learned how to bring the cargo birds. If they had been skilled in modern mathematics, they might even have built mathematical models exploring the correlations between certain words and actions and the frequency of cargo birds appearing. If they had sent explorers out to other islands, they could have confirmed their beliefs: every island with a big flat strip and people with devices on their heads was being visited by the cargo birds. They might have found that longer strips brought even larger birds, and used that data to predict that if they found an island with a huge strip, it would have the biggest birds.

Blinded with Science

There’s a lot of “science” that could have been done to validate everything the cargo cultists believed. There could be a strong consensus among the most learned islanders that their cult was the ‘scientific’ truth. And they could have backed it up with data, and even some simple predictions: the relationship between runway length and bird size, the fact that the birds came only when it was not overcast, or that they tended to arrive on a certain schedule. They might even have been able to dig deeply into the data and find all kinds of spurious correlations, such as a relationship between the number of birds on the ground and how many were in the sky, or between the strange barrels of liquid on the ground and the number of birds that could be expected to arrive. They could make some simple short-term predictions from this data, and even be correct.
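That last point is easy to demonstrate: a complex system observed through enough variables will always offer up impressive-looking correlations. The sketch below is purely illustrative, with random data and placeholder variable names; measure enough unrelated quantities over a modest number of observations and a 'strong relationship' almost always turns up somewhere.

```python
# Purely illustrative: many unrelated random "measurements" (placeholder
# variable names), a modest number of observations, and a search across all
# pairs will almost always turn up an impressive-looking correlation.
import itertools
import random

random.seed(1)

NUM_VARIABLES = 40      # bird counts, barrels on the ground, chants per day, ...
NUM_OBSERVATIONS = 20   # say, twenty months of record-keeping

data = {f"var_{i}": [random.gauss(0, 1) for _ in range(NUM_OBSERVATIONS)]
        for i in range(NUM_VARIABLES)}

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

best_pair = max(itertools.combinations(data, 2),
                key=lambda pair: abs(correlation(data[pair[0]], data[pair[1]])))
best_r = correlation(data[best_pair[0]], data[best_pair[1]])
print(best_pair, round(best_r, 2))

# With 40 variables there are 780 pairs to examine, so a correlation of
# roughly 0.6 usually appears somewhere -- in data that is pure noise.
```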

Then one day, the predictions began to fail. The carefully derived relationships meticulously measured over years failed to hold. Eventually, the birds stopped coming completely, and the strange people left. But that wasn’t a problem for the island scientists: They knew the conditions required to make the birds appear. They meticulously documented the steps taken by those first strangers on the island to bring the birds in the first place, and they knew how to control for bird size by runway length, and how many barrels of liquid were required to entice the birds. So they put their best engineers to work rebuilding all that with the tools and materials they had at hand – and unexpectedly failed.

How did all these carefully derived relationships fail to predict what would happen? Let’s assume these people had advanced mathematics. They could calculate p-values, do regression analysis, and had most of the other tools of science. How could they collect so much data and understand so much about the relationships between all of these activities, and yet be utterly incapable of predicting what would happen in the future and be powerless to control it?

The answer is that the islanders had no theory for what was happening, had no way of testing their theories even if they had had them, and were hampered by being able to see only the tiniest tip of an incredibly complex set of circumstances that led to airplanes landing in the South Pacific.

Imagine two island ‘scientists’ debating the cause of their failure. One might argue that they didn’t have metal, and bamboo wasn’t good enough. Another might argue that his recommendation for how many fake airplanes should be built was ignored, and the fake-airplane austerity had been disastrous. You could pore over the reams of data and come up with all sorts of ways in which the recreation wasn’t quite right, and blame the failure on that. And you know what? This would be an endless argument, because there was no way of proving any of these propositions. Unlike the physicists who followed Millikan, the islanders had no test for the objective truth.

And in the midst of all their scientific argumentation as to which correlations mattered and which didn’t, the real reason the birds stopped coming was utterly opaque to them: The birds stopped coming because some people sat on a gigantic steel ship they had never seen, anchored in the harbor of a huge island they had never heard of, and signed a piece of paper agreeing to end the war that required those South Pacific bases. And the signing itself was just the culmination of a series of events so complex that even today historians argue over it. The South Sea Islanders were doomed to hopeless failure because what they could see and measure was a tiny collection of emergent properties caused by something much larger, very complex and completely invisible to them. The correlations so meticulously collected were not describing fundamental, objective properties of nature, but rather the side-effects of a temporary meta-stability of a constantly changing, wholly unpredictable and wildly complex system.

The Modern Cargo Cults

Today, entire fields of study are beginning to resemble a form of modern cargo cult science. We like to fool ourselves into thinking that because we are modern, ‘scientific’ people, we could never do anything as stupid as the equivalent of putting coconut shells on our ears and believing that we could communicate with metal birds in the sky through them. But that’s exactly what some are doing in the social sciences, in macroeconomics, and to some extent in climate science and in some areas of medicine. And these sciences share a common characteristic with the metal birds of the South Sea cargo cults: They are attempts to understand, predict, and control large complex systems through examination of their emergent properties and the relationships between them.

No economist can hope to understand the billions of decisions made every day that contribute to change in the economy. So instead, they choose to aggregate and simplify the complexity of the economy into a few measures like GDP, consumer demand, CPI, aggregate monetary flows, and so on. They do this so they can apply mathematics to the numbers and get ‘scientific’ results. But like the South Sea islanders, they have no way of proving their theories, and they face a multitude of competing explanations for why the economy behaves as it does with no objective way to resolve disputes between them. In the meantime, their simplifications may have aggregated away the information that’s actually important for understanding the economy.

You can tell that these ‘sciences’ have gone wrong by examining their track record of prediction (dismal), and by noticing that there does not seem to be steady progress of knowledge, but rather fads and factions that ebb and flow with the political tide. In my lifetime I have seen various economic theories be discredited, re-discovered, discredited once more, then rise to the top again. There are still communist economics professors, for goodness’ sake. That’s like finding a physics professor who still believes in phlogiston theory. And these flip-flops have nothing to do with the discovery of new information or new techniques; they are driven merely by which economic faction happens to have random events work slightly in favor of its current model, or whose theories give the most justification for political power.

As Nate Silver pointed out in his excellent book The Signal and the Noise, economists’ predictions of future economic performance are no better than chance once you get away from the immediate short term. Annual surveys of macroeconomists return predictions that do no better than throwing darts at a dartboard. When economists like Christina Romer have the courage to make concrete predictions of the effects of their proposed interventions, they turn out to be wildly incorrect. And yet, these constant failures never seem to falsify their underlying beliefs. Like the cargo cultists, they’re sure that all they need to do is comb through the historical patterns in the economy and look for better information, and they’ll surely be able to control the beast next time.

Other fields in the sciences are having similar results. Climate is a complex system with millions of feedbacks. It adapts and changes by rules of its own that we can’t begin to fully grasp. So instead we look to the past for correlations and then project them, along with our own biases, into the future. And so far, the predictive track record of climate models is very underwhelming.

In psychology, Freudian psychoanalysis was an unscientific, unfalsifiable theory based on extremely limited evidence. However, because it was being pushed by a “great man” who commanded respect in the field, it enjoyed widespread popularity in the psychology community for many decades despite there being no evidence that it worked. How many millions of dollars did hapless patients spend on Freudian psychotherapy before we decided it was total bunk? Aversion therapy has been used for decades for the treatment of a variety of ills by putting the patient through trauma or discomfort, despite there being very little clinical evidence that it works. Ulcers were thought to have been caused by stress. Facilitated communication was a fad that enjoyed widespread support for far too long.

A string of raw facts; a little gossip and wrangle about opinions; a little classification and generalization on the mere descriptive level; a strong prejudice that we have states of mind, and that our brain conditions them: but not a single law in the sense in which physics shows us laws, not a single proposition from which any consequence can causally be deduced. This is no science, it is only the hope of a science.

— William James, “Father of American psychology”, 1892

These fields are adrift because there are no anchors to keep them rooted in reality. In real science, new theories are built on a bedrock of older theories that have withstood many attempts to falsify them, and which have proven their ability to describe and predict the behavior of the systems they represent. In cargo cult sciences, new theories are built on a foundation of sand — of other theories that themselves have not passed the tests of true science. Thus they become little more than fads or consensus opinions of experts — a consensus that ebbs and flows with political winds, with the presence of a charismatic leader in one faction or another, or with the accumulation of clever arguments that temporarily outweigh the other faction’s clever arguments. They are better described as branches of philosophy, and not science — no matter how many computer models they have or how many sophisticated mathematical tools they use.

In a cargo cult science, factions build around popular theories, and people who attempt to discredit them are ostracised. Ad hominem attacks are common. Different theories propagate to different political groups. Data and methods are often kept private or disseminated only grudgingly. Because there are no objective means to falsify theories, they can last indefinitely. Because the systems being studied are complex and chaotic, there are always new correlations to be found to ‘validate’ a theory, but rarely a piece of evidence to absolutely discredit it. When an economist makes a prediction about future GDP or the effect of a stimulus, there is no identical ‘control’ economy that can be used to test the theory, and the real economy is so complex that failed predictions can always be explained away without abandoning the underlying theory.

There is currently a crisis of non-reproducibility going on in these areas of study. In 2015, Nature looked at 98 peer-reviewed papers in psychology, and found that only 39 of them had results that were reproducible. Furthermore, 97 percent of the original studies claimed that their results were statistically significant, while only 36 percent of the replication studies found statistically significant results. This is abysmal, and says a lot about the state of this “science.”
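Those two percentages stop being mysterious once you work through the base rates. The sketch below uses assumed numbers, not measurements of psychology itself, to show how a literature consisting almost entirely of 'significant' findings (because those are the ones that get published) can still see most of them fail to replicate.

```python
# Illustrative arithmetic with assumed numbers (not measurements of any real
# field): if only a modest share of tested hypotheses are true and studies are
# underpowered, most "significant" findings are false and replications fail.
prior_true = 0.10   # assumed share of tested hypotheses that are actually true
power = 0.50        # assumed chance a study detects a real effect
alpha = 0.05        # conventional false-positive rate

true_positives = prior_true * power           # 0.050 of all studies
false_positives = (1 - prior_true) * alpha    # 0.045 of all studies

ppv = true_positives / (true_positives + false_positives)
print(f"share of 'significant' findings that are real: {ppv:.2f}")   # ~0.53

# A faithful replication succeeds with probability `power` for real effects
# and only `alpha` for spurious ones:
replication_rate = ppv * power + (1 - ppv) * alpha
print(f"expected replication rate: {replication_rate:.2f}")          # ~0.29
```

Under these assumed numbers, everyone can report their significant results in perfectly good faith while fewer than a third of those findings hold up on replication.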

This is not to say that science is impossible in these areas, or that it isn’t being done. All the areas I mentioned have real scientists working in them using the real methods of science. It’s not all junk. Real science can help uncover characteristics and behaviors of complex systems, just as the South Sea Islanders could use their observations to learn concrete facts, such as the number of barrels of fuel oil being an indicator of how many aircraft might arrive. In climate science, there is real value to be had in studying the relationships between various aspects of the climate system — so long as we recognize that what we are seeing is subject to change and that what is unseen may represent the vast majority of interactions.

The complex nature of these systems and our inability to carry out concrete tests means we must approach them with great humility and understand the limits of our knowledge and our ability to predict what they will do. And we have to be careful to avoid making pronouncements about truth or settled science in these areas, because our understanding is very limited and likely to remain so.

Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation.

— Richard Feynman

If you create a good enough airport—the cargo will come.

What does it take for an individual to do innovative intellectual work, such as scientific discovery? Mere mastery of methods is not good enough.

What does it take for a community or institution to address a volatile, uncertain, complex, and ambiguous world effectively? Mission statements, structures, principles, and procedures are not good enough.

Cargo cult science

Richard Feynman—the foremost physicist of the mid-20th century—gave a famous commencement address on “cargo cult science.”1

During World War II, many Pacific islands that previously had little or no contact with the modern world were used as air bases by the Americans or Japanese. Suddenly, enormous quantities of food, clothes, tools, and equipment, such as the islanders had never seen, appeared out of the sky in magic flying boats. Some of this “cargo” trickled down to the natives, and it was fabulous. Then the war ended, the planes vanished, and—no more cargo!

How to make the cargo flow again? The islanders had observed that, just before cargo arrived, the foreigners performed elaborate rituals involving inscrutable religious paraphernalia. Clearly, these summoned the sky spirits that brought cargo.

Religious entrepreneurs founded cults that duplicated sky spirit rituals using locally-produced copies of the paraphernalia. They imitated the actions of the airstrip ground crews using wicker control towers, coconut headsets, and straw planes (such as the one photographed above). Some cargo cults are still going, generations later, despite their failure to deliver even one landing by the sky spirits. Ha ha, stupid primitive savages!

Except, this is a perfect metaphor for most of what is called “science,” done by people with PhDs.

“Cargo cult science” performs rituals that imitate science, but are not science. Real science sometimes delivers cargo (fame and promotions for scientists; profits for R&D companies; technologies for everyone else). So, you think, OK, what do I have to do to make that happen? How did those guys do it? So you look to see what they did, and you do the same thing. But usually that doesn’t work well.

“Doing what scientists do” is not doing science, and won’t deliver—just as “doing what a ground crew does” doesn’t bring planes. It’s just going through the motions.

But exactly why doesn’t it work? And what does work? What makes the difference between cargo cult science and the real thing?

Cargo cults everywhere

“Cargo cult” describes not just science, but much of what everyone does in sophisticated rich countries. I’m not speaking of our religions; I mean our jobs and governments and schools and medical systems, which frequently fail to deliver. Companies run on cargo cult business management; states run on cargo cult policies; schools run on cargo cult education theories (Feynman mentioned this one); mainstream modern medicine is mostly witch doctoring.

An outsider could see that these cannot deliver, because they are scripted busy-work justified by ideologies that lack contact with reality. Often they imitate activities that did work once, for reasons that have been forgotten or were never understood.

So how do you go beyond cargo cultism? How do you do actual science? Or economics or policy; education or medicine?

And why is cargo cultism so common, if it keeps failing to deliver?

Upgrading

In some video games, you direct the technical and economic development of a handful of hunter-gatherers in straw huts. You start them farming, and they multiply. They build a wooden palisade to keep out hostile strangers. You invent the plow, so their farms become more efficient, and the village grows into a small town. You start them mining, and they build stone houses, and a stone wall to repel invasions. You discover copper smelting and they can make metal plows and swords. And so on—upgrading technology step by step, until eventually your people develop fusion power, take over the whole earth, build spaceships, and set off to colonize the galaxy.

So what about those stupid savages, doing their silly rituals on their Pacific islands?

Suppose they got their imitation runway level enough, and put tarmac on it, and upgraded the control tower from straw to wood to concrete, and installed modern radar and landing control systems, and sent their “ground crew” to Pittsburgh to be trained and certified.

What then?

Imitation and learning know-how

Let’s say you are a new graduate student starting a science PhD program. What you learned as an undergraduate were an enormous number of facts, a few calculation methods, and basic familiarity with some experimental equipment. You learned mainly by being lectured at in classrooms, by reading, by solving artificial puzzle-like problems, and in lab courses where you used the equipment to try to get the known-correct answer to make-believe “experiments.” None of this is anything like actual science: discovering previously-unknown truths.2

Much education assumes the wrong idea that learning consists of ingesting bits of knowledge (facts, concepts, procedures), and storing them, and when you have enough, you can make useful deductions using innate human reasoning. A more sophisticated wrong idea is that there are methods of thinking, and once you have learned them, you can use them reliably. Both of these are partly true—you do need to learn and remember and use facts, and learn and practice and use rational methods—but they are not sufficient.

You can’t learn how to do science from classes or books (although what you do learn there is important). You certainly can’t figure out how to do it from rational first principles! No one has any detailed rational theory of how science works.3 More generally, you mostly can’t learn doing from books or classes or reasoning; you can only learn doing by doing.

In doing, ability precedes understanding, which precedes representation. Knowing-how is not reducible to knowing-that.4 Riding a bicycle is the classic example: no amount of classroom instruction, or rational reflection, could enable a novice to stay upright.

How do you learn know-how?

Imitation is one powerful and common way—one that is unfortunately underemphasized in current American theories of education. The Melanesian cargo cults were founded on the accurate observation that imitation often results in new abilities that you do not understand—at first, at least.

In fact, you start doing science—or any serious intellectual work—by imitation, by going through the motions, not seeing the point of the rituals. Gradually you come to understand something of how and why they work. (If you are smart and lucky; many people never do.) Gradually, you find yourself doing the real thing. At some point, you can improvise, step into the unknown, and create your own methods.

In other words, you can only begin your career as a scientist by doing cargo-cult science. Eventually—if you are smart and lucky—you can upgrade. But almost all scientists get stuck at the cargo cult stage; and almost all supposed science is cargo culting.

Cargo cult science, and cargo cult government and management and education, are based on the perfectly sensible principle of imitation. Why doesn’t that work? Why isn’t classroom science instruction plus learning through imitation good enough?

Why isn’t imitation a sufficient upgrade?

Actually… Why don’t the literal cargo cults work? The answer is not quite as obvious as it may seem at first!

The first obvious answer is: Ha ha, straw airplanes can’t fly, and coconuts are not headphones. But that’s wrong. Proper technology is neither necessary nor sufficient for a functional airport:

  • I have landed (as a passenger) at a remote airport in Alaska that consisted of a dry river bed with the larger rocks cleared off, plus a closet-sized wooden shed with emergency fuel and repair supplies.
  • If someone installed a complete airport facility with all the latest technology on one of the cargo cult islands, and then left, that would be a useless pile of junk. Without a competent ground crew, the buildings and equipment are not an airport.

Better technology would be a significant upgrade—but it is not the whole answer, or even the main one. It would not make the cargo come.

The second obvious answer is: Ha ha, the cargo cultists are only imitating a ground crew; they have no understanding, so they are just going through the motions. But this isn’t right either. Imitating is often a good way of learning, and understanding an activity is often neither necessary nor sufficient to performing it—even to performing it excellently.

You don’t need understanding to ride a bicycle. In fact, almost no one has an accurate mental model of how a bicycle works.5 I am pretty confident that much of what an expert ground crew does, they don’t understand either.

Better understanding, like better technology, would be a significant upgrade for a cargo cult. The same is true in cargo cult science. One commonly suggested antidote is to understand the principles of the field, so you know why its methods work, and aren’t just performing experiments as inscrutable rituals. I advocated this in “How to think real good,” and it’s important enough that I’m working on a post just on it, to follow up this one.

What are “principles” and how do you find them? If they are so great, why aren’t they just taught in the introductory class? Partly because even the best people in the field can’t quite say what the principles are, because tacit understanding does not always enable explicit explanation. Also, many methods are worked out by trial and error, by many people over many years; they do work, but it’s not clearly known why.

Anyway, I doubt a ground crew knows, or is taught, any profound principles of airport operation. The problem with imitation is not solely or primarily lack of deep understanding.

What is missing?

Feynman found the question awkward:

[Cargo cult scientists] follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential.

Now it behooves me, of course, to tell you what. But it would be just about as difficult to explain to the South Sea islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones.

He goes on to suggest that “utter honesty” is the key. He also describes this as “scientific integrity.” And, he points out ruefully, this is rarely taught:

But this long history of learning how to not fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.

If this is as important as he—and I—believe, we ought to ask why it is not taught in universities. (I’ll suggest a reason later in this post.)

I vaguely remember being taught something like this in high school, or even grade school. At that point, it’s irrelevant because you can’t understand what scientific honesty even means until you do your own research.

Until then, there’s only what Feynman calls “conventional honesty,” meaning you don’t make things up. If the meter read 2.7, you put 2.7 in your report, even though 3.9 would be much more exciting.

Although I do consider “utter honesty” important, I don’t think it’s quite right that this is what cargo cult science lacks.6 Or anyway, it’s not the whole story. I think it points in a promising direction, however: toward epistemic virtue.

Epistemic virtue and epistemic vice

Honesty is a moral virtue. It is also an epistemic virtue. Epistemic virtues are cognitive traits that tend to lead to accurate knowledge and understanding. Tenacity, courage, generosity, conscientiousness, and curiosity are some other epistemic virtues—which is why I said I think “utter honesty” is not the whole story.7

Cargo cult science is bad science; and “bad” is a moral, or at least normative, term. Upgrading a cargo cult is, I think, a moral responsibility. Doing bad science is wrong—in a specialized way that goes beyond everyday morality.

What did Feynman mean by “utter honesty”? He didn’t explain exactly, but he did say that it’s not mostly about scientific fraud. Avoiding that is a very low bar, and fraud is relatively rare,8 and easy to eschew. Not committing fraud is, as he puts it, “conventional honesty,” not the special “utter honesty” required in science—and, I would argue, in all intellectual work.

Utter honesty, I suspect, means not just telling the truth, but caring about the truth. Feynman uses the phrase “bending over backward” to suggest a higher standard. You will go to extreme lengths to avoid fooling yourself—partly because then you won’t fool others, but more importantly because you really want to know what’s going on.

“Utter honesty” is about overcoming the “good enough” mediocrity of cargo cult science. Mediocrity comes from going along with the social conventions of your field; accepting its assumptions uncritically; using its methods without asking hard questions about whether they actually do what they are supposed to.

Cargo cultism is the bureaucratic rationality of blindly following established procedures and respecting authority. In the moral domain, that can lead ordinary people into committing genocide without reflection; in science, it leads to nutritional recommendations that may also have killed millions of people. When you look into how those recommendations were arrived at, it becomes obvious that honesty would compel the entire field of nutrition science to resign in recognition of its total failure—both scientific failure and moral failure.

Unflinching lustful curiosity

Important as honesty is, I might rate even higher curiosity, courage, and desire. These are not separable from each other, or from honesty, but it may be helpful to present them as facets of epistemic virtue.

Be curious!

Yeah, good, whatever…

Exhortations to epistemic virtue, and lists of virtues, are not helpful by themselves. We need details. For that, we need to look carefully at specific cases in which epistemic virtue or vice led to success or failure. From them, we can extract heuristics and principles.9

Curiosity

Feynman’s best case study is the rat-running one. (It’s a little too complicated to explain here; it’s near the end of his talk if you still haven’t read that!) It seems to me that the scientists who got this wrong weren’t dishonest. They were incurious: they didn’t actually care about rats. They lacked intellectual desire. They lacked the courage to say “maybe we keep getting inconsistent results because our experimental apparatus is defective.” At some level, they understood that admitting this would lead to a lot of boring difficult work, for which there would be no career reward. (As, Feynman says, occurred: the guy who figured out the problem was ignored and never cited.)

Honesty comes out of curiosity, mostly, I think. If you really do want to know, there’s much less motivation to promote a wrong answer—arrived at either through deliberate fraud or sloppy, inadequately-controlled experimentation.

A reliable recipe for “how to be curious” is impossible (and probably undesirable—you need to choose skillfully what to be curious about). However, we can and should give descriptions of what curiosity is like, so you can recognize when you are curious—and when you are not. Cargo cult science comes from merely going through the motions because you don’t care enough about understanding the phenomena you are studying. It is common for graduate students, or postdocs, or professors, to gradually lose interest in their field without even noticing. Then you do bad work.

Desire

Curiosity is not just caring about which facts are true versus false. It is lust for understanding. What matters is that you want, above all else, to figure out what is actually going on.

Where does curiosity come from? It is not “disinterested,” as some philosophers of science would advocate. You want to know what is actually going on because the thing is cool. If you don’t love your phenomenon of study, you won’t care enough to want to understand it. I would guess that liking rats, finding them cute and funny and interesting and enjoying their company, can make you a better rat-running scientist.

I wrote about this in “Going down on the phenomenon,” making an extended metaphor with sexual desire—which is why I use the term “lust” here.

Beyond respect, one must care about the phenomenon. It seems to me that most academic intellectuals I talk to do not genuinely care about their subject matter. They are more interested in getting papers out of it than they are in learning about it. Analogously, many people in approaching sex are more interested in getting something out of someone than they are in learning about another person (and themselves).

Courage

Every scientist (probably—me for sure) sometimes screws up and promotes an attractive idea that isn’t actually right. That’s unavoidable, probably. Courage and honesty mean recognizing and admitting this when it happens, and being as transparent as possible so other people can detect it.

Courage and honesty may also demand that you be transparent about going beyond the boundaries of your discipline. That can be a taboo—but breaking it is a virtue, because mindlessly adhering to disciplinary conventions is a main cause of cargo cult science. A seminal and excellent paper on research management10 explains:

Research has come to be as ritualistic as the worship of a primitive tribe, and each established discipline has its own ritual. As long as the administrator operates within the rituals of the various disciplines, he is relatively safe. But let him challenge the adequacy of ritualistic behavior and he is in hot water with everyone.

The first conviction of the research specialist is that a problem can be factored in such a way that his particular specialty is the only important aspect. If he has difficulty in making this assumption, he will try to redefine the problem in such a way that he can stay within the boundaries of his ritual. If all else fails, he will argue that the problem is not “appropriate.” Research specialists, like all other living organisms, will go to great lengths to maintain a comfortable position. Having invested much time and energy in becoming specialists in a given methodology, they can be expected to resist efforts to expand the boundaries of the methodology or to warp the methodology into an unfamiliar framework.

I’ll give one example. It is self-serving, but I hope you’ll forgive that if you find it funny. It dates from when I was a graduate student in the MIT Electrical Engineering and Computer Science Department. When anyone asked what I was up to, I replied honestly:

I’m reading about the Balinese Rangda-Barong ritual so I can use existential phenomenology to figure out how to make breakfast.

This was something of a risk.11 It is not the sort of thing EECS students are encouraged to spend their time on. My research was funded by the US Department of Defense. The DOD might not have looked favorably on having their money spent on tantric rituals, phenomenology, or breakfast-making. On the other hand, my understanding of those things led directly to new technical methods and insights that underlie the current generation of military robots. (For better or worse.)

If you do realize that you have lost interest in your field—the fire has gone out of your romance—it may take huge courage to admit that and leave. It’s the right thing to do, though.

Legitimate peripheral participation

Earlier, I asked: why isn’t learning know-how through imitation (plus learning facts through classroom instruction) good enough? Part of the answer is: you need feedback, not just a passive source of emulation.

Consider learning to drive a car. When you take a driving class, you get a bit of lecturing, and there’s a booklet you’re supposed to read, but they don’t tell you anything that isn’t obvious. Are you ready to learn by imitation? No, that would be disastrous. As I wrote elsewhere:12

You need someone to teach you how to drive; someone who will sit beside you and explain the controls, and give directions, and watch you screw up, and tell you what to do instead. The skill can only be transmitted by apprenticeship.

Situated learning theory explains apprenticeship as legitimate peripheral participation in a community of practice.13 Let’s unpack that.

You learn know-how by doing. However, in most cases, just doing on your own is inadequate. Imitation is also mainly inadequate—as the Melanesian cargo cults so dramatically illustrate. Participation means doing with other people who know what they are doing. Typically, we learn from collaboration, not from observing and then accurately duplicating the action by ourselves. We aren’t that smart!

Legitimacy means that you are accepted as part of the group activity, and given a role within it that everyone agrees to. If you walk out onto an airport landing strip and start “helping,” you probably won’t learn anything (even if you aren’t immediately dragged away by security dudes). Members of a ground crew have complex, interlocking duties; you have to fit into that schema to participate.

Becoming a junior member of a research team grants you the legitimacy needed for participation in its scientific activity. This cognitive apprenticeship is the only way to learn to be a scientist.

Being peripheral means that the group initially assigns you a minor role: simple, low-risk tasks that are nevertheless useful. As you master each, you are given increasing responsibility, and trickier, more central roles.

Legitimate peripheral participation is a major reason someone would bother to tutor you. In formal instruction, teachers get paid. But most learning is informal, and most “teaching” is unpaid. The learner’s valuable labor gets exchanged for tuition. This is part of the science system, too: graduate students and postdocs contribute to their professor’s research program.

Legitimate peripheral participation is a more powerful motivation for accurate feedback than money. If a student’s labor contributes to the success or failure of your project, you want to be sure they are doing it right—and so you will scrutinize their work carefully, and give detailed corrective advice.

Feedback is not the whole story, however. People learn from collaboration in ways that go beyond both imitation and explicit correction. We pick up a great deal “by osmosis,” as Feynman put it. The situated learning research program has observed this carefully in hundreds of diverse contexts, and has gone some way toward explaining how it works.

The problem with the cargo cults is not that they are imitating. It’s that their members are not legitimate participants in airport operation.

Imagine a cargo cult downloaded all the manuals for ground crew procedures from the web, and watched thousands of hours of videos of competent ground crews doing their jobs. Imagine they learned them perfectly, and were able to execute them perfectly.

Still no airline would be willing to use their airport. The cult is not certified for operation; it is not legitimate. The proper bureaucratic rituals have not been observed. These rituals are rational: there has to be a fixed procedure for assuring that a ground crew is competent, and making special exceptions could be disastrous. “These cultists sure seem to know what they are doing; let’s create a set of tests to verify that, without putting them through our usual training regimen”? That would risk airplanes and lives, and would probably end the careers of everyone involved.

Communities of practice

A community of practice develops informally and automatically among any group of people who engage in an activity that requires specialized know-how. Whether you are getting seriously into knitting or tokamak optimization, you want and need to talk to other people doing that.

Informal contact naturally develops into a feeling of community. That typically becomes increasingly structured, with multiple communication channels, central authorities, cliques and factions, scheduled and spontaneous group events, and so on. Leaders may formalize the community into an organization, with defined roles and procedures. Air transport organizations take formal bureaucratic rationality to extremes; science somewhat less so.

A community of practice develops its own culture, worldview, and way of being. That includes its own ethical norms, and its own epistemic norms. These may be partly formalized, but remain mainly tacit. They are absorbed by osmosis, as know-how more than as know-that. They are “the way we do things,” which members can gesture at, but not necessarily explain. Becoming a ground crew member, or a scientist, requires a process of enculturation to acquire this tacit knowledge.

Tacit knowledge often contradicts explicit standards—and therefore could not, even in principle, be learned from manuals. In every workplace, there are the official rules, and then there is “the way we do things,” which involves extensive implicit exceptions.14 Those are not ethical norm violations—from the community’s point of view, at least—because “the way we do things” is the ethical standard of the workplace. In every laboratory, there is the protocol manual’s way to run an assay, and there is the way “we” run the assay. That is not an epistemic norm violation—from this research group’s point of view, at least—because the way “we” run the assay is better; or at least takes a lot less hassle and “works perfectly OK.” (Which may very well be true—or not.)

Every social group has two inseparable aspects: it is an invaluable and inescapable resource, but also a zone of socially-enforced conformity, thought-taboos, and dysfunctional practices and attitudes. Every intellectual community transmits to its members a mixture of epistemic virtues and epistemic vices. Some are far more virtuous than others, but none is perfect, nor perfectly depraved.15

Epistemic virtue and vice are not just learned from a community of practice, they inhere in it. The ways that community members interact, and the way the community comes to consensus as a body, are epistemically virtuous or depraved partly independent of the epistemic qualities of individuals. Just as moral preference falsification can lead a community of good people to do terrible things, epistemic preference falsification can lead a community of smart people to believe false or even absurd things.

The problem with nutrition “science” is not that individual nutritionists are stupid, ill-informed, or malicious. It is that the collective epistemic practices of the community are self-serving, wicked, wanton, paranoid, and deranged.16

Like other eternalisms, Melanesian cargo cults involve ideological “beliefs” that work quite differently from pragmatic beliefs like “my bicycle is blue.” Many Christians profess to “believe” the Rapture is imminent, but usually their actions show that this is not a belief in the ordinary sense. Cargo cultists may “believe” that their rituals will bring cargo, but this “belief” is probably as remote and theoretical as Christians’. Such “beliefs” have important functions in maintaining religious identity, membership, and institutions, and in advancing the careers of religious professionals, but they are not taken literally.

The “belief” that particular ritual activities will bring about scientific breakthroughs is often similarly unconnected with scientific discovery. Yet it is similarly important to the smooth functioning of “scientific” institutions and careers.

The replication crisis: mo’ betta rationality vs. epistemic vice

Clueful scientists have recognized for decades that most supposed science is actually cargo culting—but it seemed little could be done.

As Feynman said, cargo cult scientists “follow all the apparent precepts and forms.” The problem is mostly not disregard for epistemic norms; it is that the norms themselves are inadequate. But it is those very norms that define the epistemic community.

The current replication crisis is driven largely by broad moral outrage.17 That motivates a research practices reform movement, seeking to correct epistemic failures that are due to rampant, collective epistemic vice. The moral character of that vice is stressed by some scientific community members—and resisted by others.

The old guard’s attitude is: We followed all the rules, so we deserve to be rewarded accordingly. To which the rebels say: Yes, but the things you thought you discovered weren’t true.

Leaders of cargo cults—in science as well as religion—usually fight to keep their status, power, and income, by opposing attempts at epistemic reform. “Science advances one funeral at a time,” wrote Max Planck:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.18

The current reform movement in academic psychology is led mainly by junior members of the epistemic community, and instigated partly by outsider skeptics. However, some senior members have demonstrated heroic epistemic courage by retracting their own earlier work and advocating epistemic reform.

Reformers—in psychology and other fields such as medical research—advocate better explicit research practice standards. These valuable methods of technical rationality include, for example, more frequent replications; experiment pre-registration; publishing all negative results; reporting effect sizes; and abandoning the famously flawed p<0.05 significance test. If adopted, these will be significant upgrades in epistemic communities that have been practicing mainly cargo cult science. This will be a big win, I think!
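To make concrete why these are real upgrades rather than box-ticking, here is a minimal simulation with assumed numbers: even when the effect under study is real and nobody commits fraud, a literature assembled only from 'significant' results overstates that effect, which is exactly the gap that publishing negative results and reporting effect sizes are meant to close.

```python
# A minimal sketch with assumed numbers: a real but small effect, underpowered
# studies, and a norm of publishing only "significant" results. No fraud
# anywhere, yet the published literature overstates the effect severely.
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2    # real effect size, in standard-deviation units
N = 20               # per-study sample size (underpowered for an effect this small)
STUDIES = 2000

all_results, published = [], []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.mean(sample)
    std_err = statistics.stdev(sample) / N ** 0.5
    all_results.append(mean)
    if mean / std_err > 1.96:        # a crude "p < 0.05" filter
        published.append(mean)       # only significant studies get written up

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean of all studies:       {statistics.mean(all_results):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
print(f"fraction published:        {len(published) / STUDIES:.2f}")

# Pre-registration, publishing negative results, and reporting effect sizes
# all attack this gap between "what was found" and "what was published".
```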

Unfortunately… it also embodies the essential epistemological failure of cargo culting. That is the belief that there must be some definite method that will reliably bring the desired results. Then you just need to follow the recipe, and cargo will arrive, summoned by magic out of the sky.

But Campbell’s Law19 says that if you set up any explicit evaluation criteria, people will find ways to game the system. They’ll find ways to excel according to the standards, without producing your desired outcome. They’ll follow the letter of the rules, but not the spirit. John Ioannidis, who has done more than anyone to improve medical research standards, details exactly how and why this happens in his searing “Evidence-based medicine has been hijacked.” Institutional changes cannot guarantee science (or government, or education, or software development) that goes beyond cargo-cultish adherence to procedures.

So, better explicit epistemic norms are a significant upgrade, but they aren’t the answer. There is no substitute for actually trying to figure out what is going on. That requires technical rationality—but it also requires going beyond technical rationality.

There is no method: only methods

Do not seek to follow in the footsteps of the wise; seek what they sought.20

“There is no method” is a Dzogchen slogan. Dzogchen is unique among branches of Buddhism in offering no path to enlightenment. This may seem paradoxical at first, because Dzogchen offers innumerable methods—probably more than any other Buddhist approach—and is widely considered the most reliable path to enlightenment.

There is no “The Scientific Method,” and science offers no path to truth. That may seem paradoxical at first, because science offers innumerable, excellent methods, and is the most reliable path to truth.

“The Scientific Method” is the central myth of rationalist eternalism. It is scientism’s eternal ordering principle—the magical entity that guarantees truth, understanding, and control. But no one can say what it is—because it does not exist. No one can explain how or why science works in general, nor how to do it.

We can say a lot about how and why specific methods work—and that is critical. Nevertheless, blind faith in any specific method separates you from the reality of what is actually going on. That is the essence of cargo culting.

The kind of upgrade you need to advance from cargo cult airport operation is critically different from the advance beyond cargo cult science:

  • Bureaucratic and technical rationality routinize airport operations, making them reliably good-enough. Conforming to the ritual norms of the ground crew practice community makes you a fully competent ground crew member.
  • Science—and any intellectual work involving innovation—addresses the unknown, and therefore must not be routinized, ritualized, or merely rationalized. Conforming to the ritual norms of a practice community does not produce discovery.

The reason Feynman’s “utter honesty” is not taught is that there is almost nothing to say about it—in general. Epistemic virtues are not methods; they are attitudes, and meta to methods.

Recognizing the limitations of rationalist rituals does not mean abandoning them. You have to use methods, and you also have to relativize them. You need meta-rational competence to recognize when a method is appropriate, and when it is not.21 There is no explicit method for that—but, like riding a bicycle, it can be cultivated as tacit know-how. “Reflection-in-action” describes that meta-level learning process.

For the individual, becoming an actual scientist requires two shifts in identity and membership:

  • First, you become a cargo cultist: a devout member of the community of practice. Acquiring know-how—explicit and tacit—is most of the work here. The way of being of a cargo cult scientist is social conformity.
  • When you have mastered the community’s methods, you see their limitations, and you transcend its epistemic norms, without abandoning them. Developing meta-rational know-how is part of this second shift. However, a shift in your relationship with the scientific community, from mere membership to meta-systematicity, is the key change in the way of being.

Being meta to your community implies critical reflection on its norms. It implies taking responsibility for community development, for upgrading it, while continuing your involvement in it.

Upgrade your community of practice

Despite heroic mythology, lone geniuses do not drive most scientific, cultural, business, or policy advances. Breakthroughs typically emerge from a scene: an exceptionally productive community of practice that develops novel epistemic norms. Major innovation may indeed take a genius—but the genius is created in part by a scene.

“Scenius” stands for the intelligence and the intuition of a whole cultural scene. It is the communal form of the concept of the genius.

Individuals immersed in a scenius will blossom and produce their best work. When buoyed by scenius, you act like genius. Your like-minded peers, and the entire environment inspire you.22

There is no systematic method for creating a scene, for improving epistemic norms, for conjuring scenius, or for upgrading a community of practice. These are “human-complete” meta-systematic tasks.

There is no method—but there are methods. There are activities, attitudes, and approaches that encourage scenius. These are available to individuals, institutions, or both. Neither can change a community’s epistemic norms unilaterally, but both can contribute to upgrades.

Kevin Kelly describes some scene features that individuals can contribute to:

  • Mutual appreciation — Risky moves are applauded by the group, subtlety is appreciated, and friendly competition goads the shy. Scenius can be thought of as the best of peer pressure.
  • Rapid exchange of tools and techniques — As soon as something is invented, it is flaunted and then shared. Ideas flow quickly because they are flowing inside a common language and sensibility.
  • Network effects of success — When a record is broken, a hit happens, or breakthrough erupts, the success is claimed by the entire scene. This empowers the scene to further success.

Management theorists describe “learning organizations” that don’t base themselves on fixed structures, principles, and procedures. Rather, they conduct continuous meta-systematic reflection on their own commitments, and revise those accordingly. Such organizations also foster the learning and development of their members so they can take on increasingly challenging, interesting, and valuable responsibilities. There are abstract and concrete steps an organization can take to transform itself from a cargo cult into a dynamically innovating scene.23

As one example, making it easier for members to switch fields would represent a major upgrade out of cargo cultism in universities and other large institutions. It would take enormous institutional reforms to allow that, and enormous resources to support people in transition, and that would require enormous institutional courage—but it may pay off enormously, too. Fields often advance rapidly when they are joined by talented outsiders who bring powerful, different ways of thinking. And, clearing out the deadwood of people who have fallen out of love with their disciplines would allow vigorous new growth in the fields they leave—without requiring funerals!

Recap: For the win!

Too much of life is wasted going through the motions, playing it by the book, acting according to systems no one really believes in and that fail to reflect a volatile, uncertain, complex, and ambiguous world. This is deadening for individuals, and for society a vast loss of opportunities for prosperity and innovation.

The lesson of cargo cult science for all human activity is that fixed systems are inadequate, because they never fully engage with the nebulosity of reality. We can, and must, upgrade to better ways of thinking, acting, and organizing our communities.

As individuals, we acquire basic competence through legitimate peripheral participation in communities of practice. In becoming a member, we absorb the community’s explicit and tacit norms—including ethical, epistemological, and process norms. Some communities of practice have mainly functional norms; some are highly dysfunctional.

Communities can upgrade their norms—the research practices reform movement is my main example in this post—and individuals can contribute such upgrades. Still, acting according to even the best norms can produce only routine performance, and it inhibits fundamental innovation and discovery.

For individuals, innovation and discovery demand meta-systematic competence. Once we have achieved mastery of the methods of a community of practice, we can reflect on how and when and why they do and do not work well. Then we can accurately select, combine, revise, discover, and create methods.

Communities (including, but not only, institutions) can take a meta-systematic view of themselves. They can reflect on their own goals, structure, dynamics, and norms. Such reflection may afford much greater leverage than incremental process optimization.

In plainer words: win big!