As I was looking back over yesterday’s post speculating on the possible causes of the controversies surrounding evolutionary biology’s public reception, a curious and dangerous idea occurred to me. Curious because it’s potentially very interesting. Dangerous because, like most vague hints and shiny instincts, it might not make any sense at first blush. But as it develops into a more detailed approach, this idea might become central to some of my projects.
There seems to be a disconnect between how scientists and most philosophers of science understand what knowledge is, and how everyday people and traditional epistemologists understand what knowledge is. And it all revolves around the quest for certainty.* The first group tends to view knowledge as better when it is complex, subject to doubt, and open to revision. That revision can be a matter of small refinements, or it can be absolutely radical. This is a perspective on knowledge that has thoroughly absorbed the lessons of Thomas Kuhn and Karl Popper. Science doesn’t progress by preserving and adding to an established tradition. It progresses by continually seeking out new fields of study and new techniques to investigate the world, and letting those new discoveries add to, revise, or occasionally completely reorder established scientific models.
[Image: John Dewey, author of The Quest for Certainty, at the time of his most badass moustache.]
But the general public aren’t taught about science this way unless they go to university, and even at university, they’d have to study philosophy of science with a professor who teaches them about Popper, Kuhn, and their intellectual descendants. The most common public intuition about knowledge is that it must be certain, never under any circumstance subject to revision unless the world itself changes. So goes the intuitive popular account of what knowledge is: a scientific theory is a dogma like any other, and we either believe it wholesale or drop it all as a falsehood. If a fact doesn’t inspire absolute certainty, it isn’t real knowledge.
It used to be a standard in epistemology that a belief counted as genuine knowledge if it was true and you could justify why you believed it. Edmund Gettier wrote a short paper in 1963 consisting of a couple of hypothetical examples of justified true beliefs whose truth and justification were arrived at entirely by coincidence — so justified true belief isn’t necessarily a marker of genuine knowledge, because the believer may just have gotten lucky. Ever since Gettier’s paper dropped, a perennial project in epistemology has been the search for a standard of knowledge even more certain, more rock-solid, than justified true belief.
Now, the complete structure of a scientific theory is a far larger and more complicated beast than a single person’s belief. But both are ultimately conceptions of knowledge, and the two are fundamentally incompatible. Epistemologists and the general public consider certainty the bedrock standard of what we should call knowledge, and anything less than certain is not worth believing. Philosophers of science, scientists, and I consider the best knowledge to be flexible, functional, always open to revision, always hunting for new facts, new evidence, new reasons for revision — the very revision that, for the general public, invalidates an institution as knowledge.
Ultimately, I think which conception of knowledge you accept depends on what kind of person you are. If you fear a rudderless world, then certainty will be your standard for knowledge. To put it in terms of yesterday’s post, such a person is more amenable to swallowing the pat certainties of religious dogma: you accept these certainties, and you need never have a doubt or ask a question again. If you’re strong enough to live in a world of contingencies and uncertainties, where anything could be open to change, then you’ll understand how science and knowledge actually work.
I’m not entirely sure what to do with this idea, but I think this weekend, I’ll take some long walks and try to figure it out.