What’s your certainty type? Are you a foundationalist? A coherentist? A truth-tracking evidentialist? Or have you given up on certainty entirely? In today’s blogcast, and in our next few episodes, we’ll sort through what certainty really means, and how much of it we need for the maximization of our awesomeness. Ready?
The world of philosophy sometimes seems remote from the world of decision making. Do we really have to prove we are not surrounded by a computer simulation in the Matrix before pursuing a medical career or a romance? It’s remotely possible that there is an alien plot to impregnate you through the GMOs in that banana you’re about to eat. Are you sure you want to sink your teeth into it?
As I’ve explained before, Pamalogy is a philosophical system. The Pamalogy Society is a non-profit corporation that is dedicated to the maximization of awesomeness. Questioning everything is a philosophical habit. Maximizing awesomeness is the application of that philosophy. If you’ve ever taken a course in philosophy, then you probably know it is full of industry jargon that takes years of getting used to. That’s not going to happen here.
I’m going to assume my readers and listeners are smarter than average, but won’t assume they are schooled in philosophy. Everything you see here is just supplementary to the Pamalogy 101 courses we offer for free on our web site, so you can earn yourself a top hat and start sharing what Pamalogy is with your friends when they ask you where you got that cool hat. It’s all introductory level stuff.
So, let’s talk about it. Probably the most famous philosophical statement ever made was, “I think, therefore I am.” It was said several hundred years ago by René Descartes, after performing a thought experiment aimed at deciding what he could be certain of. He imagined there was an evil demon smart enough, powerful enough and wicked enough to utterly deceive him by completely surrounding him with false realities. “What if” … he thought to himself … “all that I feel, taste, see, hear and even think is all some grand illusion? Can I be certain of anything at all?”
As remote as that possibility seemed, Descartes had to admit that it was still possible. But then he had an “ah ha!” moment. It occurred to him that even if he was completely deceived about what reality was, even that deception was at least a thought. At the least, he could acknowledge that he was thinking. And if he was thinking, he could also say that he existed, because in order for thinking to happen, something must exist to do the thinking.
Now if you’ve studied mathematics, you might be familiar with some other things Descartes was famous for, like the Cartesian coordinate system. He was a very smart guy. Descartes’ goal was to base science on a foundation of certainty, after certain former scientific beliefs, like the idea that the Sun revolved around the Earth, had been disproven. I won’t go into his whole thought process here because subsequent philosophers found weak spots in his logic that rendered his system somewhat useless and impractical. But I want to start with Descartes, because he was the founder of a school of philosophy called “foundationalism.” And while foundationalism fell out of fashion over time, Pamalogy takes a fresh look at it and uses it as a building block in awesomeology.
Foundationalism is all about certainty. It isn’t about testing things through experience. It is about the things that are necessarily true based on pure logic. More can be known from logic alone than most people realize. So for Pamalogists, that’s where any discussion of certainty should start. Are you a foundationalist? Most people aren’t. They judge based on experience, not logic or pure reason.
I could start this discussion with some disagreements I have with Descartes. I think he should have reduced his conclusion to “perception happens” rather than “I think, therefore I am.” His perception of the concept of “I” or “me” may have been an illusion too. Have you considered that? Maybe not. But on closer examination, nothing in his thought experiment proved that whatever happened to be thinking wasn’t a “they,” a “we” or even a “you.”
Personal identity is something Descartes didn’t explore. But some philosophers have, and their work has evolved into a branch of philosophy called identity theory.
One of the better-known recent identity theorists was a guy named Derek Parfit. Parfit and the identity theorists ask things like: what happens when Star Trek’s transporter malfunctions and makes a copy of Captain Kirk instead of changing his location? When he gets copied by mistake, which one is the real Captain Kirk?
This question may seem silly and irrelevant to the real world, but what if, as certain quantum physicists have suggested, there is a multiverse that splits off at the quantum level into every possibility? If that’s true, then what does that do to identity theory? Splitting could be intrinsic to quantum mechanics and everyday physics. But what would that mean for personal identity? It would radically change our perception of what we are.
Indulge me for a minute on this. To keep it simple, let’s start with a single-split example. Let’s say the whole Universe splits the way an embryonic cell splits and multiplies in the womb. In the womb, at one moment there is a single-cell organism. Then at some point the single cell duplicates into two cells as a single person grows. But in the case of twinning, the two cells separate, becoming identical twins.
This raises the question: when you think about the first cell, prior to the point of twinning, do you think of two human beings, or one? When the cell duplicates and becomes twins, which human being possessed the original cell? Both? Or were two new human beings formed only after the twinning event? And if so, was the original cell a human being at all? In my judgment, the two human beings shared the same body during the time when there was only one cell.
Similarly, if, like a twinning embryo, the whole Universe split into two different timelines, what was in the past was common to both. So we could call Universe A and Universe B unique Universes with respect to their future, but with respect to their past, we could speak of one Universe that bundled together both Universe A and Universe B. It was two Universes that looked like one.
Derek Parfit can rest assured that when cloned instead of transported, Captain Kirk has one common past but two separate futures. There were both a Captain Kirk A and a Captain Kirk B, bundled up together all along in what we saw. Everyone knew them singularly as plain old Captain Kirk prior to the duplication, but Captain Kirk was actually a “they,” not a “him,” the whole time. Who knew?
It’s a good example because we can imagine Captain Kirk in two different places at once, like twins. The splitting of Universes is harder to picture because we think of space as something limited. We only see one timeline, the one we view looking at the past. We think about the future in terms of one possibility, and usually fail to consider that multiple possibilities might become multiple realities. We don’t tend to look at ourselves as a bundle of future people. We see ourselves as “I” or “me” from what we’ve observed in the past. But if it were true that the Universe continually branches into every possible future, then it would be more accurate to refer to “we” than “me.” And if you wish to refer to me as “they,” as my preferred pronoun, or “you” in the plural sense, feel free. I’ll consider it an acknowledgement of your belief in the multiverse.
It’s something to ponder. For now, I’ll move on from that sort of metaphysical philosophy to applied awesomeology – something not quite so far out. I’ll pick up on the multiverse later, showing you why I think a multiverse is logically necessary and true, not just something to talk about while you listen to Pink Floyd and smoke weed. Also, I’m going to show you why it is actually not true that every possibility occurs. But that’s all for future episodes. For now, let’s get into practical Pamalogy. How do we maximize the awesomeness of any one Universe?
This may sound like a contradiction, given what I just explained, but the first step in applied awesomeology is to assume that what you see is real. What seems most obvious is probably true. If you see your hand in front of you, your hand is probably in front of you. There is sensory evidence that this is true. Work with it. If sensory evidence turns out to be false in the long run, your error probably won’t matter much unless you’re sleepwalking and about to knock yourself out on a blunt object. What are the odds? Applied awesomeology is probabilistic.
Absolute certainty, in the form of logical necessity and inference, is the domain of foundationalism. Probabilism is calculating, but it isn’t always tied to logical deduction and necessity the way foundationalism is. It deals just as easily with the world of experience, trusting that it is what it looks like it is. Even if it understands that, hypothetically, our experience could be the result of some mad scientist who has put our brains into a big vat at body temperature and is manipulating all of our thoughts, it judges that scenario to be neither obvious nor likely.
Instead, we form coherent theories about reality. This is what philosophers call “coherentism.” Coherentism pretty much abandons foundationalism in favor of a coherent set of ideas. It is satisfied with a lower standard of proof. Basically, it says: here’s all the related evidence for something; here’s the best explanation for it; so here’s how things probably are, and this is what we can consider to be true. Some things are more certain than others. We can build around the things we’re pretty sure of based on our experience, even if we are relying on our somewhat unreliable senses.
When you think of coherentism, think of a crossword puzzle. Do all of the answers fit together? If they do, then that is probably the right solution. But have you ever done a crossword puzzle two different ways, where both sets of answers correctly filled in all the blanks? It can happen. And in fact, this is why we wind up with different ideologies and worldviews.
Areas of controversy usually center around causes and effects. Our worldview centers around who and what we trust. We often say, “trust the science.” But some of us have learned to say, “Are you kidding me? Science has become political. The last thing you should do is trust the science!” I don’t know if I agree with that. I still pretty much trust science. But not everybody feels that way.
Karl Popper was a philosopher who had much to say about science and would have sympathized with the skeptics. Popper thought that the whole point of science was to disprove theories, not prove them right. Science tells you something is true only to the extent that a theory hasn’t been disproven. The goal of science is to keep testing against theories, not for them. If we are to assume that man-made global warming is true, for instance, Popper might question why so much of our funding goes to proving it is true rather than to attempting to prove that it isn’t. If it’s true, no amount of testing will disprove it. In Popper’s view, assumptions are always theories. In general, the scientific community has been very critical of Popper. But in Popper’s world, it might even be healthy for science to have the antagonism of climate change deniers attempting to counter the theory of man-made global warming, because they try to present data that contradicts the theory. A lot of people find climate change denial to be highly dangerous for life on planet Earth. Is that true?
Both deniers and alarmists are coherentists. Coherentism starts with assumptions. If our worldview starts with the assumption that man-made global warming is real and about to doom us, then our opinion about what measures we should take to combat it will be different than if we think the matter still needs to be tested. Assuming it is either true or false is like completing the crossword puzzle in ink. Maybe we should fill in the blank, but not everybody is so sure about that.
The problem with coherentism is that each person has their own set of coherent assumptions. Some may agree that we have a global warming problem, but not agree that the total cessation of fossil fuel use would make any measurable difference. Some might predict there are only ten years left before the Fiji Islands are completely submerged in the Pacific Ocean, while others suppose it won’t happen for centuries to come, or at all. Some may think fossil fuel use contributes to global warming. Others may think it is inevitable that the Earth will have warm and cold cycles, and that fossil fuels have a negligible impact, or that we’re responding the wrong way.
The Truth Machine that I’ve invented is designed to hash out differences between coherentists. Although conflicting worldviews are a reality, their plural perspectives can be used to cross-check one another. The common housefly has compound eyes with thousands of lenses for a reason: they help it survive long enough to reproduce and perpetuate its species. If we reduce our worldview to a singular perspective, we risk getting swatted out of existence. The bad policies we create could be just as dangerous as global warming. They could lead us to nuclear war, to neo-Nazism, to worldwide poverty and disease. Good policies could save us from any of that. But what are they? The truth matters: not just a majority opinion, but reality.
With so much at stake, we can’t risk being wrong. What will we do with these competing coherent worldviews?
Fortunately, the philosophy of certainty moved beyond coherentism during the last half century, and I think we can apply it. Coherentism has a close companion called evidentialism. Evidentialism is independent of coherentism, which is a set of beliefs people don’t easily change because it is made up of what they think of as foregone conclusions. With evidentialism, if the evidence supports something, maybe it’s true. We have to be willing to abandon our preconceptions.
Then again, maybe evidence is not all it’s made out to be. Any epistemologist will tell you that evidence alone doesn’t mean knowledge in the sense of certainty. In fact, it can be entirely misleading.
Think of Dexter. Dexter Morgan always covers up his murders. It’s all about evidence. The serial killer is good at making it look like somebody else did it, and manages to survive through eight seasons, fake his death, and then live on for a revival season. Evidence can be very misleading. To think of it as creating sufficient certainty would be a big mistake. We need something more.
More respected among philosophers than Dexter was Edmund Gettier. Epistemologists tend to obsess over Gettier, and I wish I had all day to talk about him. Gettier challenged the idea that knowledge is justified true belief. He showed us that even when you think you have all the right reasons to believe, even when your belief is justified, and even if you are right, you may still lack certainty, because you believe for the wrong reasons.
OK. Here’s one example:
Suppose you are justified in believing your train arrives in an hour: the clock at Grand Central Station says it’s 2 pm, and the train is scheduled for 3 pm. But now let’s say the clock isn’t actually working. It stopped at 2 pm last week, and you just happened to glance at it at exactly 2 pm a week later. Your belief that the train would arrive in an hour would be true. But it would be based on bad evidence. Clocks generally work, but did you check to see if this one did?
Certainty is an important word when it comes to complex political issues that could save the world. Have we checked the clock to make sure we have not just evidence, but reliable evidence as well?
Another philosopher, Alvin Goldman, introduced two ideas that might help here. First, he thought maybe we could have certainty if we had knowledge of causal connections. Causality shouldn’t be ignored in assessing why evidence looks the way it does, and if we’re wrong about causes, that can really skew our opinion of why we believe things and whether our belief is truly justified. In the example we just saw, if you had known about the coincidence, then you’d have had certainty. Of course, you’d need more evidence to verify that.
Still, knowing causes is insufficient for obtaining certainty. There were other Gettier cases, not all proposed by Edmund Gettier himself, that Goldman’s causal theory didn’t solve. Realizing this, Goldman later introduced the idea of reliabilism as well. It’s sort of obvious, but as a condition of obtaining certainty, Goldman said the methods and instruments we use to obtain it have to be reliable.
So, for instance, when the cop tells the judge you were going 65 mph in a 30 mph zone, your lawyer can ask when the cop’s speed gun was last checked. If its certification isn’t up to date, proof of your speeding may be insufficient. And even if it was checked, when was the calibration device itself last checked? This chain of reliability could go on endlessly, and as true as it is that you were probably speeding anyway, that cop didn’t provide the certainty needed to convict you in the eyes of the court.
It may seem like a loophole in the law in the case of speeding tickets, but what if the issue was whether or not to launch a nuclear weapon in defense of the whole world? Reliabilism totally kicks in when certainty matters most. But is even that enough?
If there’s anything Ocean’s Eleven, Mission: Impossible and other high-tech suspense movies involving break-ins have taught us, it’s that technology can be hacked. While the security guard is looking at a monitor, a video loop can be inserted. This is why a majority of Americans don’t believe that Jeffrey Epstein really committed suicide. A prison can have very high security that is reliable and cross-checked numerous ways, but are there any vulnerabilities? If there are, then what we think ought to be reliable, and an obvious cause, can leave us deliberately deceived by someone who knows how to get away with a crime. A coroner’s report that would normally help was ambiguous enough to allow the break-in conspiracy theory to keep thriving.
The public believed that there could have been a motive for such a break-in among interested parties in high enough places to be capable of pulling it off. But that doesn’t mean the public is necessarily gullible. Quite the opposite: by believing such a conspiracy theory was possible, the public was exercising what some philosophers regard as the next great advancement in determining certainty – truth tracking.
Truth tracking was made famous by the philosopher Robert Nozick as a way to better deal with certain Gettier cases. Truth tracking is just what it sounds like. In the Epstein case, the public, by thinking the truth wasn’t tracked, allows that despite all the reliable methods behind ruling his death a suicide, we can’t be certain. The only way to confront the conspiracy theory would be to track each suspect to make sure they couldn’t have instigated it. Nobody in any high place in society is tracked to that extent. We don’t do that. That level of accountability doesn’t exist. Therefore, a murder plot can’t be ruled out.
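For the curious, Nozick’s tracking account says, roughly, that a subject knows p only if: (1) p is true, (2) the subject believes p, (3) if p were false, the subject would not believe p, and (4) if p were true, the subject would believe p. Here is a minimal toy sketch of those conditions in Python. The function and the “nearby worlds” representation of counterfactuals are my own illustration, not Nozick’s formalism.

```python
# Toy model of Nozick's four truth-tracking conditions (illustrative only).
# A "world" pairs whether p is true with whether the subject believes p.
# Counterfactuals are crudely approximated by lists of belief states in
# nearby possible worlds where p is false (or true).

def tracks_truth(actual, nearby_p_false, nearby_p_true):
    """actual: (p_is_true, believes_p) in the actual world.
    nearby_p_false / nearby_p_true: believes_p values in close worlds
    where p is false / true."""
    p, believes = actual
    cond1 = p                        # (1) p is true
    cond2 = believes                 # (2) S believes p
    cond3 = not any(nearby_p_false)  # (3) were p false, S would not believe it
    cond4 = all(nearby_p_true)       # (4) were p true, S would still believe it
    return cond1 and cond2 and cond3 and cond4

# The stopped-clock case: the belief is true and held, but in nearby worlds
# where the train is late (p false), the stopped clock still reads 2 pm,
# so the subject would believe p anyway. Condition 3 fails.
print(tracks_truth((True, True), nearby_p_false=[True], nearby_p_true=[True]))  # False
```

On this picture, the Epstein skeptics are saying condition 3 was never secured: if the suicide ruling were false, our methods would have produced the same report anyway.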
There is an obvious flaw in such thinking. Almost any conspiracy theory could be considered viable if the truth had to be tracked to that extent. Even if we had the resources and technology to track the truth to the extent necessary to be certain about everything, according to the high standards of truth tracking, we lack the political will to give up our privacy. To some degree, then, we’ll have to settle for some amount of unknowing. We have to offer our best guesses about life. In the final analysis, it all comes back to probabilities. We can talk about what is most probable. We can include the weaknesses involved in our best guesses as we weigh what is involved in determining those probabilities.
We talked about Captain Kirk, but not Spock. Spock always seemed to be able to calculate the odds of survival in almost any situation. Didn’t he? To calculate what is likely to be true, we use something called Bayes’ Theorem. I won’t go into any equations here, but using Bayes’ formula will be fair game for researchers using the CounterChecker platform.
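For readers who do want a peek at the math, here is a minimal sketch of a single Bayesian update in Python. The numbers, the function name and the scenario are purely illustrative assumptions of mine, not the actual CounterChecker design.

```python
# A minimal Bayesian update: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
# Everything here (names, numbers) is illustrative, not a real API.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return the posterior probability of hypothesis H given evidence E."""
    numerator = likelihood * prior                       # P(E|H) * P(H)
    evidence = numerator + false_positive_rate * (1 - prior)  # total P(E)
    return numerator / evidence

# Suppose a claim starts at 50% credibility, and a source confirms true
# claims 90% of the time but also "confirms" false ones 20% of the time:
posterior = bayes_update(prior=0.5, likelihood=0.9, false_positive_rate=0.2)
print(round(posterior, 3))  # 0.818
```

One confirmation moves the claim from a coin flip to roughly 82% credibility, and the same formula can be applied again as each new piece of evidence arrives.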
Sadly, what this all means is that my truth machine is just a tool. It’ll be helpful, but it will have to settle for answers in terms of probabilities, more often than providing certainties. Reality matters and so do probabilities. Probabilities are a reality for predicting the future. They are a best guess. And when it comes to maximizing our awesomeness, best guesses may have to suffice, more often than not.
So now, let’s take a step back and look again at the fact-checking industry. How is it conducted? An organization gives us a rating based on its research. It tells us whether someone is lying or whether they are wrong. The answer is usually presented to us as yes, no, or partially true. The result rarely involves admitting there is any probability that the rating itself is wrong.
One of the first things that sets off the personal BS-o-meter I have inside my head is when people say some point is “baseless.” Where is the language of probability? Where is their fair calculation? Did we track the methods used to determine what they say is true? If we did, which certainty types were involved? Was the reasoning probabilistic? Evidentiary? Were the evidence and the conclusion drawn from it tied to causes that justify the conclusion? Did it consider the reliability of the methods used? To what extent was the truth actually tracked? Reality isn’t as black and white as these organizations tend to make it look. If we really care about the truth, we should be questioning every step along the way. To what extent are these fact-check organizations actually doing that?
Probably very little. But if the truth matters, and I think it does, then we need to raise the standard. Confidence in big tech will continue to decline until we do. Conspiracy theories and bad information will continue to flourish if we don’t make a change. Please help me finance the CounterChecker. We’ll need to raise $1.4 million in grants and donations to get it started. Development of the platform is expected to take about nine months. If you can’t contribute yourself, share our podcasts and blogs with those who might be able to. There are always ways to help. We want volunteers as well. The Pamalogy Society’s volunteers will help incubate worthy programs, and you could be one of them. If you have skills and experience in any specific area, let us know. We’ll help you maximize your impact in the world.
Today, we talked about some basic concepts in discerning probable truth, and I let you know that “certainty” and “fact” are nice goals, but not necessarily realistic ones when it comes to applied awesomeology. Maybe we should refer to the CounterChecker as a “Best Guess Machine” instead of a “Truth Machine.” Likewise, we should refer to the existing so-called “fact-check organizations” as “opinion organizations.” I’m sure they believe their own conclusions, but how often do they produce something we could call “knowledge,” “fact” or “certainty”? Their designation is really very misleading.
Until we actually start checking their content, we won’t be able to keep them accountable. Next time, I’ll address some of the practical challenges we’ll face in doing just that and in producing results that everyone will agree are fair. This will lead into how the machine is likely to be used. You won’t want to miss it!
Thank you once again for listening. Our blogcast is the Pamalogy Society’s transcript, so if you are a listener, be sure to visit our web site. You’ll enjoy links and data on our web page that you won’t get otherwise. Ciao!
URL for sharing this transcript page: https://pamalogy.com/2021/12/13/certainty/
URL for sharing this podcast: https://player.captivate.fm/episode/d31fae9f-f36a-41b6-8a5d-44fd36417a52
URL for sharing just the audio file: https://podcasts.captivate.fm/media/f5186ca0-0a31-496e-bf4f-65301fd18011/episode4-certainty.mp3
Previous: Words Hurt
Next Up: Probability