
## Search found 4 matches


You're talking about inductive reasoning. The idea that a pattern in past observations can be extended to predict future observations. Hypotheses are the patterns, proposed because of past observations.

It's not just used in science. It's the basis on which we get through pretty much every aspect of our lives. On a small everyday scale we constantly have to try to predict future experiences/observations, and we do it on the basis of our past experiences. Is it justified? Yes. It is justified because it has so far been useful. It seems, so far, to have worked. So we might as well keep doing it until it stops working. Utility is the justification.

Of course, this type of reasoning is formally invalid. You cannot say that the antecedent is true because the consequent is.

The antecedent is a hypothesis based on previous observations. It is never asserted to be true with absolute certainty. It is proposed to be probable because the consequent continues the pattern of past observations. That is why scientific theories can never be proved true.
I don't have much experience of the use of Bayesian reasoning, but I would say that it is a mathematically formalised and quantified method for applying the principle of inductive reasoning. Inductive reasoning gives you the general qualitative idea that the reliability of a pattern of past observations is strengthened by future observations that fit the pattern and weakened or destroyed by those that don't fit the pattern. Bayesian reasoning seems to allow you to actually quantify - to attach probabilities to - the strength of the pattern. But, as I say, my experience of Bayesian reasoning is limited, so I may be wrong!
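To make that quantitative point concrete, here's a minimal sketch of Bayesian updating in Python. All the numbers (priors, likelihoods) are made up purely for illustration; the point is just that repeated pattern-fitting observations push the probability of the hypothesis upward via Bayes' theorem.

```python
def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H|E) via Bayes' theorem.

    P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
    """
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

# Start agnostic about hypothesis H, then see five observations that fit the pattern.
p = 0.5
for _ in range(5):
    # Illustrative likelihoods: if H is true, a fitting observation is very
    # likely (0.9); if H is false, it could still happen by chance (0.3).
    p = update(p, 0.9, 0.3)

print(round(p, 3))  # confidence in H grows with each fitting observation
```

Each confirming observation strengthens the pattern, exactly as the qualitative inductive picture suggests - but the probability only approaches 1, it never reaches it, which matches the point that theories can't be proved true.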

James:

Bayesian reasoning is merely saying, "it is more probable than not, therefore it is true."

This is not my understanding at all. If by "true" you mean "certain" then the incorrectness of your statement is obvious. Your statement would then amount to this: "If probability > 0.5 then probability = 1". Clearly the conclusion does not necessarily follow from the premise.

If you mean something else, you'll have to explain it.

In your example of people being "murdered", do you mean people being executed on flimsy evidence? If so, how does this relate to Bayesian reasoning? In a legal setting, there is the concept of "proof beyond reasonable doubt" because the probability of guilt can never be shown to be 1.
James:

It only takes a single false incident for a hypothesis to be invalidated. That is why they have falsification.

That's true in principle. But in practical situations it's not generally possible to be 100% certain that what you have observed is a falsifying event. And the more previous supporting evidence there is for the hypothesis the more certain you have to be that you have indeed witnessed a falsification before abandoning the hypothesis.

The recent supposed measurement of neutrinos travelling slightly faster than light is a great example. They carefully checked their measurements and still seemed to have discovered faster-than-light travel. But Einstein's Relativity still wasn't ditched because it is such a well supported theory. Sure enough, it recently seems to have turned out that there were errors in the measurement process after all.

---

By the way, I did some reading about Bayesian reasoning and I think I know why you mentioned "murder" and the legal system. I didn't realize before now that in some court cases attempts have apparently been made to get the jury members to explicitly use Bayesian methods to work out the probability of the defendant being guilty. No matter how logically sound the methods might be, I can see how this could cause problems! Trials don't use juries because of the mathematical or analytical skills of the general public. They use them out of a democratic sense that we should be judged by our peers.
Wowbagger said:
...Popper was right to notice that there's a big asymmetry: observations that go against your hypothesis have a much stronger (negative) impact on the degree of certainty one should attach to a hypothesis. Because most of the time the prior probability for a plausible hypothesis starts out relatively high already, so there's not much "surprise value" in observations that confirm it. Whereas, if your hypothesis predicts things wrongly, this should drastically change the certainty you assign to it.

I only just read this properly and wanted to make an observation about it.

It seems to me the asymmetry that Popper was concerned about is a natural consequence of the fact that the hypothesis, in order to attain the status of hypothesis in the first place, has already been supported, directly or indirectly, by previous observations. So the fact that there is relatively little surprise value in confirming observations is a reflection of the fact that lots of such observations have already been made and packaged up into a prior probability.
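The asymmetry can be sketched numerically in Bayesian terms. Again, all the likelihood numbers below are invented for illustration: starting from a high prior, a confirming observation barely moves the probability, while a single strongly disconfirming observation drops it sharply.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' theorem (see comments for the formula)."""
    # P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.95  # a well-supported hypothesis

# Confirming observation: predicted by H (0.9), but also fairly likely
# even if H were false (0.5) - low "surprise value".
after_confirm = update(prior, 0.9, 0.5)

# Disconfirming observation: nearly ruled out by H (0.05), but expected
# if H is false (0.8) - high surprise value.
after_disconfirm = update(prior, 0.05, 0.8)

print(round(after_confirm, 3))     # only a small gain over 0.95
print(round(after_disconfirm, 3))  # a large drop below 0.95
```

So the confirming observation nudges the probability up by a couple of percentage points, while the disconfirming one roughly cuts it in half - which is the asymmetry in quantified form.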

Maybe not a particularly interesting or controversial observation, but I thought I'd make it anyway!