# The Varieties of Inconsistency - a technical note

Extracted and adapted for the Web from "Value and Belief", a PhD thesis accepted by the University of Bristol, 2003.

Topics: deductive and inductive logic, foundations of probability theory

### Abstract

Bayesians, emboldened by the Cox theorem, claim that all non-Bayesian systems of reasoning under uncertainty are inconsistent. Proponents of competing systems, such as orthodox (Neyman-Pearson) statistics, might reply, "I cannot derive a contradiction within this system, so it is not inconsistent." This is a short discussion of what logicians really mean by inconsistency.

In deductive logic we are used to the idea that formulae of the form (P & not-P) are contradictions and that a sentence or set of sentences "contains a contradiction" if such a formula can be derived from it. This is only one of the ways in which a formal system of reasoning can exhibit inconsistency. Given the aim of reasoning consistently, we should be equally wary of systems in which one can prove statements of the form (B"P" & B"not-Q"), where the predicate B means "is true in language L" and P and Q are logically equivalent sentences of language L. The formula (B"P" & not-B"Q"), where the letters have the same interpretation, would be equally troublesome. The same holds for other interpretations of the predicate B, such as "is impossible when interpreted in language L" or "is incompatible with the axioms and rules of system L".
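As an illustration (a hypothetical sketch, not taken from the thesis), the following Python snippet builds a pair of logically equivalent formulas and a purely syntactic predicate B that accepts one but not the other, exhibiting exactly the troublesome pattern described above:

```python
from itertools import product

# Two formulas over variables p, q, represented as Boolean functions.
# P = p or q;  Q = not (not p and not q)  -- logically equivalent by De Morgan.
P = lambda p, q: p or q
Q = lambda p, q: not (not p and not q)

def equivalent(f, g, n_vars=2):
    """Check logical equivalence by exhaustive truth-table comparison."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n_vars))

# A purely syntactic "belief" predicate: it accepts a formula only if its
# source text matches a fixed pattern.  Such a predicate can assert B"P"
# while denying B"Q" even though P and Q are logically equivalent.
def B(formula_text):
    return formula_text == "p or q"

assert equivalent(P, Q)                    # P and Q are logically equivalent
assert B("p or q")                         # yet B accepts one...
assert not B("not (not p and not q)")      # ...and rejects the other
```

The point of the sketch is that B's verdict depends on the surface syntax of the sentence rather than on its truth conditions, which is precisely what licenses the inconsistent pair of attitudes.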

Some forms of inconsistency cannot be shown in the language of a formal system itself because they are implicit in the semantics of those systems. Take a system of propositional reasoning that always yields A as a conclusion, never yielding not-A. If we are free to give the propositional symbols any meaning we like, then there is an inconsistency here even though no contradictory formula will ever be produced. This is because we could use A to mean "It will rain on Sunday", arriving at the conclusion that it will rain on Sunday, and then in a separate application of the system use A to mean "It will not rain on Sunday", in which case we arrive at the contrary conclusion. Some systems of fuzzy logic, which assume that any two propositions A and B are maximally correlated, have an implicit inconsistency very similar to this (Cheeseman, 1986).
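Cheeseman's point can be illustrated with the min rule, a common truth-functional fuzzy conjunction (the choice of this particular rule is my assumption for the sketch, not a quotation from his paper):

```python
# Truth-functional fuzzy conjunction: the common min rule.
def fuzzy_and(t_a, t_b):
    return min(t_a, t_b)

t_A = 0.5
t_notA = 1 - t_A

# Because the rule looks only at the two truth values, it treats every pair
# of propositions as if they were maximally correlated.  "A and not-A"
# therefore receives truth value 0.5, even though any consistent
# probability model must assign P(A and not-A) = 0.
print(fuzzy_and(t_A, t_notA))   # prints 0.5, not 0
```

No contradictory formula is ever derived; the inconsistency lives in the semantics, which quietly licenses a nonzero degree of truth for a proposition conjoined with its own negation.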

Another form of inconsistency involves deriving conflicting results from the same information. Imagine that we have a problem, which might be one of inference or of decision, and a body of information expressed as sentences of propositional calculus from which an answer has to be derived. We create a formal procedure that gives us an answer when we feed in the input information. By an "answer," I do not mean that the procedure always gives a solution to the problem. It might tell us that there is not enough information to decide between a range of alternatives, or that, of a set of available actions, we may as well choose any of them.

Now imagine that we make a trivial change to the input, so that the propositional content of the input information is unaffected. For example, we might replace A&B with B&A, or (A or B) with (B or A). If this makes a nontrivial difference to the answer then that shows that our conclusions are dependent on some arbitrary element in addition to the actual input. In so far as this arbitrary feature can be used to draw contrary conclusions from the same consistent set of premises, our system of inference or decision is inconsistent. This is another form of inconsistency that a system can have even when it cannot derive a contradictory formula from consistent premises.
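A deliberately broken toy procedure makes this concrete (the procedure is invented for illustration and does not correspond to any real system): it answers by returning the first conjunct of its input, so A&B and B&A yield different answers despite having identical propositional content.

```python
# A toy "decision procedure" whose answer depends on the surface order of a
# conjunction, not on its propositional content.  The premise is given as a
# tuple of conjunct names, e.g. ('A', 'B') for A&B.
def decide(premise):
    return premise[0]   # answers with whichever conjunct happens to come first

# A&B and B&A say exactly the same thing, yet the answers differ:
assert decide(('A', 'B')) == 'A'
assert decide(('B', 'A')) == 'B'
# The conclusion depends on an arbitrary syntactic feature of the input,
# which is the mark of this form of inconsistency.
```

Since the arbitrary feature (conjunct order) can be used to extract contrary conclusions from one consistent body of premises, the procedure is inconsistent even though it never derives a contradictory formula.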

The motivating requirement behind all these interpretations of "inconsistency" is that different procedures for answering the same question, given the same information, should yield the same result. When we consider procedures of reasoning at a purely formal level, this tells us to avoid axioms from which both P and not-P are derivable. When we are reasoning about statements of a language in a meta-language, this requirement tells us to avoid axioms from which both B"P" and B"not-P" are derivable for certain predicates B. When we consider the whole process of formal reasoning, including not only the formal rules but the procedure for interpreting the semantics of the formal system, the general requirement of consistency produces the constraint that exchanging the meanings of A and B, or of A and not-A, should make no difference to the interpreted conclusion.

The Cox theorem and related work use considerations of consistency such as these to justify the use of probabilities as a logic of induction.
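One small consistency property that probabilistic semantics delivers automatically can be checked directly (a sketch of the general idea only, not the Cox proof itself): when degrees of belief are assigned over models rather than over syntax, logically equivalent sentences necessarily receive the same probability.

```python
from itertools import product

def probability(formula, n_vars=2):
    """Probability of a Boolean formula under a uniform distribution over models."""
    worlds = list(product([False, True], repeat=n_vars))
    return sum(formula(*w) for w in worlds) / len(worlds)

P = lambda p, q: p or q
Q = lambda p, q: not (not p and not q)   # logically equivalent to P

# Probability is a function of the set of models in which a sentence is
# true, so equivalent sentences cannot be assigned different degrees of
# belief -- the kind of consistency the preceding sections demand.
assert probability(P) == probability(Q) == 0.75
```

The uniform distribution here is an arbitrary choice for the sketch; the equality holds for any distribution over models, since equivalent sentences pick out the same set of models.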

### References

Cheeseman, P., 1986. Probabilistic versus Fuzzy Reasoning. In: J. F. Lemmer and L. N. Kanal, eds. Uncertainty in Artificial Intelligence. Volume 1. Amsterdam: North-Holland.