
Second-order logic


In mathematical logic, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. In addition to the structure of first-order logic, second-order logic also contains variables in predicate positions (rather than only in term positions, as in first-order logic), and quantifiers binding them. The most general second-order logic includes these variables and quantifiers for all finitary relations on the domain and for all functions of finitely many variables taking values in the domain. A more restricted version of second-order logic includes quantifiers and variables only for unary relations on the domain.

For example, in second-order logic one can write

∀F (F(jones) ∨ ¬F(jones))

which might express the principle of bivalence with respect to Jones: For every property, either Jones has it or he doesn't.

As another example, we can write in first-order logic a sentence like:

∃x P(x)

but we cannot do the same with the predicate. That is, the following expression:

∃P P(x)

is not a well-formed formula of first-order logic. But it is a legitimate formula of second-order logic.

Thus, for example, if the domain is the set of all real numbers, one can assert in first-order logic the existence of an additive inverse of each real number by writing

∀x ∃y (x + y = 0)

but one needs second-order logic to assert the least-upper-bound property for sets of real numbers:

∀A ⊆ R [...]

and insert in place of the dots a statement that if A is nonempty and has an upper bound in R then A has a least upper bound in R.
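One way to fill in the dots, as a sketch using the same notation, is

∀A ⊆ R [ (∃x (x ∈ A) ∧ ∃z ∀x (x ∈ A → x ≤ z)) → ∃s (∀x (x ∈ A → x ≤ s) ∧ ∀z (∀x (x ∈ A → x ≤ z) → s ≤ z)) ]

Here the initial quantifier ranges over sets of real numbers, which is precisely what first-order logic cannot supply.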

In a second-order logic that permits quantifying over functions, it is possible to write formal sentences which mean "the domain is finite" or "the domain is of countable cardinality." To say that the domain is finite, use the sentence which says that every injective function on the domain is surjective, as sketched below. To say that the domain has countable cardinality, use the sentence which says that there is a bijection between every two infinite subsets of the domain. It is not possible to characterize finiteness or countability in first-order logic.
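A sketch of the finiteness sentence, with the function variable f ranging over functions from the domain to itself:

∀f [ ∀x ∀y (f(x) = f(y) → x = y) → ∀y ∃x (f(x) = y) ]

The antecedent says that f is injective and the consequent that f is surjective; the sentence holds in exactly the finite domains.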

Why second-order logic is not reducible to first-order logic

An optimist might attempt to reduce the second-order theory of the real numbers to the first-order theory in the following way. Expand the domain from the set of all real numbers to a two-sorted domain, with the second sort containing all “sets of” real numbers. Add a new binary predicate to the language: the membership relation. Then sentences that were second-order become first-order, with the former second-order quantifiers ranging over the second sort instead. This reduction can be attempted in a one-sorted theory by adding unary predicates that tell whether an element is a number or a set, and taking the domain to be the union of the set of real numbers and the power set of the real numbers.
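For instance, under this translation the least-upper-bound axiom sketched earlier becomes an ordinary first-order sentence of the two-sorted language, with A now a variable of the set sort and ∈ the new membership predicate:

∀A [ (∃x (x ∈ A) ∧ ∃z ∀x (x ∈ A → x ≤ z)) → ∃s (∀x (x ∈ A → x ≤ s) ∧ ∀z (∀x (x ∈ A → x ≤ z) → s ≤ z)) ]

Syntactically this is first-order; the second-order content now lies entirely in the claim about what the set sort contains.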

But notice that the domain was asserted to include all sets of real numbers. That requirement has not been reduced to a first-order sentence! Might there be some way to accomplish the reduction? The classic Löwenheim-Skolem theorem entails that there is not. That theorem implies that there is some countably infinite subset of R, whose members we will call internal numbers, and some countably infinite set of sets of internal numbers, whose members we will call "internal sets", such that the domain consisting of internal numbers and internal sets satisfies all of the first-order sentences satisfied by the domain of real-numbers-and-sets-of-real-numbers. In particular, it satisfies a sort of least-upper-bound axiom that says, in effect:

Every nonempty internal set that has an internal upper bound has a least internal upper bound.

Countability of the set of all internal numbers (in conjunction with the fact that they form a densely ordered set) implies that that set does not satisfy the full least-upper-bound axiom. Countability of the set of all internal sets implies that it is not the set of all subsets of the set of all internal numbers (since Cantor's theorem implies that the set of all subsets of a countably infinite set is an uncountably infinite set).

Thus the second-order theory of the real numbers cannot be reduced to a first-order theory, in the sense that the second-order theory of the real numbers has only one model (up to isomorphism) but the corresponding first-order theory has many models.

A more extreme example: there is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and which has no model if the continuum hypothesis does not hold. This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.

Additional limitations of second-order logic are described in the next section.

Second-order logic and metalogical results

It is a corollary of Gödel's incompleteness theorem that there is no notion of provability for second-order formulas, under the standard interpretation of the language (standard semantics), that simultaneously satisfies these three desired attributes:

  • (Soundness) Every provable second-order sentence is universally valid, i.e., true in all domains.
  • (Completeness) Every universally valid second-order formula is provable.
  • ("Effectiveness") There is a proof-checking algorithm. (This third condition is often taken so much for granted that it is not explicitly stated.)

This is sometimes expressed by saying that second-order logic does not admit a complete proof theory.

In this respect second-order logic differs from first-order logic, and this is at least one of the reasons logicians have shied away from its use. (Quine occasionally pointed to this as a reason for thinking of second-order logic as not logic, properly speaking.) As George Boolos has pointed out, though, this incompleteness enters only with polyadic second-order logic: logic quantifying over n-place predicates for n > 1. Monadic second-order logic, restricted to one-place predicates, is not only complete and consistent but decidable; that is, a proof of every valid sentence is not only possible but can be found by a mechanical method. In this respect, monadic second-order logic fares better than polyadic first-order logic: monadic second-order logic is complete, consistent and decidable, whereas polyadic first-order logic, though consistent and complete, is not decidable (see the halting problem).

In 1950, Leon Henkin gave adequacy (i.e., soundness and completeness) and compactness proofs for second-order logic with Henkin semantics. The only difference between standard and Henkin semantics is that in Henkin semantics the domain of the predicate variables is an arbitrary collection of sets of individuals of the domain rather than the set of all such sets. His proof proceeds as it does for the first-order functional calculus. Both results were contained in his dissertation.

The history and disputed value of second-order logic

When predicate logic was introduced to the mathematical community by Frege (and independently, and more influentially, by Peirce, who coined the term second-order logic), Frege did use different variables to distinguish quantification over objects from quantification over properties and sets; but he did not see himself as doing two different kinds of logic. After the discovery of Russell's paradox it was realized that something was wrong with his system. Eventually logicians found that restricting Frege's logic in various ways, to what is now called first-order logic, eliminated this problem: sets and properties cannot be quantified over in first-order logic alone. The now-standard hierarchy of orders of logics dates from this time.

It was found that set theory could be formulated as an axiomatized system within the apparatus of first-order logic (at the cost of several kinds of completeness, but nothing so bad as Russell's paradox), and this was done (see Zermelo-Fraenkel set theory), as sets are vital for mathematics. Arithmetic, mereology, and a variety of other powerful logical theories could be formulated axiomatically without appeal to any more logical apparatus than first-order quantification, and this, along with Gödel and Skolem's adherence to first-order logic, led to a general decline in work in second (or any higher) order logic.

This rejection was actively advanced by some logicians, most notably W. V. Quine. Quine advanced the view that in predicate-language sentences like Fx the "x" is to be thought of as a variable or name denoting an object and hence can be quantified over, as in "For all things, it is the case that . . ." but the "F" is to be thought of as an abbreviation for an incomplete sentence, not the name of an object (not even of an abstract object like a property). For example, it might mean " . . . is a dog." But it makes no sense to think we can quantify over something like this. (Such a position is quite consistent with Frege's own arguments on the concept-object distinction). So to use a predicate as a variable is to have it occupy the place of a name which only individual variables should occupy. This reasoning has been rejected by Boolos.

In recent years second-order logic has made something of a recovery, buoyed by George Boolos' interpretation of second-order quantification as plural quantification over the same domain of objects as first-order quantification. Boolos furthermore points to the nonfirstorderizability of sentences such as "Some critics admire only each other" and "Some of Fianchetto's men went into the warehouse unaccompanied by anyone else", which he argues can only be expressed by the full force of second-order quantification. (In fact, generalized quantification and partially ordered, or branching, quantification suffice to express a certain class of nonfirstorderizable sentences without appealing to second-order quantification.)

Applications to complexity

The expressive power of various forms of second-order logic on finite structures is intimately tied to computational complexity theory. The field of descriptive complexity studies which computational complexity classes can be characterized by the power of the logic needed to express languages in them. In particular:

  • NP is the set of languages expressible by existential second-order logic.
  • co-NP is the set of languages expressible by universal second-order logic.
  • PH is the set of languages expressible by second-order logic.
  • PSPACE is the set of languages expressible by second-order logic with an added transitive closure operator.
  • EXPTIME is the set of languages expressible by second-order logic with an added least fixed point operator.

Relationships among these classes directly impact the relative expressiveness of the logics; for example, if PH=PSPACE, then adding a transitive closure operator to second-order logic does not make it any more expressive.
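A standard illustration of the first of these characterizations (Fagin's theorem) is graph 3-colorability, an NP problem: a graph with edge relation E is 3-colorable exactly when it satisfies the existential second-order sentence

∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x)) ∧ ∀x ∀y (E(x,y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y))) ]

The three existentially quantified unary predicates play the role of a guessed coloring, and the first-order part verifies it, mirroring the guess-and-check character of NP.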

Power of the existential fragment

The existential fragment (EMSO) of monadic second-order logic (MSO) is MSO without universal second-order quantifiers and without negative occurrences of existential second-order quantifiers. Over words w ∈ Σ*, every MSO formula can be converted into an equivalent deterministic finite automaton, and every such automaton can in turn be converted into an EMSO formula. Thus EMSO and MSO are equivalent over words. For trees as input, this result holds as well. However, over finite pictures (two-dimensional words, Σ++), this property no longer holds, since the languages recognized by tiling systems are not closed under complement. Since a universal quantifier is equivalent to a negated existential quantifier, which cannot be expressed, alternations of universal and existential quantifiers generate bigger and bigger classes of languages over Σ++.
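As a small illustration of the word case (a sketch, using an illustrative property not discussed above), the MSO-definable language over {a, b} consisting of the words with an even number of a's is recognized by a two-state deterministic finite automaton:

  # Two-state DFA for the MSO-definable language "the number of a's is even".
  def accepts_even_number_of_as(word):
      state = 0                  # 0 = even number of a's seen so far, 1 = odd
      for symbol in word:
          if symbol == "a":
              state = 1 - state  # reading an 'a' flips the parity
      return state == 0          # accept exactly when the parity is even

  # accepts_even_number_of_as("abba")  -> True   (two a's)
  # accepts_even_number_of_as("aba")   -> False  (three a's)

The same language is defined by an EMSO formula that existentially quantifies a set of positions recording the parity of a's seen so far, which is the pattern behind the general equivalence stated above.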
