Modal Logic

A modal is an expression (like ‘necessarily’ or ‘possibly’) that is used to qualify the truth of a judgement. Modal logic is, strictly speaking, the study of the deductive behavior of the expressions ‘it is necessary that’ and ‘it is possible that’. However, the term ‘modal logic’ may be used more broadly for a family of related systems. These include logics for belief, for tense and other temporal expressions, for the deontic (moral) expressions such as ‘it is obligatory that’ and ‘it is permitted that’, and many others. An understanding of modal logic is particularly valuable in the formal analysis of philosophical argument, where expressions from the modal family are both common and confusing. Modal logic also has important applications in computer science.

1. What is Modal Logic?

Narrowly construed, modal logic studies reasoning that involves the use of the expressions ‘necessarily’ and ‘possibly’. However, the term ‘modal logic’ is used more broadly to cover a family of logics with similar rules and a variety of different symbols.

A list describing the best known of these logics follows.

Logic            Symbols   Expressions Symbolized
Modal Logic      □         It is necessary that ..
                 ◊         It is possible that ..
Deontic Logic    O         It is obligatory that ..
                 P         It is permitted that ..
                 F         It is forbidden that ..
Temporal Logic   G         It will always be the case that ..
                 F         It will be the case that ..
                 H         It has always been the case that ..
                 P         It was the case that ..
Doxastic Logic   Bx        x believes that ..

2. Modal Logics

The most familiar logics in the modal family are constructed from a weak logic called K (after Saul Kripke). Under the narrow reading, modal logic concerns necessity and possibility. A variety of different systems may be developed for such logics using K as a foundation. The symbols of K include ‘~’ for ‘not’, ‘→’ for ‘if...then’, and ‘□’ for the modal operator ‘it is necessary that’. (The connectives ‘&’, ‘∨’, and ‘↔’ may be defined from ‘~’ and ‘→’ as is done in propositional logic.) K results from adding the following to the principles of propositional logic.
Necessitation Rule:   If A is a theorem of K, then so is □A.

Distribution Axiom:  □(A→B) → (□A→□B).

(In these principles we use ‘A’ and ‘B’ as metavariables ranging over formulas of the language.) According to the Necessitation Rule, any theorem of logic is necessary. The Distribution Axiom says that if it is necessary that if A then B, then if necessarily A then necessarily B.

The operator ◊ (for ‘possibly’) can be defined from □ by letting ◊A = ~□~A. In K, the operators □ and ◊ behave very much like the quantifiers ∀ (all) and ∃ (some). For example, the definition of ◊ from □ mirrors the equivalence of ∀xA with ~∃x~A in predicate logic. Furthermore, □(A&B) entails □A&□B and vice versa; while □A∨□B entails □(A∨B), but not vice versa. This reflects the patterns exhibited by the universal quantifier: ∀x(A&B) entails ∀xA&∀xB and vice versa, while ∀xA∨∀xB entails ∀x(A∨B) but not vice versa. Similar parallels between ◊ and ∃ can be drawn. The basis for this correspondence between the modal operators and the quantifiers will emerge more clearly in the section on Possible Worlds Semantics.
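
Since the truth clauses that underwrite this analogy appear only in Section 6, a small illustration may help here. The following sketch (in Python) treats □ as a universal and ◊ as an existential quantifier over a three-world set chosen purely for illustration, and checks the entailment patterns just described by brute force; it is a toy model, not part of the entry's formal development.

from itertools import product

worlds = range(3)                       # three arbitrary possible worlds (an assumption of this sketch)

def box(pred):
    """□A: A holds at every world (mirrors the universal quantifier)."""
    return all(pred(w) for w in worlds)

def diamond(pred):
    """◊A, defined as ~□~A (mirrors the existential quantifier)."""
    return not box(lambda w: not pred(w))

# □(A&B) is equivalent to □A&□B, and □A∨□B entails □(A∨B), on every valuation:
for a_vals, b_vals in product(product([True, False], repeat=3), repeat=2):
    A = lambda w: a_vals[w]
    B = lambda w: b_vals[w]
    assert box(lambda w: A(w) and B(w)) == (box(A) and box(B))
    if box(A) or box(B):
        assert box(lambda w: A(w) or B(w))

# ...but □(A∨B) does not entail □A∨□B: let A hold only at world 0, B only elsewhere.
A = lambda w: w == 0
B = lambda w: w != 0
assert box(lambda w: A(w) or B(w)) and not (box(A) or box(B))
assert diamond(A) and not box(A)        # A is possible here without being necessary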

The system K is too weak to provide an adequate account of necessity. The following axiom is not provable in K, but it is clearly desirable.

(M)  □A→A
(M) claims that whatever is necessary is the case. Notice that (M) would be incorrect were □ to be read ‘it ought to be that’, or ‘it was the case that’. So the presence of axiom (M) distinguishes modal from other logics in the modal family. A basic modal logic M results from adding (M) to K. (Some authors call this system T.)

Many logicians believe that M is still too weak to correctly formalize the logic of necessity and possibility. They recommend further axioms to govern the iteration, or repetition of modal operators. Here are two of the most famous iteration axioms:

(4)  □A→□□A

(5)  ◊A→□◊A

S4 is the system that results from adding (4) to M. Similarly S5 is M plus (5). In S4, the sentence □□A is equivalent to □A. As a result, any string of boxes may be replaced by a single box, and the same goes for strings of diamonds. This amounts to the idea that iteration of the modal operators is superfluous. Saying that A is necessarily necessary is considered a uselessly long-winded way of saying that A is necessary. The system S5 has even stronger principles for simplifying strings of modal operators. In S4, a string of operators of the same kind can be replaced by that operator; in S5, strings containing both boxes and diamonds are equivalent to the last operator in the string. So, for example, saying that it is possible that A is necessary is the same as saying that A is necessary. A summary of these features of S4 and S5 follows.

S4:  □□...□ = □  and  ◊◊...◊ = ◊

S5:  00...□ = □  and  00...◊ = ◊, where each 0 is either □ or ◊
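
Read as string-rewriting rules, these reduction laws can be stated mechanically. The sketch below is illustrative only; the encoding of □ as ‘B’ and ◊ as ‘D’ is an arbitrary choice, and the two functions implement exactly the two laws summarized above: runs of identical operators collapse in S4, and whole strings collapse to their last operator in S5.

def reduce_s4(prefix):
    """S4: a run of identical operators collapses to a single occurrence."""
    out = []
    for op in prefix:
        if not out or out[-1] != op:
            out.append(op)
    return "".join(out)

def reduce_s5(prefix):
    """S5: any non-empty string of boxes and diamonds collapses to its last operator."""
    return prefix[-1] if prefix else ""

# 'B' stands for □ and 'D' for ◊ in these strings of operators.
assert reduce_s4("BBB") == "B"      # □□□A reduces to □A
assert reduce_s4("BDD") == "BD"     # mixed strings only partly collapse in S4
assert reduce_s5("DB") == "B"       # 'possibly necessary' reduces to 'necessary' in S5
assert reduce_s5("BDBD") == "D"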

One could engage in endless argument over the correctness or incorrectness of these and other iteration principles for □ and ◊. The controversy can be partly resolved by recognizing that the words ‘necessarily’ and ‘possibly’ have many different uses. So the acceptability of axioms for modal logic depends on which of these uses we have in mind. For this reason, there is no one modal logic, but rather a whole family of systems built around M. The relationship between these systems is diagrammed in Section 8, and their application to different uses of ‘necessarily’ and ‘possibly’ can be more deeply understood by studying their possible world semantics in Section 6.

The system B (for the logician Brouwer) is formed by adding axiom (B) to M.

(B)  A→□◊A

It is interesting to note that S5 can be formulated equivalently by adding (B) to S4. The axiom (B) raises an important point about the interpretation of modal formulas. (B) says that if A is the case, then A is necessarily possible. One might argue that (B) should always be adopted in any modal logic, for surely if A is the case, then it is necessary that A is possible. However, there is a problem with this claim that can be exposed by noting that ◊□A→A is provable from (B). So ◊□A→A should be acceptable if (B) is. However, ◊□A→A says that if A is possibly necessary, then A is the case, and this is far from obvious. Why does (B) seem obvious, while one of the things it entails seems not obvious at all? The answer is that there is a dangerous ambiguity in the English interpretation of A→□◊A. We often use the expression ‘If A then necessarily B’ to express that the conditional ‘if A then B’ is necessary. This interpretation corresponds to □(A→B). On other occasions, we mean that if A, then B is necessary: A→□B. In English, ‘necessarily’ is an adverb, and since adverbs are usually placed near verbs, we have no natural way to indicate whether the modal operator applies to the whole conditional, or to its consequent. For these reasons, there is a tendency to confuse (B): A→□◊A with □(A→◊A). But □(A→◊A) is not the same as (B), for □(A→◊A) is already a theorem of M, and (B) is not. One must take special care that our positive reaction to □(A→◊A) does not infect our evaluation of (B). One simple way to protect ourselves is to formulate B in an equivalent way using the axiom: ◊□A→A, where these ambiguities of scope do not arise.

3. Deontic Logics

Deontic logics introduce the primitive symbol O for ‘it is obligatory that’, from which symbols P for ‘it is permitted that’ and F for ‘it is forbidden that’ are defined: PA = ~O~A and FA = O~A. The deontic analog of the modal axiom (M): OA→A is clearly not appropriate for deontic logic. (Unfortunately, what ought to be is not always the case.) However, a basic system D of deontic logic can be constructed by adding the weaker axiom (D) to K.
(D)  OA→PA
Axiom (D) guarantees the consistency of the system of obligations by insisting that when A is obligatory, A is permissible. A system which obligates us to bring about A, but doesn't permit us to do so, puts us in an inescapable bind. Although some will argue that such conflicts of obligation are at least possible, most deontic logicians accept (D).

O(OA→A) is another deontic axiom that seems desirable. Although it is wrong to say that if A is obligatory then A is the case (OA→A), still, this conditional ought to be the case. So some deontic logicians believe that D needs to be supplemented with O(OA→A) as well.

Controversy about iteration (repetition) of operators arises again in deontic logic. In some conceptions of obligation, OOA just amounts to OA. ‘It ought to be that it ought to be’ is treated as a sort of stuttering; the extra ‘ought’s do not add anything new. So axioms are added to guarantee the equivalence of OOA and OA. The more general iteration policy embodied in S5 may also be adopted. However, there are conceptions of obligation where the distinction between OA and OOA is preserved. The idea is that there are genuine differences between the obligations we actually have and the obligations we should adopt. So, for example, ‘it ought to be that it ought to be that A’ commands adoption of some obligation which may not actually be in place, with the result that OOA can be true even when OA is false.

4. Temporal Logics

In temporal logic (also known as tense logic), there are two basic operators, G for the future, and H for the past. G is read ‘it always will be that’ and the defined operator F (read ‘it will be the case that’), can be introduced by FA = ~G~A. Similarly H is read: ‘it always was that’ and P (for ‘it was the case that’) is defined by PA=~H~A. A basic system of temporal logic called Kt results from adopting the principles of K for both G and H, along with two axioms to govern the interaction between the past and future operators:
"Necessitation" Rules:  If A is a theorem then so are GA and HA.

Distribution Axioms:  G(A→B) → (GA→GB)  and  H(A→B) → (HA→HB)

Interaction Axioms:  A→GPA  and  A→HFA

The interaction axioms raise questions concerning asymmetries between the past and the future. A standard intuition is that the past is fixed, while the future is still open. The first interaction axiom (A→GPA) conforms to this intuition in reporting that what is the case (A), will at all future times, be in the past (GPA). However A→HFA may appear to have unacceptably deterministic overtones, for it claims, apparently, that what is true now (A) has always been such that it will occur in the future (HFA). However, possible world semantics for temporal logic reveals that this worry results from a simple confusion, and that the two interaction axioms are equally acceptable.
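
One way to see why both axioms are harmless is to check them on a small model. In the following sketch (a toy example, with a four-moment linear order assumed purely for illustration), G, H, F, and P are evaluated by brute force, and both interaction axioms come out true at every moment under every assignment to A.

from itertools import product

times = range(4)                    # the moments 0 < 1 < 2 < 3

def earlier(w, v):                  # the relation R, read 'earlier than'
    return w < v

def G(pred, w):                     # it always will be the case that
    return all(pred(v) for v in times if earlier(w, v))

def H(pred, w):                     # it always was the case that
    return all(pred(v) for v in times if earlier(v, w))

def F(pred, w):                     # it will be the case that, defined as ~G~
    return not G(lambda v: not pred(v), w)

def P(pred, w):                     # it was the case that, defined as ~H~
    return not H(lambda v: not pred(v), w)

# Both interaction axioms hold at every moment, for every assignment to A.
for a_vals in product([True, False], repeat=len(times)):
    A = lambda v: a_vals[v]
    for w in times:
        assert (not A(w)) or G(lambda v: P(A, v), w)    # A→GPA
        assert (not A(w)) or H(lambda v: F(A, v), w)    # A→HFA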

Note that the characteristic axiom of modal logic, (M): □A→A, is not acceptable for either H or G, since A does not follow from ‘it always was the case that A’, nor from ‘it always will be the case that A’. However, it is acceptable in a closely related temporal logic where G is read ‘it is and always will be’, and H is read ‘it is and always was’.

Depending on which assumptions one makes about the structure of time, further axioms must be added to temporal logics. A list of axioms commonly adopted in temporal logics follows. An account of how they depend on the structure of time will be found in the section Possible Worlds Semantics.

GA→GGA  and  HA→HHA

GGA→GA  and  HHA→HA

GA→FA  and  HA→PA

It is interesting to note that certain combinations of past tense and future tense operators may be used to express complex tenses in English. For example, FPA corresponds to sentence A in the future perfect tense (as in ‘20 seconds from now the light will have changed’). Similarly, PPA expresses the past perfect tense.

For a more detailed discussion of temporal logic, see the entry on temporal logic.

5. Conditional Logics

The founder of modal logic, C. I. Lewis, defined a series of modal logics which did not have □ as a primitive symbol. Lewis was concerned to develop a logic of conditionals that was free of the so-called Paradoxes of Material Implication, namely the classical theorems A→(~A→B) and B→(A→B). He introduced the symbol ‘⥽’ for "strict implication" and developed logics where neither A⥽(~A⥽B) nor B⥽(A⥽B) is provable. The modern practice has been to define A⥽B by □(A→B), and use modal logics governing □ to obtain similar results. However, the provability of such formulas as (A&~A)⥽B in such logics seems at odds with concern for the paradoxes. Anderson and Belnap (1975) have developed systems R (for Relevance Logic) and E (for Entailment) which are designed to overcome such difficulties. These systems require revision of the standard systems of propositional logic. (For a more detailed discussion of relevance logic, see the entry on relevance logic.)

David Lewis (1973) has developed special conditional logics to handle counterfactual expressions, that is, expressions of the form ‘if A were to happen then B would happen’. (Kvart (1980) is another good source on the topic.) Counterfactual logics differ from those based on strict implication because the former reject while the latter accept contraposition.

6. Possible Worlds Semantics

The purpose of logic is to characterize the difference between valid and invalid arguments. A logical system for a language is a set of axioms and rules designed to prove exactly the valid arguments statable in the language. Creating such a logic may be a difficult task. The logician must make sure that the system is sound, i.e. that every argument proven using the rules and axioms is in fact valid. Furthermore, the system should be complete, meaning that every valid argument has a proof in the system. Demonstrating soundness and completeness of formal systems is a logician's central concern.

Such a demonstration cannot get underway until the concept of validity is defined rigorously. Formal semantics for a logic provides a definition of validity by characterizing the truth behavior of the sentences of the system. In propositional logic, validity can be defined using truth tables. A valid argument is simply one where every truth table row that makes its premises true also makes its conclusion true. However, truth tables cannot be used to provide an account of validity in modal logics because there are no truth tables for expressions such as ‘it is necessary that’, ‘it is obligatory that’, and the like. (The problem is that the truth value of A does not determine the truth value for □A. For example, when A is ‘Dogs are dogs’, □A is true, but when A is ‘Dogs are pets’, □A is false.) Nevertheless, semantics for modal logics can be defined by introducing possible worlds. We will illustrate possible worlds semantics for a logic of necessity containing the symbols ~, →, and □. Then we will explain how the same strategy may be adapted to other logics in the modal family.

In propositional logic, a valuation of the atomic sentences (or row of a truth table) assigns a truth value (T or F) to each propositional variable p. Then the truth values of the complex sentences are calculated with truth tables. In modal semantics, a set W of possible worlds is introduced. A valuation then gives a truth value to each propositional variable for each of the possible worlds in W. This means that the value assigned to p for world w may differ from the value assigned to p for another world w′.

The truth value of the atomic sentence p at world w given by the valuation v may be written v(p, w). Given this notation, the truth values (T for true, F for false) of complex sentences of modal logic for a given valuation v (and member w of the set of worlds W) may be defined by the following truth clauses. (‘iff’ abbreviates ‘if and only if’.)

(~)     v(~A, w)=T  iff  v(A, w)=F.

(→)   v(A→B, w)=T   iff   v(A, w)=F or v(B, w)=T.

(5)     v(□A, w)=T   iff   for every world w′ in W, v(A, w′)=T.

Clauses (~) and (→) simply describe the standard truth table behavior for negation and material implication respectively. According to (5), □A is true (at a world w) exactly when A is true in all possible worlds. Given the definition of ◊, (namely, ◊A = ~□~A) the truth condition (5) insures that ◊A is true just in case A is true in some possible world. Since the truth clauses for □ and ◊ involve the quantifiers ‘all’ and ‘some’ (respectively), the parallels in logical behavior between □ and ∀x, and between ◊ and ∃x noted in section 2 will be expected.

Clauses (~), (→), and (5) allow us to calculate the truth value of any sentence at any world on a given valuation. A definition of validity is now just around the corner. An argument is 5-valid for a given set W (of possible worlds) if and only if every valuation of the atomic sentences that assigns the premises T at a world in W also assigns the conclusion T at the same world. An argument is said to be 5-valid iff it is valid for every non-empty set W of possible worlds.
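
The truth clauses and the definition of validity lend themselves to a brute-force check. The sketch below is only illustrative: it fixes a three-element set of worlds (strictly, 5-validity quantifies over every non-empty set of worlds), and the encoding of formulas as nested tuples is an assumption made for the sketch, not the entry's notation.

from itertools import product

W = range(3)                        # a fixed, illustrative set of three worlds

def truth(v, formula, w):
    """v maps (atom, world) pairs to truth values; formulas are atoms or tuples."""
    if isinstance(formula, str):                            # atomic sentence
        return v[(formula, w)]
    op = formula[0]
    if op == "~":
        return not truth(v, formula[1], w)
    if op == "->":
        return (not truth(v, formula[1], w)) or truth(v, formula[2], w)
    if op == "box":                                         # clause (5): true at all worlds
        return all(truth(v, formula[1], u) for u in W)
    raise ValueError(op)

def valid_for_W(premises, conclusion, atoms):
    """Valid for this W: every valuation making the premises true at a world
    makes the conclusion true at that world."""
    for vals in product([True, False], repeat=len(atoms) * len(W)):
        v = dict(zip(product(atoms, W), vals))
        for w in W:
            if all(truth(v, p, w) for p in premises) and not truth(v, conclusion, w):
                return False
    return True

# Axiom (M), □A→A, comes out valid on this semantics; its converse does not.
assert valid_for_W([], ("->", ("box", "A"), "A"), ["A"])
assert not valid_for_W([], ("->", "A", ("box", "A")), ["A"])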

It has been shown that S5 is sound and complete for 5-validity (hence our use of the symbol ‘5’). The 5-valid arguments are exactly the arguments provable in S5. This result suggests that S5 is the correct way to formulate a logic of necessity.

However, S5 is not a reasonable logic for all members of the modal family. In deontic logic, temporal logic, and others, the analog of the truth condition (5) is clearly not appropriate; furthermore there are even conceptions of necessity where (5) should be rejected as well. The point is easiest to see in the case of temporal logic. Here, the members of W are moments of time, or worlds "frozen", as it were, at an instant. For simplicity let us consider a future temporal logic, a logic where □A reads: ‘it will always be the case that’. (We formulate the system using □ rather than the traditional G so that the connections with other modal logics will be easier to appreciate.) The correct clause for □ should say that □A is true at time w iff A is true at all times in the future of w. To restrict attention to the future, the relation R (for ‘eaRlier than’) needs to be introduced. Then the correct clause can be formulated as follows.

(K)   v(□A, w)=T   iff   for every w′, if wRw′, then v(A, w′)=T.
This says that □A is true at w just in case A is true at all times after w.

Validity for this brand of temporal logic can now be defined. A frame <W, R> is a pair consisting of a non-empty set W (of worlds) and a binary relation R on W. A model <F, v> consists of a frame F, and a valuation v that assigns truth values to each atomic sentence at each world in W. Given a model, the values of all complex sentences can be determined using (~), (→), and (K). An argument is K-valid just in case any model whose valuation assigns the premises T at a world also assigns the conclusion T at the same world. As the reader may have guessed from our use of ‘K’, it has been shown that the simplest modal logic K is both sound and complete for K-validity.
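
Adapting the previous sketch to clause (K) only requires threading the relation R through the evaluation of □. In the following illustrative sketch, the three-moment ‘earlier than’ frame is an invented example.

W = [0, 1, 2]
R = {(0, 1), (0, 2), (1, 2)}            # wRw′ read: w is earlier than w′ (invented frame)

def truth(v, formula, w):
    if isinstance(formula, str):
        return v[(formula, w)]
    op = formula[0]
    if op == "~":
        return not truth(v, formula[1], w)
    if op == "->":
        return (not truth(v, formula[1], w)) or truth(v, formula[2], w)
    if op == "box":                     # clause (K): true at every world accessible from w
        return all(truth(v, formula[1], u) for u in W if (w, u) in R)
    raise ValueError(op)

v = {("A", 0): True, ("A", 1): False, ("A", 2): False}
assert truth(v, ("box", "A"), 2)        # no later moments, so □A holds vacuously at 2
assert not truth(v, ("box", "A"), 0)    # A fails at a moment later than 0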

7. Modal Axioms and Conditions on Frames

One might assume from this discussion that K is the correct logic when □ is read ‘it will always be the case that’. However, there are reasons for thinking that K is too weak. One obvious logical feature of the relation R (earlier than) is transitivity. If wRv (w is earlier than v) and vRu (v is earlier than u), then it follows that wRu (w is earlier than u). So let us define a new kind of validity that corresponds to this condition on R. Let a 4-model be any model whose frame <W, R> is such that R is a transitive relation on W. Then an argument is 4-valid iff any 4-model whose valuation assigns T to the premises at a world also assigns T to the conclusion at the same world. We use ‘4’ to describe such a transitive model because the logic which is adequate (both sound and complete) for 4-validity is K4, the logic which results from adding the axiom (4): □A→□□A to K.

Transitivity is not the only property which we might want to require of the frame <W, R> if R is to be read ‘earlier than’ and W is a set of moments. One condition (which is only mildly controversial) is that there is no last moment of time, i.e. that for every world w there is some world v such that wRv. This condition on frames is called seriality. Seriality corresponds to the axiom (D): □A→◊A, in the same way that transitivity corresponds to (4). A D-model is a K-model with a serial frame. From the concept of a D-model the corresponding notion of D-validity can be defined just as we did in the case of 4-validity. As you probably guessed, the system that is adequate with respect to D-validity is KD, or K plus (D). Not only that, but the system KD4 (that is K plus (4) and (D)) is adequate with respect to D4-validity, where a D4-model is one where <W, R> is both serial and transitive.

Another property which we might want for the relation ‘earlier than’ is density, the condition which says that between any two times we can always find another. Density would be false if time were atomic, i.e. if there were intervals of time which could not be broken down into any smaller parts. Density corresponds to the axiom (C4): □□A→□A, the converse of (4). So, for example, the system KC4, which is K plus (C4), is adequate with respect to models where the frame <W, R> is dense, KDC4 is adequate with respect to models whose frames are serial and dense, and so on.
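
On finite frames, the conditions met so far can be checked directly. A small illustrative sketch follows; the three-point frame is invented for the example.

def transitive(W, R):
    return all((w, u) in R for w in W for v in W for u in W
               if (w, v) in R and (v, u) in R)

def serial(W, R):
    return all(any((w, v) in R for v in W) for w in W)

def dense(W, R):
    return all(any((w, u) in R and (u, v) in R for u in W)
               for w in W for v in W if (w, v) in R)

W = [0, 1, 2]
R = {(0, 1), (1, 2), (0, 2)}            # a strict linear order on three points
assert transitive(W, R)
assert not serial(W, R)                 # point 2 has no successor: a 'last moment'
assert not dense(W, R)                  # nothing lies between 1 and 2, for example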

Each of the modal logic axioms we have discussed corresponds to a condition on frames in the same way. The relationship between conditions on frames and corresponding axioms is one of the central topics in the study of modal logics. Once an interpretation of the intensional operator □ has been decided on, the appropriate conditions on R can be determined to fix the corresponding notion of validity. This, in turn, allows us to select the right set of axioms for that logic.

For example, consider a deontic logic, where □ is read ‘it is obligatory that’. Here the truth of □A does not demand the truth of A in every possible world, but only in a subset of those worlds where people do what they ought. So we will want to introduce a relation R for this kind of logic as well, and use the truth clause (K) to evaluate □A at a world. However, in this case, R is not earlier than. Instead wRw′ holds just in case world w′ is a morally acceptable variant of w, i.e. a world that our actions can bring about which satisfies what is morally correct, or right, or just. Under such a reading, it should be clear that the relevant frames should obey seriality, the condition that requires that each possible world have a morally acceptable variant. The analysis of the properties desired for R makes it clear that a basic deontic logic can be formulated by adding the axiom (D) to K.

Even in modal logic, one may wish to restrict the range of possible worlds which are relevant in determining whether □A is true at a given world. For example, I might say that it is necessary for me to pay my bills, even though I know full well that there is a possible world where I fail to pay them. In ordinary speech, the claim that A is necessary does not require the truth of A in all possible worlds, but rather only in a certain class of worlds which I have in mind (for example, worlds where I avoid penalties for failure to pay). In order to provide a generic treatment of necessity, we must say that □A is true in w iff A is true in all worlds that are related to w in the right way. So for an operator □ interpreted as necessity, we introduce a corresponding relation R on the set of possible worlds W, traditionally called the accessibility relation. The accessibility relation R holds between worlds w and w′ iff w′ is possible given the facts of w. Under this reading for R, it should be clear that frames for modal logic should be reflexive. It follows that modal logics should be founded on M, the system that results from adding (M) to K. Depending on exactly how the accessibility relation is understood, symmetry and transitivity may also be desired.

A list of some of the more commonly discussed conditions on frames and their corresponding axioms along with a map showing the relationship between the various modal logics can be found in the next section.

8. Map of the Relationships Between Modal Logics

The following diagram shows the relationships between the best known modal logics, namely logics that can be formed by adding a selection of the axioms (D), (M), (4), (B) and (5) to K. A list of these (and other) axioms along with their corresponding frame conditions can be found below the diagram.
[Diagram of the relationships between these modal logics omitted.]
In this chart, systems are given by the list of their axioms. So, for example, M4B is the result of adding (M), (4), and (B) to K. In boldface, we have indicated traditional names of some systems. When system S appears below and/or to the left of S′ connected by a line, then S′ is an extension of S. This means that every argument provable in S is provable in S′, but S is weaker than S′, i.e. not all arguments provable in S′ are provable in S.

The following list indicates axioms, their names, and the corresponding conditions on the accessibility relation R, for axioms so far discussed in this encyclopedia entry.

Name    Axiom         Condition on Frames             R is...
(D)     □A→◊A         ∃u wRu                          Serial
(M)     □A→A          wRw                             Reflexive
(4)     □A→□□A        (wRv & vRu) ⇒ wRu               Transitive
(B)     A→□◊A         wRv ⇒ vRw                       Symmetric
(5)     ◊A→□◊A        (wRv & wRu) ⇒ vRu               Euclidean
(CD)    ◊A→□A         (wRv & wRu) ⇒ v=u               Unique
(□M)    □(□A→A)       wRv ⇒ vRv                       Shift Reflexive
(C4)    □□A→□A        wRv ⇒ ∃u(wRu & uRv)             Dense
(C)     ◊□A→□◊A       wRv & wRx ⇒ ∃u(vRu & xRu)       Convergent
In the list of conditions on frames, the variables ‘w’, ‘v’, ‘u’, ‘x’ and the quantifier ‘∃u’ are understood to range over W. ‘&’ abbreviates ‘and’ and ‘⇒’ abbreviates ‘if...then’.
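
On a finite frame these conditions, too, can be checked by brute force. The sketch below is illustrative only, representing R as a set of ordered pairs as in the earlier sketches.

def reflexive(W, R):
    return all((w, w) in R for w in W)

def symmetric(W, R):
    return all((v, w) in R for (w, v) in R)

def euclidean(W, R):
    return all((v, u) in R for (w, v) in R for (w2, u) in R if w == w2)

def unique(W, R):
    return all(v == u for (w, v) in R for (w2, u) in R if w == w2)

def shift_reflexive(W, R):
    return all((v, v) in R for (w, v) in R)

def convergent(W, R):
    return all(any((v, u) in R and (x, u) in R for u in W)
               for (w, v) in R for (w2, x) in R if w == w2)

W = [0, 1, 2]
R = {(w, v) for w in W for v in W}      # the total relation on W
assert all(f(W, R) for f in (reflexive, symmetric, euclidean, shift_reflexive, convergent))
assert not unique(W, R)                 # some world has more than one successor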

9. The General Axiom

The correspondence between axioms and conditions on frames may seem something of a mystery. A beautiful result of Lemmon and Scott (1977) goes a long way towards explaining those relationships. Their theorem concerned axioms which have the following form:
(G)  ◊ʰ□ⁱA → □ʲ◊ᵏA
We use the notation ‘◊ⁿ’ to represent n diamonds in a row, so, for example, ‘◊³’ abbreviates a string of three diamonds: ‘◊◊◊’. Similarly ‘□ⁿ’ represents a string of n boxes. When the values of h, i, j, and k are all 1, we have axiom (C):
(C)  ◊□A → □◊A  =  ◊¹□¹A → □¹◊¹A
The axiom (B) results from setting h and i to 0, and letting j and k be 1:
(B)  A → □◊A  =  ◊⁰□⁰A → □¹◊¹A
To obtain (4), we may set h and k to 0, set i to 1 and j to 2:
(4)  □A→□□A  =  ◊⁰□¹A → □²◊⁰A
Many (but not all) axioms of modal logic can be obtained by setting the right values for the parameters in (G).

Our next task will be to give the condition on frames which corresponds to (G) for a given selection of values for h, i, j, and k. In order to do so, we will need a definition. The composition of two relations R and R′ is a new relation R∘R′ which is defined as follows:

wR∘R′v   iff   for some u, wRu and uR′v.
For example, if R is the relation of being a brother, and R′ is the relation of being a parent, then R∘R′ is the relation of being an uncle (because w is the uncle of v iff for some person u, both w is the brother of u and u is the parent of v). A relation may be composed with itself. For example, when R is the relation of being a parent, then R∘R is the relation of being a grandparent, and R∘R∘R is the relation of being a great-grandparent. It will be useful to write ‘Rⁿ’ for the result of composing R with itself n times. So R² is R∘R, and R⁴ is R∘R∘R∘R. We will let R¹ be R, and R⁰ will be the identity relation, i.e. wR⁰v iff w=v.
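
Composition and the powers Rⁿ are straightforward to compute for finite relations. The following sketch is illustrative; the little family tree is invented data for the example.

def compose(R1, R2):
    """w (R1∘R2) v  iff  for some u, w R1 u and u R2 v."""
    return {(w, v) for (w, u1) in R1 for (u2, v) in R2 if u1 == u2}

def power(R, n, domain):
    """Rⁿ: R composed with itself n times; R⁰ is the identity on the domain."""
    result = {(w, w) for w in domain}
    for _ in range(n):
        result = compose(result, R)
    return result

people = {"Ann", "Bea", "Cal", "Dee"}
parent = {("Ann", "Bea"), ("Bea", "Cal"), ("Cal", "Dee")}

assert compose(parent, parent) == {("Ann", "Cal"), ("Bea", "Dee")}    # grandparent
assert power(parent, 3, people) == {("Ann", "Dee")}                   # great-grandparent
assert power(parent, 0, people) == {(p, p) for p in people}           # identity relation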

We may now state the Scott-Lemmon result. It is that the condition on frames which corresponds exactly to any axiom of the shape (G) is the following.

(hijk-Convergence)  wRʰv & wRʲu ⇒ ∃x (vRⁱx & uRᵏx)
It is interesting to see how the familiar conditions on R result from setting the values for h, i, j, and k according to the values in the corresponding axiom. For example, consider (5). In this case i=0, and h=j=k=1. So the corresponding condition is
wRv & wRu ⇒ ∃x (vR⁰x & uRx).
We have explained that R⁰ is the identity relation. So if vR⁰x then v=x. But ∃x (v=x & uRx) is equivalent to uRv, and so the Euclidean condition is obtained:
(wRv & wRu) ⇒ uRv.
In the case of axiom (4), h=0, i=1, j=2 and k=0. So the corresponding condition on frames is
(w=v & wR²u) ⇒ ∃x (vRx & u=x).
Resolving the identities this amounts to:
vR²u ⇒ vRu.
By the definition of R², vR²u iff ∃x(vRx & xRu), so this comes to:
∃x(vRx & xRu) ⇒ vRu,
which, by predicate logic, is equivalent to transitivity:
vRx & xRu ⇒ vRu.
The reader may find it a pleasant exercise to see how the corresponding conditions fall out of hijk-Convergence when the values of the parameters h, i, j, and k are set by other axioms.
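
The general condition can also be checked mechanically on a finite frame. The sketch below (illustrative only) states hijk-Convergence using relation powers and confirms that, with the parameter values set by axiom (4), it agrees with ordinary transitivity on a few small test frames.

def compose(R1, R2):
    return {(w, v) for (w, u1) in R1 for (u2, v) in R2 if u1 == u2}

def power(R, n, domain):
    result = {(w, w) for w in domain}
    for _ in range(n):
        result = compose(result, R)
    return result

def hijk_convergent(W, R, h, i, j, k):
    """wRʰv & wRʲu  ⇒  for some x, vRⁱx & uRᵏx."""
    Rh, Ri, Rj, Rk = (power(R, n, W) for n in (h, i, j, k))
    return all(any((v, x) in Ri and (u, x) in Rk for x in W)
               for (w, v) in Rh for (w2, u) in Rj if w == w2)

def transitive(W, R):
    return all((w, u) in R for (w, v) in R for (v2, u) in R if v == v2)

# With h=0, i=1, j=2, k=0 -- the values set by axiom (4) -- hijk-Convergence
# agrees with plain transitivity on these small test frames.
W = [0, 1, 2]
for R in [{(0, 1), (1, 2)}, {(0, 1), (1, 2), (0, 2)}, {(0, 0), (0, 1)}]:
    assert hijk_convergent(W, R, 0, 1, 2, 0) == transitive(W, R)
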

The Scott-Lemmon results provide a quick method for establishing results about the relationship between axioms and their corresponding frame conditions. Since they showed the adequacy of any logic that extends K with a selection of axioms of the form (G) with respect to models that satisfy the corresponding set of frame conditions, they provided "wholesale" adequacy proofs for the majority of systems in the modal family. Sahlqvist (1975) has discovered important generalizations of the Scott-Lemmon result covering a much wider range of axiom types.

10. Provability Logics

Modal logic has been useful in clarifying our understanding of central results concerning provability in the foundations of mathematics (Boolos, 1993). Provability logics are systems where the propositional variables p, q, r, etc. range over formulas of some mathematical system, for example Peano's system PA for arithmetic. (The system chosen for mathematics might vary, but assume it is PA for this discussion.) Gödel showed that arithmetic has strong expressive powers. Using code numbers for arithmetic sentences, he was able to demonstrate a correspondence between sentences of mathematics and facts about which sentences are and are not provable in PA. For example, he showed that there is a sentence C that is true just in case no contradiction is provable in PA and there is a sentence G (the famous Gödel sentence) that is true just in case it is not provable in PA.

In provability logics, □p is interpreted as a formula (of arithmetic) that expresses that what p denotes is provable in PA. Using this notation, sentences of provability logic express facts about provability. Suppose that ⊥ is a constant of provability logic denoting a contradiction. Then ~□⊥ says that PA is consistent and □A→A says that PA is sound in the sense that when it proves A, A is indeed true. Furthermore, the box may be iterated. So, for example, □~□⊥ makes the dubious claim that PA is able to prove its own consistency, and ~□⊥ → ~□~□⊥ asserts (correctly, as Gödel proved) that if PA is consistent then PA is unable to prove its own consistency.

Although provability logics form a family of related systems, the system GL is by far the best known. It results from adding the following axiom to K:

(GL)  □(□A→A)→□A
The axiom (4): □A→□□A is provable in GL, so GL is actually a strengthening of K4. However, axioms such as (M): □A→A, and even the weaker (D): □A→◊A are not available (nor desirable) in GL. In provability logic, provability is not to be treated as a brand of necessity. The reason is that when p is provable in an arbitrary system S for mathematics, it does not follow that p is true, since S may be unsound. Furthermore, if p is provable in S (□p) it need not even follow that ~p lacks a proof (~□~p = ◊p). S might be inconsistent and so prove both p and ~p.

Axiom (GL) captures the content of Loeb's Theorem, an important result in the foundations of arithmetic. □A→A says that PA is sound for A, i.e. that if A were proven, A would be true. (Such a claim might not be secure for an arbitrarily selected system S, since A might be provable in S and false.) (GL) claims that if PA manages to prove the sentence that claims soundness for a given sentence A, then A is already provable in PA. Loeb's Theorem reports a kind of modesty on PA's part (Boolos, 1993, p. 55). PA never insists (proves) that a proof of A entails A's truth, unless it already has a proof of A to back up that claim.

It has been shown that GL is adequate for provability in the following sense. Let a sentence of GL be always provable exactly when the sentence of arithmetic it denotes is provable no matter how its variables are assigned values to sentences of PA. Then the provable sentences of GL are exactly the sentences that are always provable. This adequacy result has been extremely useful, since general questions concerning provability in PA can be transformed into easier questions about what can be demonstrated in GL.

GL can also be outfitted with a possible world semantics for which it is sound and complete. A corresponding condition on frames for GL-validity is that the frame be transitive, finite and irreflexive.

11. Quantifiers in Modal Logic

It would seem to be a simple matter to outfit a modal logic with the quantifiers ∀ (all) and ∃ (some). One would simply add the standard (or classical) rules for quantifiers to the principles of whichever propositional modal logic one chooses. However, systems of this kind create problems which have motivated some logicians to abandon classical quantifier rules in favor of the weaker rules of free logic (Garson, 1984). The controversy over whether classical principles should be adopted continues today.

The main points of disagreement can be traced back to decisions about how to handle the domain of quantification. The simplest alternative, the fixed-domain (sometimes called the possibilist) approach, assumes a single domain of quantification that contains all the possible objects. On the other hand, the world-relative (or actualist) interpretation, assumes that the domain of quantification changes from world to world, and contains only the objects that actually exist in a given world.

The fixed-domain approach requires no major adjustments to the classical machinery for the quantifiers. Modal logics that are adequate for fixed domain semantics can usually be axiomatized by adding principles of a propositional modal logic to classical quantifier rules together with the Barcan Formula (BF) (Barcan 1946). (For an account of some interesting exceptions see Cresswell (1995)).

(BF)  ∀x□A→□∀xA.
The fixed-domain interpretation has advantages of simplicity and familiarity, but it does not provide a direct account of the semantics of certain quantifier expressions of natural language. We do not think that ‘Some man exists who signed the Declaration of Independence’ is true, at least not if we read ‘exists’ in the present tense. Nevertheless, this sentence was true in 1777, which shows that the domain for the natural language expression ‘some man exists who’ changes to reflect which men exist at different times. A related problem is that on the fixed-domain interpretation, the sentence ∀y□∃x(x=y) is valid. However, assuming that ∃x(x=y) is read: y exists, ∀y□∃x(x=y) says that everything exists necessarily. Yet it seems a fundamental feature of common ideas about modality that the existence of many things is contingent, and that different objects exist in different possible worlds.

The defender of the fixed-domain interpretation may respond to these objections by insisting that on his (her) reading of the quantifiers, the domain of quantification contains all possible objects, not just the objects that happen to exist at a given world. So the theorem ∀y□∃x(x=y) makes the innocuous claim that every possible object is necessarily found in the domain of all possible objects. Furthermore, those quantifier expressions of natural language whose domain is world (or time) dependent can be expressed using the fixed-domain quantifier ∃x and a predicate letter E with the reading ‘actually exists’. For example, instead of translating ‘Some Man exists who Signed the Declaration of Independence’ by

∃x(Mx&Sx),
the defender of fixed domains may write:
∃x(Ex&Mx&Sx),
thus ensuring the translation is counted false at the present time. Cresswell (1991) makes the interesting observation that world-relative quantification has limited expressive power relative to fixed-domain quantification. World-relative quantification can be defined with fixed domain quantifiers and E, but there is no way to fully express fixed-domain quantifiers with world-relative ones. Although this argues in favor of the classical approach to quantified modal logic, the translation tactic also amounts to something of a concession in favor of free logic, for the world-relative quantifiers so defined obey exactly the free logic rules.

A problem with the translation strategy used by defenders of fixed domain quantification is that rendering the English into logic is less direct, since E must be added to all translations of all sentences whose quantifier expressions have domains that are context dependent. A more serious objection to fixed-domain quantification is that it strips the quantifier of a role which Quine recommended for it, namely to record robust ontological commitment. On this view, the domain of ∃x must contain only entities that are ontologically respectable, and possible objects are too abstract to qualify. Actualists of this stripe will want to develop the logic of a quantifier ∃x which reflects commitment to what is actual in a given world rather than to what is merely possible.

However, recent work on actualism tends to undermine this objection. For example, Linsky and Zalta (1994) argue that the fixed-domain quantifier can be given an interpretation that is perfectly acceptable to actualists. Actualists who employ possible worlds semantics routinely quantify over possible worlds in their semantical theory of language. So it would seem that possible worlds are actual by these actualists' lights. By cleverly outfitting the domain with abstract entities no more objectionable than the ones actualists accept, Linsky and Zalta show that the Barcan Formula and classical principles can be vindicated. Note, however, that actualists may respond that they need not be committed to the actuality of possible worlds so long as it is understood that quantifiers used in their theory of language lack strong ontological import. In any case, it is open to actualists (and non-actualists as well) to investigate the logic of quantifiers with more robust domains, for example domains excluding possible worlds and other such abstract entities, and containing only the spatio-temporal particulars found in a given world. For quantifiers of this kind, world-relative domains are appropriate.

Such considerations motivate interest in systems that acknowledge the context dependence of quantification by introducing world-relative domains. Here each possible world has its own domain of quantification (the set of objects that actually exist in that world), and the domains vary from one world to the next. When this decision is made, a difficulty arises for classical quantification theory. Notice that the sentence ∃x(x=t) is a theorem of classical logic, and so □∃x(x=t) is a theorem of K by the Necessitation Rule. Let the term t stand for Saul Kripke. Then this theorem says that it is necessary that Saul Kripke exists, so that he is in the domain of every possible world. The whole motivation for the world-relative approach was to reflect the idea that objects in one world may fail to exist in another. If standard quantifier rules are used, however, every term t must refer to something that exists in all the possible worlds. This seems incompatible with our ordinary practice of using terms to refer to things that only exist contingently.

One response to this difficulty is simply to eliminate terms. Kripke (1963) gives an example of a system that uses the world-relative interpretation and preserves the classical rules. However, the costs are severe. First, his language is artificially impoverished, and second, the rules for the propositional modal logic must be weakened.

Presuming that we would like a language that includes terms, and that classical rules are to be added to standard systems of propositional modal logic, a new problem arises. In such a system, it is possible to prove (CBF), the converse of the Barcan Formula.

(CBF)  □∀xA→∀x□A.
This fact has serious consequences for the system's semantics. It is not difficult to show that every world-relative model of (CBF) must meet condition (ND) (for ‘nested domains’).
(ND)  If wRv then the domain of w is a subset of the domain of v.
However (ND) conflicts with the point of introducing world-relative domains. The whole idea was that existence of objects is contingent so that there are accessible possible worlds where one of the things in our world fails to exist.
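
Condition (ND) itself is a simple check on a model's domain function. A minimal sketch follows; the frame, the domains, and the individuals named are invented for the example.

def nested_domains(R, domain):
    """(ND): whenever wRv, the domain of w is a subset of the domain of v."""
    return all(domain[w] <= domain[v] for (w, v) in R)

R = {("w", "v")}
domain = {"w": {"Kripke", "Pegasus"}, "v": {"Kripke"}}
# Something in w's domain fails to exist at the accessible world v -- exactly the
# situation world-relative domains were meant to allow -- so (ND) fails here.
assert not nested_domains(R, domain)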

A straightforward solution to these problems is to abandon classical rules for the quantifiers and to adopt rules for free logic (FL) instead. The rules of FL are the same as the classical rules, except that inferences from ∀xRx (everything is real) to Rp (Pegasus is real) are blocked. This is done by introducing a predicate ‘E’ (for ‘actually exists’) and modifying the rule of universal instantiation. From ∀xRx one is allowed to obtain Rp only if one also has obtained Ep. Assuming that the universal quantifier ∀x is primitive, and the existential quantifier ∃x is defined by ∃xA =df ~∀x~A, then FL may be constructed by adding the following two principles to the rules of propositional logic:

Universal Generalization.  If B→A(y) is a theorem, so is B→∀xA(x).

Universal Instantiation.  (∀xA(x) & En)→A(n)

(Here it is assumed that A(x) is any well-formed formula of predicate logic, and that A(y) and A(n) result from replacing y and n properly for each occurrence of x in A(x).) Note that the principle of universal generalization is standard, but that the instantiation axiom is restricted by mention of En in the antecedent. In FL, proofs of formulas like ∃x□(x=t), ∀y□∃x(x=y), (CBF), and (BF), which seem incompatible with the world-relative interpretation, are blocked.

One philosophical objection to FL is that E appears to be an existence predicate, and many would argue that existence is not a legitimate property like being green or weighing more than four pounds. So philosophers who reject the idea that existence is a predicate may object to FL. However in most (but not all) quantified modal logics that include identity (=) these worries may be skirted by defining E as follows.

Et   =df   ∃x(x=t).
The most general way to formulate quantified modal logic is to create FS by adding the rules of FL to a given propositional modal logic S. In situations where classical quantification is desired, one may simply add Et as an axiom to FS, so that the classical principles become derivable rules. Adequacy results for such systems can be obtained for most choices of the modal logic S, but there are exceptions.

A final complication in the semantics for quantified modal logic is worth mentioning. It arises when non-rigid expressions such as ‘the inventor of bifocals’ are introduced into the language. A term is non-rigid when it picks out different objects in different possible worlds. The semantical value of such a term can be given by what Carnap (1947) called an individual concept, a function that picks out the denotation of the term for each possible world. One approach to dealing with non-rigid terms is to employ Russell's theory of descriptions. However, in a language that treats non-rigid expressions as genuine terms, it turns out that neither the classical nor the free logic rules for the quantifiers are acceptable. (The problem cannot be resolved by weakening the rule of substitution for identity.) A solution to this problem is to employ a more general treatment of the quantifiers, where the domain of quantification contains individual concepts rather than objects. This more general interpretation provides a better match between the treatment of terms and the treatment of quantifiers and results in systems that are adequate for classical or free logic rules (depending on whether the fixed domains or world-relative domains are chosen).

Bibliography

An excellent bibliography of historical sources can be found in Hughes and Cresswell (1968).

Related Entries

actualism | logic: classical | logic: deontic | logic: free | logic: intensional | logic: provability | logic: relevance | logic: temporal | possible worlds

Copyright © 2003
James Garson
jgarson@uh.edu
