In the episode, we argued that we can boil semantics down to two basic primitives: entities that we can point to in the world, and the truth value, or whether a proposition is true or false. But those two aren’t enough to form a complete picture of how a sentence’s meaning gets built up out of its parts. Specifically, when it comes to verbs like “believe” or “hope” that embed whole other sentences, it isn’t always useful for them to know whether the sentence they combine with is true or false. The sentence “Will was found in the canyon” might really be false, but the sentence “Everyone believes Will was found in the canyon” can still end up true, as long as everyone’s mistaken about what happened.
(1) “Will was found in the canyon” = False
(2) “Everyone believes Will was found in the canyon” = True
What matters more are people’s beliefs about the world, not the way the world actually is. If someone — say, Joyce — discovers that Will wasn’t found in the canyon after all, then it’s no longer true that everyone believes it.
(3) “Everyone believes Will was found in the canyon” = False
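To make (1)–(3) concrete, here's a toy sketch in Python, with the world names and believers invented purely for illustration: a proposition is a function from worlds to truth values, and "everyone believes p" checks p against each person's belief world rather than the actual one.

```python
# Two hypothetical worlds (names made up for this example).
actual = "world_where_will_was_not_found"
mistaken = "world_where_will_was_found"

# A proposition: a function from a world to a truth value.
def will_found_in_canyon(world):
    return world == mistaken

# Each agent paired with the world they take to be actual.
belief_worlds = {"Joyce": mistaken, "Hopper": mistaken, "Mike": mistaken}

def everyone_believes(proposition, beliefs):
    # True just in case the proposition holds in every agent's belief world.
    return all(proposition(w) for w in beliefs.values())

print(will_found_in_canyon(actual))                            # (1): False
print(everyone_believes(will_found_in_canyon, belief_worlds))  # (2): True

# Joyce discovers the truth, so her belief world changes...
belief_worlds["Joyce"] = actual
print(everyone_believes(will_found_in_canyon, belief_worlds))  # (3): False
```

Notice that the actual world never enters into (2) and (3) at all: only the agents' belief worlds do.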
In this case, even though the embedded sentence isn’t true, what matters more than its relationship to the actual world is its content — the underlying idea that it represents. To deal with this, we introduced possible worlds into our theory of meaning. These worlds are just hypothesized spaces that can be almost exactly like the real world, or radically different. And so instead of simply being true or false, sentences represent connections between a possible world and a truth value. In the case of (1) above, when the sentence is applied to the world in which Will wasn’t found in the canyon, it spits out “False.”
To put this in terms of types: a completed sentence like “Jonathan cries” can be thought of as combining with a world and producing a truth value, making the sentence type <st> (not <wt>, as you might expect; sadly, the standard notation isn’t as transparent as it could be).
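Here's one minimal way to picture a type <st> meaning in Python, assuming a made-up encoding where a world is just a record of who's crying in it:

```python
# "Jonathan cries" as a type <st> meaning: a function from a world
# to a truth value. The world encoding (a dict listing the criers)
# is invented for illustration.
def jonathan_cries(world):
    return "Jonathan" in world["criers"]

print(jonathan_cries({"criers": {"Jonathan"}}))  # True
print(jonathan_cries({"criers": set()}))         # False
```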
At this point, though, you might notice a bit of an anomaly: we’ve similarly claimed that a verb like “cry” combines with an entity like “Jonathan” to produce a sentence with a truth value, making the verb type <et>. How can a sentence end up as type <st> when its main verb is supposed to have produced a sentence that’s either true or false, with no possible worlds in sight? Where does this extra type come from?
One way is to actually build possible worlds into the meanings of verbs, so that something like “cry” would be type <e<st>>, meaning it combines with an entity that acts as its subject, and then outputs a sentence of type <st> — a function from possible worlds to truth values, which can then either stand on its own, or go on to easily combine with more words to form even bigger sentences.
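As a rough illustration of that first option (again using an invented world encoding), a type <e<st>> verb can be modeled as a curried function: feed it an entity, and you get back an <st> sentence that's still waiting for a world.

```python
# "cry" as type <e<st>>: combine it with an entity, get back an
# <st> meaning. Worlds here are dicts listing who cries in them,
# a toy encoding for the sake of the example.
def cry(entity):
    def sentence(world):  # the resulting <st> meaning
        return entity in world["criers"]
    return sentence

jonathan_cries = cry("Jonathan")  # type <st>: still needs a world

w = {"criers": {"Jonathan", "Will"}}
print(jonathan_cries(w))  # True
print(cry("Mike")(w))     # False
```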
Doing things this way is a bit messy, though. The meanings of many different kinds of words and phrases, beyond just sentences, can vary according to the world they’re applied to. A phrase like “President of the United States” can apply to someone like Ronald Reagan in our own world, or to Walter Mondale in a world where Reagan lost the 1984 election. In other words, the physical entity that such a phrase picks out can actually vary from world to world, making it type <se> — what’s known as an individual concept. But when we keep going like this, slapping an “s” onto the front of everything, we run into problems combining all of our meanings (e.g., a verb of type <e<st>> would have a hard time combining with a subject of type <se>, since their types are incompatible).
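To see the individual concept idea concretely, here's a sketch with the world names made up for the example: a type <se> meaning hands back a different entity depending on which world you feed it.

```python
# "President of the United States" as an individual concept, type <se>:
# a function from worlds to entities. World names are hypothetical.
def president(world):
    return {"actual_1985": "Ronald Reagan",
            "mondale_wins": "Walter Mondale"}[world]

print(president("actual_1985"))   # Ronald Reagan
print(president("mondale_wins"))  # Walter Mondale

# The mismatch: a verb of type <e<st>> wants an entity as its input,
# but president is type <se> -- a function, not an entity -- so the
# two can't combine by ordinary functional application.
```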
An alternative approach that’s a bit cleaner is to keep the meanings of all of our words and phrases as they are, and only treat sentences as type <st> when we need to. In other words, certain verbs, like “believe” and “promise” and “claim,” will trigger a new rule, which in some circumstances replaces our old way of applying functions to inputs: Intensional Functional Application. Put simply, under the right circumstances, instead of applying a word like “believe” to the truth value of a sentence, we apply it to that sentence’s intension, which is the core idea that the sentence represents. In all other cases, sentences are just true or false.
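Here's a toy sketch of how Intensional Functional Application might work, with hypothetical worlds and belief sets: instead of handing "believe" a truth value, we hand it the sentence's intension, the <st> function itself, which it can then check against the subject's belief worlds.

```python
# An <st> intension: a function from worlds to truth values.
# World names and belief sets are invented for illustration.
def will_was_found(world):
    return world == "w_found"

# The worlds compatible with each agent's beliefs.
BELIEFS = {"Joyce": {"w_found"}}

def believe(intension):
    # IFA: "believe" takes the proposition itself, not a truth value.
    def vp(subject):
        return all(intension(w) for w in BELIEFS[subject])
    return vp

# Even though the embedded sentence is false in the actual world...
print(will_was_found("w_not_found"))     # False
# ...the belief report can still come out true.
print(believe(will_was_found)("Joyce"))  # True
```

The key design point is that `believe` never evaluates its argument at the actual world; it only ever evaluates it at the subject's belief worlds.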
If we add this new rule, we keep all that we’ve gained with our semantic system so far, and add only the tools we absolutely need to handle what we couldn’t make sense of before.
So how about it? What do you all think? Let us know below, and we’ll be happy to talk with you about type theory and possible worlds. There’s a lot of interesting stuff to say, and we want to hear what interests you!
Previous Topic: The Optimal Solution
Next Topic: Coming soon!