Quick Summary:

In our first look at language and logic, we discussed **sentential logic** and the interpretations we can come up with when we treat whole sentences as the basic units of reasoning. We know intuitively that this isn't going to be enough to capture everything about language, because there's a lot going on inside sentences, too. So here, we introduce **predicate logic**, which gives us more power to explain what's happening. Predicates are most often verbs, but they can also be adjectives or nouns that describe a state of being. A predicate enters a sentence needing a certain number of elements to be used correctly, and it takes elements from around it to fill those requirements. We can use logical connectives in predicate logic, much like in sentential logic, along with quantifiers, which operate over groups of possibilities. With predicate logic, we're much closer to the semantics of real languages than we were with sentential logic alone.

Extra Materials:

In this episode, we argued that the tools of sentential logic weren’t powerful enough to capture the complexities of natural, human language. This was because sentential logic doesn’t give us the ability to relate one atomic sentence to another. For instance, we know there’s some relationship between (1) and (2) below; Angel and Darla have something in common with each other.

(1) Angel is a vampire

(2) Darla is a vampire

But this connection between (1) and (2) is lost in sentential logic; different sentences must be represented with different symbols.

(3) A

(4) D

Once we introduce predicates, which are like stand-ins for the more intuitive, real-world idea of properties, we boost the **expressive power** of the language — we increase the number of kinds of things that the language can represent and describe. Now, we can see the relationship.

(5) Va

(6) Vd

Of course, predicate logic introduces quantifiers as well; they help to account for the relationships between quantities of things. For example, if you knew that “every vampire drinks blood”, you also know that “at least one vampire drinks blood”. Represented in sentential logic, it would look something like this:

(7) E

(8) O

Those symbols don’t tell us much about the connection between the sentences they’re supposed to be talking about. Even predicates don’t help much, since you would need to list every individual vampire and say that that vampire drinks blood, in order to get the point of the sentence in (7) across. This task becomes impossible when the individuals under discussion form a large (or even infinite) group.

(9) Ba∧Bb∧Bc∧Bd∧Be∧ . . .

With symbols for the words “every” (∀) and “some” (∃), along with some variables (e.g., x, y, z) to stand in for any individuals we want them to, expressing all these ideas becomes a piece of cake. Or maybe that should be blood pudding.

(10) ∀x(Bx)

(11) ∃x(Bx)
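To see how the quantifiers in (10) and (11) behave, here's a minimal sketch in Python that evaluates them over a small, finite domain. The individual names and the "drinks blood" predicate are hypothetical toy data, not anything from the episode; the point is just that ∀ corresponds to checking every individual and ∃ to checking at least one.

```python
# A toy domain of individuals (hypothetical names, for illustration only).
domain = {"angel", "darla", "spike"}

# Model the predicate B ("drinks blood") as the set of individuals
# it's true of. In this toy domain, every vampire drinks blood.
drinks_blood = {"angel", "darla", "spike"}

def B(x):
    return x in drinks_blood

# ∀x(Bx): B holds of every individual in the domain.
forall_B = all(B(x) for x in domain)

# ∃x(Bx): B holds of at least one individual in the domain.
exists_B = any(B(x) for x in domain)

print(forall_B, exists_B)  # True True
```

Note that with a finite domain, ∀x(Bx) really is equivalent to the long conjunction in (9); the quantifier just saves us from having to write (or even be able to write) every conjunct out by hand.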

Any logical system with this degree of expressive power, the power to quantify over individuals, is known as **first-order logic**. Since sentential logic has no quantifiers at all, it is sometimes dubbed **zeroth-order logic**.

As linguists and philosophers continued to explore language, it became apparent that even first-order logic wouldn’t cut it, and that **higher-order logics** were needed to convey the sorts of ideas and relationships that human languages are capable of communicating.

We’ll be exploring the limitations of first-order logic, and why we need more powerful tools, in future episodes. For now, let’s take a brief glimpse at a couple of different sorts of logic and what they can do that predicate logic — a first-order logic — can’t.

We’ll start one step beyond predicate logic with **second-order logic**. Second-order logic still contains all of the same symbols as predicate logic, including the logical connectives and quantifiers. On top of these, however, it introduces a new set of quantifiers which can quantify over predicates; we can represent these with ∀P and ∃P, which roughly correspond to “for all predicates” and “there exists a predicate” respectively.

To get a sense of how these work, imagine you wanted to deploy this system to express something like the sentence in (12); you might use the symbolization in (13).

(12) Fred and Wesley have something in common

(13) ∃P(Pf∧Pw)

Translated back into English, (13) would sound something like “there exists some property P such that the property is true of both Fred and Wesley”. This sentence would be true, as long as you could find some specific property that did, in fact, apply to both of them — like the fact that they’re both bookish.

It’s pretty easy to come up with something obviously false, too. If we wanted to say that all properties were true of everything, we would use the symbolization in (14).

(14) ∀P∀x(Px)

This sentence can’t actually be true, though, since we can think of some things that aren’t true of every individual — for example, being human, since vampires aren’t. Still, it’s a nice example of a second-order statement.
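One way to get a feel for second-order quantification is to model each predicate as the set of individuals it's true of, and then quantify over those sets. The sketch below does this in Python for (13) and (14); the characters and the particular properties are hypothetical toy data, chosen just to make one statement true and the other false.

```python
# Toy domain of individuals (hypothetical, for illustration only).
domain = {"fred", "wesley", "angel"}

# Model each predicate by its extension: the set of individuals
# it applies to. These toy properties are made up for the example.
properties = {
    "bookish": {"fred", "wesley"},
    "vampire": {"angel"},
}

# (13) ∃P(Pf ∧ Pw): some property is true of both Fred and Wesley.
exists_shared = any(
    "fred" in ext and "wesley" in ext
    for ext in properties.values()
)

# (14) ∀P∀x(Px): every property is true of every individual.
# False here, since "vampire" isn't true of Fred.
all_props_all_x = all(
    x in ext
    for ext in properties.values()
    for x in domain
)

print(exists_shared, all_props_all_x)  # True False
```

The jump to second order shows up in the code as an extra loop: instead of only ranging over individuals, we now also range over the predicates themselves.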

Another extension to logic that can be made is to introduce a new kind of operator — symbols which modify the overall meaning of a sentence — which can express different sorts of **modality**. In plain English, we can introduce symbols which represent the ideas of either **possibility** or **necessity**. Any logical system which does this can be called a **modal logic**.

To keep things simple, let’s add a couple of modal operators to sentential logic. If we wanted to say that “it’s possible to defeat Jasmine”, we could use modal logic to symbolize that idea in (15).

(15) ◇J

That diamond-shaped operator expresses ‘possibility.’ To express ‘necessity’, we use a square-shaped symbol. The sentence in (16) says that “for there to be good, there must necessarily also be evil”.

(16) □(G→E)

Translated back into English, it says something like “it is necessarily the case that if there is good, then there is evil”.

And these operators can be combined with the logical connectives to represent more complex ideas relating to modality, like the idea that defeating Jasmine is **impossible** (17), or that it’s **contingent** and therefore things could go either way (18).

(17) ¬◇J

(18) ◇J∧◇¬J

Now, defining the truth of statements in modal logic can get pretty fancy, because it introduces the idea of **possible worlds** into the mix (if you imagine changing one small detail in the real world, like your own height or weight, you’ve just imagined one of many possible worlds). Logicians treat the necessity and possibility operators a lot like the universal and existential quantifiers, except they quantify over possible worlds instead of individuals. This way, (15) actually ends up meaning something like “there exists at least one possible world where Jasmine is defeated”, while (16) says something like “in all possible worlds, if there is good then there is evil”. It really is a bit like something out of science fiction!
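That quantificational treatment of ◇ and □ can be sketched directly in code: ◇ works like `any` over possible worlds, and □ like `all`. Here's a minimal possible-worlds model in Python, with three hypothetical worlds (the world names and truth assignments are made up for illustration), evaluating (15) and (16).

```python
# A toy set of possible worlds (hypothetical). Each world assigns truth
# values to the atomic sentences: G ("there is good"), E ("there is
# evil"), and J ("Jasmine is defeated").
worlds = {
    "w1": {"G": True,  "E": True,  "J": True},
    "w2": {"G": False, "E": False, "J": False},
    "w3": {"G": True,  "E": True,  "J": False},
}

def possibly(sentence):
    # ◇S: S is true in at least one possible world.
    return any(sentence(w) for w in worlds.values())

def necessarily(sentence):
    # □S: S is true in every possible world.
    return all(sentence(w) for w in worlds.values())

# (15) ◇J: there is some world where Jasmine is defeated — true via w1.
print(possibly(lambda w: w["J"]))

# (16) □(G → E): in every world, if there is good, there is evil.
# The conditional G → E is equivalent to (not G) or E.
print(necessarily(lambda w: (not w["G"]) or w["E"]))
```

Notice that this is literally the first-order quantifier sketch again, just with worlds in place of individuals — which is exactly the parallel the paragraph above draws.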

But as invaluable as possible worlds and modal logic have become to linguists, they only really show up when you start doing very high level semantics work. Still, as we continue to explore meaning in natural language, we’ll quickly realize that our tools have to be enriched to handle even some of the most common expressions.

Discussion:

So how about it? What do you all think? Let us know below, and we’ll be happy to talk with you about the different kinds of logic that underlie language. There’s a lot of interesting stuff to say, and we want to hear what interests you!
