Quick Summary:

We've talked in the past about predicate logic and set theory, and how we can use them to represent the mathematics that underlies the meanings that we derive from language. And predicate logic does a good job of handling some of the quantifiers that crop up in natural language, like all or some. But what about the other ones that we encounter, like few or many or most? It turns out that predicate logic can't handle those: no matter how you try to combine the symbols, it won't capture our intuitions about how they should work. So we need to set up something else.

Fortunately, set theory does a good job with these quantifiers: we can treat words like all, some, no, most, and the rest as creating comparisons between sets. This gives us what’s known as generalized quantifier theory. And beyond that, when we use sets instead of predicate logic, we can see that all the quantifiers in natural human language share some properties, like conservativity, in which a quantifier only cares about elements in the first set it combines with. So it ignores anything in the second set that’s not already in the first. It’s not that we can’t come up with non-conservative quantifiers; they’re easy enough to define. But we never find them in language, and when kids are presented with non-conservative quantifiers, they don’t seem to acquire them properly. So it really looks like conservativity is something hard-wired into our semantic system!

 

Extra Materials:

In the episode, we argued that the kind of logic we had been working with up until now wasn’t quite powerful enough to represent the sorts of things we’re able to say in a natural language. So, in order to move forward, we need set theory to really get a grip on the meanings of words like “most”, “all”, “no”, and others.

But, in moving from predicate logic over to set theory, we haven’t actually tossed logic out the airlock. All of those predicates and connectives and quantifiers that we’ve been talking about, taken together, make up first-order logic. In rejiggering quantifiers into the sorts of things that compare sets, we’ve quietly entered the realm of second-order logic. (See this blog entry for a handy, 1-page overview of the different sorts of logic.)

Remember that in first-order logic, we can apply predicates to individuals. So, if we had the symbol “d” stand in for a person named Duane Dibbley, and we wanted to say something about him — for example, that he’s a bit goofy — we could write it out like this:

     (1) Gd

But recall from our episode all about set theory that the word “goofy”, which is an adjective, can also be treated as a set — the set of all things goofy, of which the element d happens to be a member.

     (2) G = {x | x is goofy}
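To make that connection a bit more concrete, here’s a tiny sketch in Python (the miniature domain and the choice of who counts as goofy are made up purely for illustration): the first-order claim “Gd” comes out true exactly when the individual d is a member of the set G.

     # A made-up miniature model of the world
     domain = {"duane", "lister", "kryten", "holly"}   # everything in our toy world
     G = {"duane", "lister"}    # G = {x | x is goofy}: pretend these are the goofy ones

     d = "duane"                # the individual d, standing in for Duane Dibbley
     print(d in G)              # True: "Gd" holds just in case d is a member of G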

Because of this tight connection between first-order logic and set theory, when we consider a sentence like (3), we can either look at it as a quantified noun phrase followed by a predicate, or as a quantified noun phrase followed by a set.

     (3) Some cat is goofy

If we think of it in that second way, the quantified noun phrase can be analyzed as a second-order predicate, applying not to an individual, but rather to a set of individuals. To keep track of which predicates are first-order and which are second-order, we can use that double-struck “C” for “some cat”, while keeping “G” for “goofy”.

     (4) ℂG

Really, this is just another side of the same coin. But thinking about quantified sentences in this way (i.e., as cases of second-order predication) opens up the field for some interesting questions. For instance, if the whole quantified noun phrase “some cat” is a second-order predicate, and the noun “cat” by itself can be thought of as yet another set, what does that make the quantifier on its own? We know it compares sets and spits out either “true” or “false”, but how does it do that? And what about all the words we’ve been ignoring up until now — those little ones with barely any meaning, like “is” or “to”? Or bigger ones, like modals and adverbs? We’ll have to leave this story at a bit of a cliffhanger, but now we’ve got nearly everything we need to resolve it.
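In the meantime, here’s a rough preview of the “comparing sets” part, sketched in Python (the sets are invented and the function names are ours, though the definitions follow the standard set-theoretic treatment of these quantifiers): a quantifier takes the noun’s set and hands back a second-order predicate, which then checks the verb phrase’s set.

     C = {"frankenstein", "the cat"}        # a made-up set of cats
     G = {"the cat", "duane", "lister"}     # a made-up set of goofy things

     # "some" compares two sets: true just in case they overlap at all
     def some(A):
         return lambda B: len(A & B) > 0

     # "all": every member of A is also in B; "no": A and B don't overlap;
     # "most": more than half of A is also in B
     def all_(A):
         return lambda B: A <= B
     def no(A):
         return lambda B: len(A & B) == 0
     def most(A):
         return lambda B: len(A & B) > len(A) / 2

     some_cat = some(C)    # our double-struck C from (4): a predicate over sets
     print(some_cat(G))    # True: "Some cat is goofy", since C and G overlap

Seen this way, the formula in (4) is just some_cat applied to G.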


As a follow-up to the idea that we don’t see non-conservative quantifiers in natural language, it’s important to address some apparent exceptions. If you think about the words “only” and “even”, they seem to pose some pretty serious problems. It looks like, in both cases, we can’t get away with only paying attention to the set they directly combine with; we’ve got to widen our scope of interest. Notice that the sentence in (5) below can’t really be reworded as the sentence in (6).

     (5) Only maintenance bots can fix the ship’s computer

     (6) Only maintenance bots are maintenance bots that can fix the ship’s computer

If you think about it a little, that first sentence in (5) means that no one besides the maintenance bots can fix the computer. That second sentence in (6) doesn’t seem to rule out the possibility that, for example, some member of the crew could also fix the computer. In fact, (6) is on the verge of being hollow: of course only maintenance bots are maintenance bots!

In (5) above, the word “only” (this also applies to “even”) has to consider not just the maintenance bots, but also a whole bunch of other potential candidates for who might be able to fix the ship’s computer. It looks like a genuine counterexample to our supposed universal.
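To see the worry in set terms, here’s a quick Python check (again just a toy model of ours, with “only” provisionally defined the way a quantifier treatment would suggest: the second set has to be contained in the first). Conservativity says a quantifier should give the same answer for Q(A)(B) as for Q(A)(A ∩ B); “only”, on that definition, doesn’t.

     bots   = {"bot1", "bot2"}        # the maintenance bots (made up)
     fixers = {"bot1", "lister"}      # things that can fix the computer (made up)

     def only(A):
         return lambda B: B <= A      # "only A are B": everything in B is also in A

     # Conservativity check: Q(A)(B) should always equal Q(A)(A & B)
     print(only(bots)(fixers))           # False: Lister, who isn't a bot, can fix it too
     print(only(bots)(bots & fixers))    # True: trivially, only bots are bots that can fix it
     # The two answers come apart, so "only", read as a quantifier, isn't conservative.

     # Compare "all", which can never come apart, no matter what the sets are:
     def all_(A):
         return lambda B: A <= B
     print(all_(bots)(fixers) == all_(bots)(bots & fixers))   # True: conservative

Those two print lines track the difference between (5) and (6) above: a conservative quantifier could never tell them apart.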

However, on closer inspection, these words don’t quite fit the profile of a quantifier. For one, they show up where quantifiers usually don’t:

     (7a) Lister only asked for fresh mango juice

     (7b) *Lister many/few asked for fresh mango juice

     (8a) Lister asked for only fresh mango juice

     (8b) *Lister asked for many/few fresh mango juice

In fact, they’re distributed more like adverbs than anything:

     (9a) Lister repeatedly asked for fresh mango juice

     (9b) Lister asked for impossibly fresh mango juice

Because they show up before verbs and adjectives, in positions where quantifiers can’t go, linguists have classified words like “only” and “even” as adverbs rather than quantifiers, ultimately rescuing the universal that natural language quantifiers are conservative.

 

Discussion:

So how about it? What do you all think? Let us know below, and we’ll be happy to talk with you about generalized quantifiers and how set theory handles them. There’s a lot of interesting stuff to say, and we want to hear what interests you!

