Our conversations may not always behave entirely logically, but we still have rules to follow. In this episode, we talk about the Cooperative Principle and the different Conversational Maxims that give us the rails our interactions run down.
Semantic Scope Ambiguity
Why do people interpret the same sentence multiple ways? What is it about semantics that leads us to more than one meaning? We take on semantic scope, and talk about how the most innocent-seeming words in your sentence are fighting it out to bestow upon you an interpretation where they come out on top.
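The two competing readings of a scope-ambiguous sentence can be made concrete in a toy model. This is only an illustrative sketch: the sentence "Every student read a book", the names, and the tiny model below are all invented for the example.

```python
# Toy model for "Every student read a book".
students = {"ana", "ben"}
books = {"b1", "b2"}
read = {("ana", "b1"), ("ben", "b2")}  # each student read a different book

# Surface scope, every > a: for each student, some book they read.
every_a = all(any((s, b) in read for b in books) for s in students)

# Inverse scope, a > every: one single book that every student read.
a_every = any(all((s, b) in read for s in students) for b in books)

print(every_a, a_every)  # True False — one sentence, two truth conditions
```

In this model the surface-scope reading is true but the inverse-scope reading is false, which is exactly why the choice of which quantifier "comes out on top" matters.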
Implicatures, Entailments, and Presuppositions
How do we puzzle out what meanings lie beyond our sentences? What different flavours of extra meanings are there? In this episode, we talk about implicatures, entailments, and presuppositions: how to define them, what the differences are between them, and how they enrich our understanding of the world.
Sentential Logic
How much does logic structure our sentences, and what kind of logic should we use? In this episode, we talk about sentential logic: where it came from, how we connect things up systematically, and in what ways language looks like it moves away from pure logic.
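The systematic connecting-up that sentential logic provides can be sketched as a truth table for the connectives. A minimal sketch; the `implies` helper below is the standard material conditional, not a claim about how natural-language "if" works.

```python
from itertools import product

# Material conditional: false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

# Truth table for "and", "or", and the material conditional.
rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, p and q, p or q, implies(p, q)))

# Note that implies(p, q) is true whenever p is false — one spot where
# natural-language "if" seems to move away from pure logic.
print(rows)
```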
Predicate Logic
What kind of logic can we find inside sentences? How do we calculate the meaning from what we hear? In this episode, we talk about predicate logic: why we need it, how it differs from sentential logic, and how we can combine it with quantifiers to capture the full meaning of our language.
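Combining predicates with quantifiers can be sketched by evaluating formulas in a tiny model. The domain, predicates, and individuals below are invented for illustration.

```python
# A small model: a domain plus the extensions of two predicates.
domain = {"ana", "ben", "cat1"}
student = {"ana", "ben"}
smiles = {"ana", "ben", "cat1"}

# ∀x (student(x) → smiles(x)): "Every student smiles"
every_student_smiles = all((x not in student) or (x in smiles) for x in domain)

# ∃x (student(x) ∧ smiles(x)): "Some student smiles"
some_student_smiles = any((x in student) and (x in smiles) for x in domain)

print(every_student_smiles, some_student_smiles)  # True True
```

Unlike sentential logic, which only connects whole propositions, this lets us look inside the sentence at predicates and the individuals they apply to.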
Set Theory and Adjectives
How do we form sets of different elements? How do those collections contribute to meaning? In this episode, we talk about set theory: the basics of how it works, how it connects to adjectives, and how it informs the way we build up larger meanings from individual words.
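The connection between sets and adjectives can be sketched with the common textbook treatment of intersective adjectives: the meaning of "brown dog" is the intersection of the set of brown things with the set of dogs. The denotations below are invented for the example.

```python
# Intersective adjective meaning as set intersection.
dogs = {"rex", "fido", "spot"}
brown_things = {"rex", "spot", "desk"}

# [[brown dog]] = [[brown]] ∩ [[dog]]
brown_dogs = dogs & brown_things
print(sorted(brown_dogs))  # ['rex', 'spot']
```

Non-intersective adjectives like "former" or "alleged" resist this simple treatment, which is part of what makes the topic interesting.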
Common Ground
How do we work with the people we're talking with to make conversation flow smoothly? What tools can we use to show we're engaged? In this episode, we talk about common ground: how we build it, whether it differs for face-to-face vs. online communications, and how memes and new turns of phrase can help conversations and communities along.
Relevance Theory
How can we tell what's relevant when we try to work out what other people mean? What can experiments tell us about how much we'll consider when puzzling out meaning? In this episode, we talk about relevance theory: how it can help us more scientifically approach relevance in our discussions, how it interacts with the rest of our understanding of the rules of conversations, and how we can play with relevance in experiments to make people more or less likely to behave in logical ways.
Generalized Quantifier Theory
How can we tell what words like "few" and "many" do in our sentences? What's the right way to represent these words in our minds? In this episode, we talk about generalized quantifier theory: what the math for quantifiers should look like, what properties natural language quantifiers seem to all share, and what that means for how kids can learn them.
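The math for quantifiers can be sketched by treating each quantifier as a relation between two sets: a restrictor A and a scope B. This uses "every", "some", and "most" for concreteness (context-dependent "few" and "many" need extra machinery), and it checks conservativity, one of the properties natural-language quantifiers seem to share.

```python
# Generalized quantifiers as relations between sets.
def every(A, B): return A <= B
def some(A, B): return bool(A & B)
def most(A, B): return len(A & B) > len(A - B)

A = {1, 2, 3}
B = {1, 2, 4}

# Conservativity: Q(A, B) holds iff Q(A, A ∩ B) does — the quantifier
# only ever "looks at" the part of B that overlaps its restrictor.
for Q in (every, some, most):
    assert Q(A, B) == Q(A, A & B)

print(most(A, B))  # True: two of the three As are Bs
```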
Lambda Calculus
How can we capture the meanings of transitive sentences? How do we match our syntax trees to our semantics? In this episode, we talk about lambda calculus: why we need it to explain what our other semantic machinery can't, how to work out its math, and what it can show us about how words move around in sentences.
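The way lambda calculus builds a transitive sentence can be sketched directly: the verb meaning λy.λx.likes(x, y) applies first to its object, then to its subject, mirroring the syntax tree. The individuals and facts below are invented.

```python
# A tiny model of who likes whom.
likes_facts = {("ana", "ben")}

# λy.λx. likes(x, y): the verb takes the object first, then the subject.
likes = lambda y: lambda x: (x, y) in likes_facts

vp = likes("ben")      # [[likes Ben]] — a property true of Ben-likers
sentence = vp("ana")   # [[Ana likes Ben]]
print(sentence)        # True
```

Each application step corresponds to one branching node in the tree, which is what lets the semantics track the syntax so closely.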
Negative Polarity Items
Why can't we just use "ever" or "at all" in any sentence we want? What do we have to change about how a sentence works to let words like those in? In this episode, we talk about negative polarity items, or NPIs: when they can show up, why their name is misleading, and how changing what a sentence entails changes everything for these little terms.
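The entailment change that matters for NPIs can be sketched with sets: a downward-entailing environment licenses inference from a set to its subsets, and NPIs like "ever" want to sit in such environments. This set-based test is an illustrative simplification, with invented extensions.

```python
dogs = {"rex", "fido"}
brown_dogs = {"rex"}        # a subset of dogs
barked = {"cat1"}           # no dog barked in this model

def no(A, B): return not (A & B)    # "No A is B" — downward entailing on A
def some(A, B): return bool(A & B)  # "Some A is B" — upward entailing

# If "No dog barked" is true, "No brown dog barked" follows —
# the inference runs from set to subset, licensing NPIs like "ever".
print(no(dogs, barked), "->", no(brown_dogs, barked))  # True -> True
```

The upward-entailing "some" supports no such subset inference, which lines up with "Some dog ever barked" sounding bad.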
Information Structure
How do we focus on crucial information in our conversations? What methods do we have for moving things into the center of discussion? In this episode, we talk about information structure: how we build up the common ground in discussion, what we do to bring up topics and signal our focus, and how different languages use varying strategies to bring new ideas to the fore.
Type Theory
How do we combine words to build full propositions? How do we account for what people believe, not just what's definitely true? In this episode, we talk about type theory: how we can define terms by how they relate to the world and each other, what the difference is between sense and reference, and how we can use possible worlds to work out what people believe.
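The possible-worlds treatment of belief can be sketched by modeling a proposition as the set of worlds where it holds, and an agent's beliefs as the worlds compatible with what they believe. All the worlds and attitudes below are invented for illustration.

```python
worlds = {"w1", "w2", "w3"}
it_rains = {"w1", "w2"}            # proposition: the worlds where it rains

# The worlds compatible with everything Ana believes.
ana_belief_worlds = {"w1", "w2"}

# "Ana believes it rains": every belief world is a rain world.
believes = ana_belief_worlds <= it_rains
print(believes)  # True
```

The belief can be true even if the actual world is w3, which is how this setup separates what someone believes from what's definitely true.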
The Semantics and Pragmatics of Presupposition
What's the difference between "thinking" and "knowing"? What rules do we follow for adjusting our conversational worlds? In this episode, we delve into the semantics and pragmatics of presuppositions: which words come equipped with them, how presuppositions depend on the situation and our mental worlds, and what antipresuppositions can tell us about the mechanics of interpreting sentences.
Modality
How do we capture the meaning of "may" or "can"? What kinds of linguistic math do we need to understand them? In this episode, we take a look at modality: where words like "must" fit in our meanings; how we consider many ways the world could be to account for their semantics; and how the same string of sounds can have a lot of flavours.
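Considering many ways the world could be can be sketched Kripke-style: "must" quantifies universally and "may" existentially over a set of accessible worlds, and different flavours of the same modal correspond to different accessibility relations. The worlds and relations below are invented.

```python
p = {"w1", "w2"}              # worlds where the proposition holds

epistemic = {"w1", "w2"}      # worlds compatible with the evidence
deontic = {"w1", "w3"}        # worlds compatible with the rules

def must(acc, prop): return acc <= prop      # p in every accessible world
def may(acc, prop): return bool(acc & prop)  # p in some accessible world

# The same "must" sentence can differ in truth value by flavour:
print(must(epistemic, p), must(deontic, p))  # True False
```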
Event Semantics
How do events factor into our mental linguistics? How can we adjust our logic to capture different sentence permutations? In this episode, we take a look at event semantics: what problems they're meant to solve, how they help us limit time and place in our sentences, and what evidence we have that events are real.
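The neo-Davidsonian idea can be sketched as: a sentence claims there's an event, and each modifier just adds another conjunct about that event. The event and its roles below are invented for the example.

```python
# One event record: ∃e [stab(e) ∧ agent(e, brutus) ∧ patient(e, caesar) ∧ ...]
events = [
    {"id": "e1", "kind": "stab", "agent": "brutus", "patient": "caesar",
     "place": "forum", "manner": "quickly"},
]

# A sentence holds if some event satisfies all its conjuncts.
def holds(conds):
    return any(all(e.get(k) == v for k, v in conds.items()) for e in events)

print(holds({"kind": "stab", "agent": "brutus", "place": "forum"}))  # True
# Dropping a modifier just drops a conjunct, so the entailment is free:
print(holds({"kind": "stab", "agent": "brutus"}))  # True
```

That free entailment from "stabbed Caesar in the forum" to "stabbed Caesar" is one classic argument that events are real parts of our semantics.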
The Syntax and Semantics of Pronouns
How do we know who "he" is? And how does "he" differ from "himself" when we interpret it? In this episode, we talk about the syntax and semantics of pronouns: how we can place them in sentences, how they link up to variables, and the role of context in how we interpret them.
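How pronouns link up to variables can be sketched with an assignment function: context supplies a mapping from indices to individuals, and a pronoun is interpreted as whatever its index maps to. The indices and names below are invented.

```python
# An assignment function from variable indices to individuals.
g = {1: "sam", 2: "alex"}

def interpret(pronoun_index, assignment):
    # "He_i" denotes whoever the context assigns to index i.
    return assignment[pronoun_index]

# "He_1 smiled" and "He_2 smiled" pick out different people under g:
print(interpret(1, g), interpret(2, g))  # sam alex
```

Bound readings (as with "himself") then come from quantifiers shifting the assignment, rather than from context fixing it once.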
The Semantics of Conditionals
What's in our minds when we throw an if/then sentence out there? How do we work out what worlds we may be talking about? In this episode, we talk about the semantics of conditionals: what an "if" looks like logically, why a simple logical arrow isn't enough to capture the complexities of conditionals, and how we change what possibilities we allow ourselves to think of based on what our "if" clause holds.
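The idea that the "if" clause narrows our possibilities can be sketched by letting the antecedent restrict the set of worlds we check, rather than using a bare material arrow. The worlds and facts below are invented for the sketch.

```python
worlds = {
    "w1": {"rain": True, "wet": True},
    "w2": {"rain": True, "wet": True},
    "w3": {"rain": False, "wet": False},
}

# "If it rains, the ground is wet": the antecedent selects the worlds
# to consider, and the consequent must hold throughout them.
if_worlds = {w for w, facts in worlds.items() if facts["rain"]}
conditional = all(worlds[w]["wet"] for w in if_worlds)
print(conditional)  # True
```

World w3, where it doesn't rain, never enters the calculation — that's the restriction the "if" clause performs.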