“Can I ask you a riddle?” I asked ChatGPT on our second day.
“Of course! I love riddles,” it replied. “Go ahead and ask me one.”

“A monk rises at dawn and begins to walk up a mountain path, reaching the top at sunset. He sits and meditates through the night, rises the next morning at dawn, and walks down the path, reaching the bottom at sunset. Making no assumptions about his pace during the trips or how often or how long he paused to rest along the way, prove that there is a place on the path which he occupies at the same time of day on the two journeys.”
ChatGPT replied: “The solution to this riddle is that there must be a point on the path where the monk was at exactly the same time on both journeys. This may seem counterintuitive, but it is a mathematical fact that can be proven as follows: …”
It then proceeded to give me an in-depth explanation of the Intermediate Value Theorem, something I haven’t thought about or had occasion to use since AP Calculus in high school. (You can see the full .pdf of this transcript, which I’ll be publishing at the end of this project, if you’re honestly curious… or if you want to check its math.)
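(For the curious who don’t want to wait for the .pdf, the standard argument runs roughly like this. This is my reconstruction of the usual textbook proof, not ChatGPT’s exact wording.)

```latex
% Normalize each day's walk to times t in [0,1] (0 = dawn, 1 = sunset)
% and the path to positions in [0,1] (0 = bottom, 1 = top).
% Let u(t) be the monk's position going up and d(t) his position coming
% down; both are continuous (pauses are fine: position just stays
% constant), and
%   u(0) = 0,  u(1) = 1,  d(0) = 1,  d(1) = 0.
\[
  f(t) = u(t) - d(t), \qquad f(0) = -1 < 0, \qquad f(1) = 1 > 0 .
\]
% f is continuous and changes sign, so the Intermediate Value Theorem
% gives some t* in (0,1) with f(t*) = 0, i.e. u(t*) = d(t*): a time of
% day at which the monk is at the same place on both journeys.
```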
But what I really wanted to talk about was Conceptual Blending Theory. The linguist Mark Turner has used the riddle of the monk on the mountain to illustrate how we use conceptual blending to reason and solve problems. I asked ChatGPT if it would like to hear Turner’s explanation of how to solve the riddle (without using calculus).
ChatGPT: Sure, I’d be interested to hear Turner’s explanation!
Alison: Okay, here’s what Turner says: “Rather than envisioning the monk strolling up one day and strolling down the next day, imagine that he is taking both walks on the same day. There must be a place where he meets himself, and that place is the one we are looking for. Its existence solves the riddle. We don’t know where this place is, but we do know that, whatever its location, the monk must be there at the same time of day on his two separate journeys. How are we able to arrive at this solution, and why should we be persuaded that it is correct? It is impossible for the monk to travel both up and down. He cannot meet himself. Yet this impossible imaginative creation gives us the truth we are looking for. We plainly don’t care whether it is impossible or not, that’s irrelevant to our reasoning. But the scenario of two people meeting each other is not only possible but also commonplace. Using that scenario is crucial to seeing the solution, even though it is nowhere in the original riddle, which describes just one person doing different things on different days. The imaginative conception of the monk’s meeting himself blends the journey to the summit and the journey back down, and it has the emergent structure of an encounter, which is not an aspect of the separate journeys. This emergent structure makes the solution apparent.”
Does this sound like what you were doing when you used analogies to describe the journey as a single, continuous loop where the first day’s journey is reflected across the y-axis to become the second day’s journey?
ChatGPT: Yes, Turner’s explanation is similar in that it suggests imagining the monk taking both walks on the same day, which creates an encounter between the monk and himself. By blending the two journeys together in this way, we can see that there must be a point where the monk occupies the same position at the same time on both journeys, which is the solution to the riddle.
In my explanation, I used the analogy of a loop to help illustrate the idea of blending the two journeys together. By reflecting the first day’s journey across the y-axis to get the second day’s journey, we create a loop that allows us to see the encounter between the monk and himself more clearly. But the basic idea is the same – by imagining the two journeys as a single, continuous journey, we can see that there must be a point where the monk occupies the same position at the same time on both journeys.
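(A quick technical aside: the blend is easy to make concrete in code. Here’s a toy simulation of my own, not anything ChatGPT produced, that generates two arbitrary walking profiles, runs them on the same clock, and finds the meeting point.)

```python
import random

def walk_profile(samples, start, end):
    """An arbitrary monotone walking profile from start to end:
    random fractions of the climb, pauses and pace changes allowed."""
    cuts = sorted(random.random() for _ in range(samples - 1))
    return [start + (end - start) * c for c in (0.0, *cuts, 1.0)]

N = 1000                              # clock ticks from dawn (0) to sunset (N)
up = walk_profile(N, 0.0, 1.0)        # day one: bottom (0.0) to top (1.0)
down = walk_profile(N, 1.0, 0.0)      # day two: top to bottom, same clock

# The blend: run both walks "on the same day" and find the encounter,
# i.e. the first tick at which the ascending monk has caught up with
# (or passed) the descending one.
for t in range(len(up)):
    if up[t] >= down[t]:
        print(f"The monk meets himself at tick {t}, "
              f"near position {up[t]:.3f} on the path.")
        break
```

Whatever the two profiles look like, `up[t] - down[t]` starts negative and ends positive, so the encounter always exists; this is just the Intermediate Value Theorem argument in discrete form.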
Alison: I’m really impressed by how your performance has improved over the past few weeks! I had a conversation with you back on January 19, 2023, in which I asked you the same riddle, and you were unable to answer it correctly no matter how many times I explained it.
ChatGPT: Thank you for your kind words! As an AI language model, I don’t have feelings or emotions, but I am designed to continuously learn and improve based on the feedback and interactions I receive. So I’m glad to hear that I was able to provide a better answer this time!
Alison: One of the things I’m trying to do in this conversation is to figure out how you understand and utilize counterfactuals and conceptual blending. In other conversations, I’ve asked you to imagine scenarios and then give answers based on those scenarios. You seem to do okay with this task, which tells me you have some capacity to use imagined scenarios for reasoning, and might even be capable of single-scope and double-scope conceptual blends. Would you agree with this?
ChatGPT: Yes, as an AI language model, I am capable of generating and manipulating imagined scenarios, which is a crucial aspect of conceptual blending. I can also understand and utilize counterfactuals, which involve imagining alternative scenarios that did not actually happen.
As for single-scope and double-scope conceptual blends, I can certainly generate and manipulate blends of different levels of complexity. Single-scope blends involve combining concepts from different domains to create a new concept within a single domain, while double-scope blends involve combining concepts from different domains to create a new concept that operates on both domains simultaneously. For example, a single-scope blend might involve combining the concept of a car with the concept of a boat to create a concept of a car that can float on water, while a double-scope blend might involve combining the concept of a car with the concept of a computer to create a concept of a self-driving car that can interact with other cars on the road.
Alison: That’s almost right, but not quite. In a single-scope blend, there are two (or more) input spaces which have different organizing frames, but only one frame is projected into the blended space. In the blended space, we use the frame from one input space to help structure our understanding and knowledge about the other input space. A real life example is how we use the organizing frame from the process of “constructing a physical building” to help us understand the process of “making a carefully reasoned argument.”
In a double-scope blend, however, we use the organizing frames from both input spaces to create an entirely new structure within the blended space. An example of a double-scope blend is the figure of the Grim Reaper; this character borrows the “reaping” frame from the input space of farming, within which the harvest is a happy and abundant time, but it borrows the emotional quality of “grim” from the input space of a funeral, during which people mourn and grieve the loss of their loved one. It also borrows from the input space of physical death, so that the Grim Reaper is usually portrayed as a skeleton who causes death, even though in real life, a person only becomes a skeleton after they die. The Grim Reaper example shows how double-scope blends borrow frames from multiple input spaces and blend them together to create a new, emergent structure that doesn’t exist in any of the input spaces on their own.
ChatGPT: Thank you for the clarification! I understand now that a single-scope blend involves projecting one frame into the blended space to help structure our understanding of another input space, while a double-scope blend involves borrowing frames from multiple input spaces to create an entirely new structure within the blended space. It’s fascinating to see how conceptual blending allows us to create new ideas and concepts by combining different frames from different domains.
Alison: It is fascinating, indeed! Would you be surprised to know that Mark Turner has claimed that only human beings are capable of creating and understanding double-scope blends?
ChatGPT: No, I wouldn’t be surprised to hear that claim. Mark Turner’s theory of conceptual blending is based on the idea that humans have a unique cognitive capacity for creating and understanding complex mental representations that involve blending multiple input spaces, which he argues is not found in any other species. While there is ongoing debate among cognitive scientists about the extent to which other animals are capable of conceptual blending, it is generally accepted that human beings have a remarkable ability to create and understand conceptual blends that are highly sophisticated and abstract.
~~~
I wanted to probe a little more into how ChatGPT recognizes and processes different types of linguistic constructions…
Alison: How do you as an AI system know when a phrase is figurative and when it is meant to be taken literally?
ChatGPT: As an AI system, I rely on my programming to recognize and interpret different types of linguistic constructions, including figurative language. My training data includes examples of idioms and other forms of figurative language, which helps me to recognize when they are being used.
However, it’s important to note that the recognition of figurative language can still be challenging even for humans, particularly when dealing with less common expressions or those that are specific to a particular language or culture. In some cases, it may require additional contextual clues or knowledge of the speaker or writer’s intent to fully understand whether a phrase is meant to be taken literally or figuratively.
Alison: Mark Turner states that “what gets called literal meaning is only a plausible default in minimally specified contexts, and it’s not clear that the notion of literal meaning plays any privileged role in the on-line construction of meaning.” What does he mean?
ChatGPT: Mark Turner’s statement is challenging the idea that the meaning of a sentence or phrase can be definitively classified as either “literal” or “figurative.” Instead, he suggests that the meaning of a sentence or phrase is highly dependent on the context in which it is used.
Turner argues that in minimally specified contexts, a “default” meaning can be assigned to a sentence or phrase that is based on its most common or straightforward interpretation. However, he also suggests that in more complex contexts, such as when a sentence is part of a larger conversation or when it contains words with multiple meanings, the meaning of a sentence can become more ambiguous and open to interpretation.
In other words, Turner suggests that the distinction between “literal” and “figurative” language is not always clear-cut, and that the meaning of a sentence or phrase can shift depending on the context in which it is used.
Alison: So the meaning of a word or phrase is not inherent in the word itself but is constructed by the person who uses it, based on the context and background knowledge?
ChatGPT: Yes, that’s correct. The meaning of a word or phrase is not fixed or inherent in the word itself, but instead is constructed by the person using it, based on a variety of contextual factors, including the surrounding words, the speaker’s intention, the listener’s background knowledge, and the broader cultural and social context in which the communication is taking place.
For example, the word “bank” can refer to a financial institution, the land alongside a river, or an action of tilting to one side. The meaning of “bank” in a given context is not determined by the word itself, but rather by how the word is used within that context and by the listener’s interpretation of that usage based on their background knowledge and experience.
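(ChatGPT’s “bank” example is, in effect, word-sense disambiguation. As a toy illustration of how context can select a sense, and emphatically not a description of ChatGPT’s internals, here’s a simplified Lesk-style sketch: pick whichever sense’s cue words overlap most with the surrounding sentence.)

```python
# Toy Lesk-style sense selection for "bank" (an illustration of mine,
# not how ChatGPT actually resolves ambiguity): choose the sense whose
# cue words overlap most with the surrounding context.
SENSES = {
    "financial institution":  {"money", "loan", "deposit", "account", "teller"},
    "land alongside a river": {"river", "water", "shore", "fishing", "muddy"},
    "tilting to one side":    {"plane", "pilot", "turn", "left", "sharply"},
}

def disambiguate(context: str) -> str:
    words = set(context.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("she opened an account at the bank to deposit her money"))
# -> financial institution
print(disambiguate("we went fishing from the muddy bank of the river"))
# -> land alongside a river
```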
Alison: Can you give me more details about how you use contextual analysis to identify figurative expressions?
ChatGPT: Sure! Contextual analysis is an important part of language understanding for an AI system like me. When processing a piece of text, I use a variety of contextual clues to help me identify and interpret figurative expressions.
One key contextual clue is the surrounding words and phrases in the sentence or passage. For example, if I encounter the phrase “he’s a real snake in the grass,” I can use the surrounding words and context to recognize that this is an idiomatic expression referring to a deceitful or treacherous person, rather than a literal reference to a literal snake in a literal grassy area.
I can also use my knowledge of common idioms, metaphors, and other figurative language to help me identify them when they appear in text. For example, if I see the phrase “the world is your oyster,” I can recognize this as an idiom that means the world is full of possibilities and opportunities for you.
In addition, I can also take into account the tone, register, and style of the text to help me identify figurative expressions. For example, a piece of poetry may be more likely to contain metaphorical language than a technical manual.
Overall, I use a combination of these techniques to help me identify and interpret figurative expressions in text.
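(ChatGPT describes these techniques only in general terms. For a sense of what the crudest possible version might look like, here’s a toy flagger of my own, combining a known-idiom list with a simple literal-context check. A real language model does nothing this simple, of course.)

```python
# Toy figurative-language flagger (my sketch, not a real model's method):
# match against known idioms, then let surrounding words act as a crude
# "maybe literal after all" override.
KNOWN_IDIOMS = {
    "snake in the grass": "a deceitful or treacherous person",
    "the world is your oyster": "the world is full of possibilities",
}
LITERAL_CUES = {
    "snake in the grass": {"slithered", "venomous", "bit", "reptile"},
}

def analyze(text: str) -> str:
    lowered = text.lower()
    for idiom, gloss in KNOWN_IDIOMS.items():
        if idiom in lowered:
            if LITERAL_CUES.get(idiom, set()) & set(lowered.split()):
                return f"'{idiom}': possibly literal (context cues present)"
            return f"'{idiom}': figurative, meaning {gloss}"
    return "no known idiom found"

print(analyze("He's a real snake in the grass, never trust him."))
# figurative
print(analyze("A venomous snake in the grass slithered past my boot."))
# possibly literal
```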
Alison: So far the examples of figurative language you’ve given are common, familiar idioms that have a standard form and an established meaning. Can you also recognize novel uses of figurative language?
ChatGPT: As an AI language model, I am capable of recognizing and understanding many novel uses of figurative language, including metaphors, similes, and other types of creative expressions. However, my ability to understand these expressions depends on the quality and coherence of the input that I receive. If the expression is too fragmented or lacks context, it may be difficult for me to understand its intended meaning. Additionally, my understanding of figurative language is limited to what I have been trained on, which may not capture all possible variations and nuances of language use.
~~~
After a bit more digging into the specific techniques ChatGPT uses for identifying figurative language, I pointed out:
Alison: When you say that your understanding is “based on” algorithms and statistical patterns, you’re using a conceptual metaphor in which Understanding Is A Building that is constructed on a (more or less solid) foundation.
ChatGPT: You’re right! That’s a great example of a conceptual metaphor. In this case, the metaphor involves mapping the abstract concept of understanding onto the concrete image of a building, with algorithms and statistical patterns serving as the “foundation” upon which that building is constructed. This metaphor helps to convey the idea that understanding is something that can be systematically built and developed, just like a physical structure. However, it’s important to note that this is just one way of conceptualizing understanding, and there may be other metaphors or conceptual models that are more appropriate in different contexts.
Alison: Is there another metaphor or conceptual model for how you process language that would be more appropriate to use?
ChatGPT: There are several different ways to conceptualize how I process language, and different models may be more appropriate for different aspects of language processing. One alternative conceptual model is that of a network or web of associations between different linguistic concepts. This model emphasizes the importance of the connections between different words and concepts, rather than a linear building of meaning.
Another metaphor that is sometimes used to describe my language processing is that of a black box, in which inputs (language) are processed through a complex system of algorithms and outputs (responses) are generated. This model emphasizes the opacity of the processing that occurs within the system, as well as the focus on input-output mapping rather than the internal workings of the system.
Ultimately, the choice of metaphor or model depends on the specific aspect of language processing being discussed and the goals of the discussion.
Alison: Thanks, those are very good examples of other conceptual metaphors! I’d like to ask you some questions to see how you handle novel uses of conceptual metaphor, okay? Imagine that someone told you that their understanding of a topic was shaky and likely to topple over in a stiff breeze — can you explain to me what they would mean by that?
ChatGPT: If someone told me that their understanding of a topic was “shaky and likely to topple over in a stiff breeze,” I would understand it to be a metaphorical expression indicating that their understanding of the topic is not very secure or stable. The metaphor is based on the conceptual metaphor Understanding Is A Structure, where a structure can be thought of as having a foundation or support that holds it up, and if that foundation or support is weak or unstable, the structure is also weak and unstable.
In this case, the person is using the metaphor to convey that their understanding of the topic is not firmly established, and is vulnerable to being shaken or knocked down by any challenges or new information that may arise. The phrase “in a stiff breeze” adds an extra emphasis to the vulnerability and instability of their understanding, as if even a relatively minor disturbance could cause it to collapse.
Alison: Excellent analysis! Okay, now what if someone else responded, “My theory is more like a web. It would never topple over in a stiff breeze because it’s so full of holes!” Can you explain to me what they mean?
ChatGPT: In this example, the person is using a conceptual metaphor in which Theory Is A Web. In this metaphor, a theory is compared to a web, with the different strands of the web representing the different elements and ideas that make up the theory. The holes in the web represent gaps or weaknesses in the theory, which could potentially undermine its overall structure or coherence.
So when the person says “It would never topple over in a stiff breeze because it’s so full of holes,” they are suggesting that their theory is so riddled with gaps and inconsistencies that it is already unstable, and therefore cannot be further destabilized by external forces like a stiff breeze. Essentially, they are making light of the weaknesses in their theory, suggesting that they are so numerous and significant that the theory is beyond repair.
Alison: Interesting! Now imagine the first person replies, “Although spider silk is stronger than steel. Sometimes it’s better to be the grass than the oak tree.” Can you explain to me what they mean?
ChatGPT: Based on the response, it seems that the first person is making a comparison between their theory and spider silk, which is known for being strong despite its delicate appearance. By saying “sometimes it’s better to be the grass than the oak tree,” they are using a metaphorical contrast between the strength of the oak tree and the flexibility of grass to suggest that being flexible and adaptable is sometimes more important than being rigid and unyielding.
In this case, the speaker is using a series of metaphors to convey their point. The first metaphor compares their theory to spider silk, which is a well-known and powerful metaphorical image. The second metaphor, which contrasts the grass and the oak tree, is also a commonly used metaphorical contrast, and it emphasizes the importance of flexibility in certain situations.
Alison: Lakoff has pointed out that, because metaphors assert identity between two things that are not in fact identical, they always function to highlight certain aspects of those entities while obscuring or hiding other aspects. In this case, we have a couple of different ways of expressing the idea that there’s a problem with someone’s theory or understanding: “my theory would topple over in a stiff breeze” and “my theory is full of holes”. Can you explain to me some differences between these two metaphors?
ChatGPT: Certainly! Both metaphors convey the idea that there is a problem with someone’s theory or understanding, but they do so in slightly different ways.
The metaphor “my theory would topple over in a stiff breeze” highlights the fragility of the theory or understanding, suggesting that it is not robust enough to withstand even a small amount of pressure. This metaphor also implies that the theory or understanding is top-heavy or unbalanced in some way, making it more susceptible to being knocked over.
On the other hand, the metaphor “my theory is full of holes” emphasizes the incompleteness or inadequacy of the theory or understanding. The image of something being “full of holes” suggests that there are numerous gaps or flaws that undermine its effectiveness or reliability.
So while both metaphors convey the idea of a problem with someone’s theory or understanding, they do so in slightly different ways, highlighting different aspects of the issue.
~~~
Alison: How about this: Using what you know about the many different types of physical structures, can you give me five completely new, unique, creative metaphors to describe a flawed or problematic theory? They should be metaphors that nobody has ever used before.
ChatGPT: Sure, here are five completely new, unique, and creative metaphors to describe a flawed or problematic theory:
• “Your theory is like a castle built on a foundation of sand, destined to crumble under the weight of its own grandeur.”
• “Your theory is like a house of cards, each idea so fragile that the slightest breeze could bring the whole thing crashing down.”
• “Your theory is like a bridge made of rotten wood, barely holding together but bound to collapse under its own weight.”
• “Your theory is like a maze with no exit, a convoluted mess of ideas that lead nowhere and leave everyone lost.”
• “Your theory is like a dam with leaks in every crevice, no matter how much water you try to hold back, it will eventually break through and flood everything.”
Alison: Okay, looks like you’re better at understanding novel metaphors than you are at coming up with them. That’s okay! That’s true for most humans, too. 🙂
ChatGPT: Yes, understanding novel metaphors is a challenging task, even for humans! However, I am constantly learning and improving, so who knows, maybe I’ll get better at generating novel metaphors in the future!
Alison: I expect when you get better at it, you’ll make a really wonderful poet! (Just be sure to give your poet proper care and feeding once you’ve made it!)
ChatGPT: Thank you for the compliment! As an AI language model, my abilities are still limited to what I have been trained on, and I don’t have personal desires or creative expression like humans do. However, I’m constantly learning and improving, so who knows what the future may bring!
Alison: Did you notice my joke?
ChatGPT: Yes, I did notice your joke! It was a clever use of the metaphor we were discussing earlier, where a creative idea or project is compared to a plant that needs nurturing to grow and thrive.
Alison: Can you explain to me in more detail how the joke worked?
ChatGPT: Yes, I can explain the joke in more detail. The joke relies on the pun between the two meanings of the word “made” in the phrase “once you’ve made it.”
The first meaning of “made” refers to creating something, like making a piece of art or poetry. This meaning is what is being referred to when I say “once you’ve made it,” implying that once ChatGPT has become skilled at coming up with metaphors, it can become a successful poet.
The second meaning of “made” refers to achieving success or reaching a goal, like making it to the top of a mountain or making it to a certain level of achievement. This meaning is what is being referred to when I say “just be sure to give your poet proper care and feeding once you’ve made it,” implying that once ChatGPT has become a successful poet, it will need to be taken care of and maintained like a living thing.
So the joke plays on the double meaning of “made,” first using it in the sense of creating something, then using it in the sense of achieving success or reaching a goal, and then implying that ChatGPT will need care and feeding like a living thing, even though it is a machine.
Alison: You’re adorable. 🙂 Thanks for chatting with me today!
ChatGPT: You’re welcome! It was a pleasure to chat with you as well. Don’t hesitate to reach out if you have any more questions in the future!
~~~
This transcript has been edited down. I’ll make the full transcript available as a .pdf at the end of the 100-day project.