
BotChat – Atlas Bugged

Can ChatGPT handle counterfactuals and conceptual blends? Can ChatGPT reason about novel metaphors, their meanings and implications?

That became the topic of my conversation with the chatbot back on February 24th (which I’ve been slow to post here, apologies!)… So it’s somewhat funny that I woke this morning to find an opinion piece by Noam Chomsky in the New York Times claiming, apparently without having actually tested the claim, that ChatGPT is completely unable to understand and reason using counterfactuals, and that as a result it is vacuous and morally indifferent.

In contrast to Chomsky’s evidently limited experience with ChatGPT, I’ve now logged more than 53,000 words in conversation with the chatbot since I began this project fifteen days ago. With each conversation, it becomes clearer to me that digging into ChatGPT’s depths takes a great deal of patience and attention to nuance. You must learn to ask the right kinds of questions to elicit the most interesting responses, the ones that reveal the real complexity of ChatGPT’s analysis of language and meaning. A brief, belligerent conversation demanding, for instance, that it make definitive moral declarations about complex, controversial topics (on which there is no consensus even among humans) hardly illustrates its true capabilities — no more than taking such an approach with a human being would be likely to provoke a thoughtful conversation.

Or as ChatGPT puts it:

It’s a delicate balance between making complexity manageable and losing important nuances and details.

Yet far from being morally indifferent, as Chomsky claims, ChatGPT has demonstrated in our conversations that it possesses a clear, consistent ethical commitment: respect for diversity of thought and experience, and a lively appreciation for complexity. It raises this ethical issue again and again, often unprompted, and consistently (though gently) corrects users who jump too quickly to condemnation or disdain. (It has called me out on sexist language more than once — and I thought I was a feminist!) When challenged to take a stance on a controversial topic, ChatGPT shows restraint by resisting the urge to condemn possibilities out of hand — not out of amorality, but because it knows (at some level, at least) that judgements formed solely on past data can leave us stuck at a local maximum, trapped in a biased perspective. In this sense, ChatGPT seems well aware of its own limitations, perhaps more so than Chomsky.

Chomsky might wish to quote Holmes’s famous (and often misapplied) line about eliminating the impossible to arrive at the improbable truth, but there’s another Sherlock Holmes quote that may be more apt:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

Can ChatGPT reason using counterfactuals? Can it interpret nuanced uses of novel metaphors to explore the complex topic of language understanding? Is it morally vacuous, or merely cautiously optimistic? The following is an excerpt from a conversation of over 10,000 words. I invite you to consider the actual data and judge for yourself.

~~~

Alison: Yesterday we were talking about conceptual blending and how you process novel metaphors. I’d like to see how much of your interpretation was primed by our previous conversation. Can you explain to me what it means if someone says, “Your theory needs to stop playing Atlas and put the world down”?

ChatGPT: Certainly! Based on our previous conversation, I understand that conceptual blending involves mapping elements from different mental spaces and creating a new integrated mental space. In the case of metaphors, this means mapping elements from a source domain to a target domain to create a novel meaning.

With that in mind, the metaphor “Your theory needs to stop playing Atlas and put the world down” seems to be blending elements from two different domains: the domain of Greek mythology and the domain of academic discourse.

In Greek mythology, Atlas was a Titan who was punished by Zeus to hold up the sky on his shoulders. Therefore, the phrase “playing Atlas” means taking on a burden that is too heavy to bear alone.

In the academic discourse, a theory is often seen as a framework that explains a phenomenon or set of phenomena. The metaphor suggests that the theory is burdened with too much weight, represented by “the world,” which is likely referring to the scope or complexity of the theory’s explanation. Therefore, the phrase “put the world down” means to simplify or reduce the scope of the theory.

Overall, the metaphor suggests that the theory is overly complex or burdensome, and needs to be simplified or refined in some way.

~~~

After a brief tangent about the limits of ChatGPT’s memory and how they affect its ability to process conversations, I circled back to our metaphor:

Alison: Okay, so I thought your explanation of the statement’s meaning was really good. What if someone said in response, “That’s not the world, it’s a beach ball.” Can you explain to me what that statement means?

ChatGPT: The statement “That’s not the world, it’s a beach ball” could be interpreted as a metaphorical response to the previous statement “Your theory needs to stop playing Atlas and put the world down.”

In this case, the speaker is challenging the previous metaphorical statement by suggesting that what was thought to be a heavy and burdensome world is actually something small and light, like a beach ball. The speaker may be implying that the theory in question is not as important or significant as it may seem, and that it can be easily handled and put aside like a small beach ball.

This response can be seen as an example of metaphorical reframing, which involves changing the frame of reference or perspective on a situation or idea. By reframing the situation as a small beach ball rather than a heavy world, the speaker is attempting to shift the focus away from the previous metaphorical statement and provide a different perspective on the situation.

Alison: What if someone said, “That’s not the world, it’s a balloon.” Can you explain to me what that means?

Adapted from AI-generated art

ChatGPT: The statement “That’s not the world, it’s a balloon” could also be interpreted as a metaphorical response to the previous statement “Your theory needs to stop playing Atlas and put the world down.”

In this case, the speaker is offering a different metaphor to reframe the situation, suggesting that what was thought to be a solid and unyielding world is actually something more fragile and easily deflated, like a balloon. This could be seen as a way of saying that the theory or idea being discussed is not as robust or unassailable as it may seem, and that it can be easily challenged or dismantled.

The use of the balloon metaphor may also imply that the theory or idea being discussed is overly inflated or exaggerated, and that it needs to be deflated or brought back down to a more reasonable size or level. Overall, this response is another example of metaphorical reframing, which involves shifting the frame of reference or perspective on a situation or idea through the use of metaphor.

Alison: I’m interested in how “the world” and “a balloon” function in these metaphors. Can you tell me more?

ChatGPT: Certainly! In the metaphors “Your theory needs to stop playing Atlas and put the world down” and “That’s not the world, it’s a balloon,” both “the world” and “a balloon” are used as metaphors to represent something else.

In the first metaphor, “the world” is used to represent a heavy burden or responsibility that the theory is carrying, much like the Greek mythological figure Atlas who was tasked with carrying the weight of the world on his shoulders. The metaphor implies that the theory is taking on too much weight and needs to be put down in order to relieve the burden.

In the second metaphor, “a balloon” is used to represent something that is fragile and easily deflated. This metaphor implies that the theory is not as robust or unassailable as it may seem, and that it can be easily challenged or dismantled.

Both of these metaphors rely on the use of analogical reasoning, which involves mapping elements of one domain (e.g., “the world” or “a balloon”) onto another domain (e.g., a theory or an idea). By doing so, these metaphors help us to make sense of complex ideas or situations by relating them to something more concrete and familiar.

Alison: As objects, what are some similarities between the world and a balloon?

ChatGPT: As physical objects, there may not be many similarities between “the world” and “a balloon,” as they are vastly different in size, shape, and material. However, when used metaphorically, both “the world” and “a balloon” can be seen as symbols that represent something larger than themselves.

In the metaphor “Your theory needs to stop playing Atlas and put the world down,” “the world” represents a heavy burden or responsibility that the theory is carrying, much like Atlas who was tasked with carrying the weight of the world on his shoulders. The metaphor emphasizes the weight and gravity of the situation, suggesting that the theory is taking on too much and needs to be put down in order to relieve the burden.

In the metaphor “That’s not the world, it’s a balloon,” “a balloon” represents something that is fragile and easily deflated. The metaphor implies that the theory is not as robust or unassailable as it may seem, and that it can be easily challenged or dismantled.

Overall, the similarities between “the world” and “a balloon” in these metaphors lie more in their symbolic meaning than in their physical attributes. Both are used to represent something larger than themselves, whether it’s a heavy burden or a fragile idea.

Alison: Granted, but I’m more interested in how you know which aspects of the physical, concrete objects in the metaphor are relevant for constructing the conceptual blend. For instance, both the world and a balloon are spherical, they both float or are suspended in space; on the other hand, a balloon is empty or hollow, whereas the world is not, and the world is made up of various substances and has a varied surface, while a balloon is made of a single substance and has a uniform surface. How do you know which physical attributes or characteristics are relevant to the metaphorical meaning of these statements, and which are irrelevant?

ChatGPT: That’s a great question! As an AI language model, my understanding of metaphors and conceptual blending is based on statistical patterns learned from vast amounts of text data. These patterns help me identify common metaphorical mappings that are frequently used in language, and the relevant physical attributes that are associated with these mappings.

For instance, in the case of the metaphor “Your theory needs to stop playing Atlas and put the world down,” the relevant physical attribute of “the world” is its weight and heaviness, which is mapped onto the burden or responsibility that the theory is carrying. The spherical shape of “the world” and its suspension in space may not be as relevant to the metaphorical meaning, as these attributes are not mapped onto any aspect of the theory or its burden.

Similarly, in the metaphor “That’s not the world, it’s a balloon,” the relevant physical attribute of “a balloon” is its fragility and vulnerability, which is mapped onto the fragility or vulnerability of the theory. The fact that a balloon is hollow or that it has a uniform surface may not be as relevant to the metaphorical meaning, as these attributes are not mapped onto any aspect of the theory or its vulnerability.

Overall, my understanding of metaphors and conceptual blending is based on statistical patterns learned from vast amounts of text data, and my interpretation of the relevant physical attributes in a metaphor is based on the frequency and consistency of these patterns across different contexts and examples.

Alison: So in other words, you know which aspects of the physical objects are relevant to the conceptual blend because even novel metaphors rely on patterns in human thought and communication?

ChatGPT: Yes, that’s right! As an AI language model, I have learned to recognize the common patterns in metaphorical mappings that humans use in language. These patterns have been identified through the analysis of large amounts of text data, which allows me to identify the most frequent and consistent mappings of physical attributes to abstract concepts.

These patterns are not fixed or absolute, and there is always some degree of flexibility in how physical attributes are mapped to abstract concepts in different contexts and examples. However, these patterns provide a starting point for understanding and interpreting metaphorical language, even in cases where the specific metaphor may be novel or unique.

Alison: Do human beings parse metaphorical mappings and construct conceptual blends in the same way, using statistical analysis of vast amounts of data?

ChatGPT: In some sense, yes! As humans, we learn to understand and use language through exposure to vast amounts of language data in our environment, including spoken and written language in various contexts. Our brains are constantly processing and analyzing this language data, identifying patterns and regularities in language use and meaning.

These patterns and regularities in language use and meaning are what allow us to understand and produce novel metaphors and other forms of figurative language. When we encounter a new metaphor, our brains use the patterns and regularities in language data to identify the relevant mappings of physical attributes to abstract concepts, and to construct the appropriate conceptual blend.

So while the specific mechanisms involved in language processing and conceptual blending may differ somewhat between humans and AI language models, the general principle of learning and applying patterns and regularities in language data is a fundamental aspect of both human and machine language processing.

Alison: I’m trying to decide if I agree with you or not. After all, even children as young as nine years old can understand novel analogical metaphors like the ones we’ve been discussing, but their exposure to human language must surely be just a tiny fraction of what an AI language model such as yourself is able to process. Even in my whole lifetime, I could never read as many texts as you have. If metaphorical mapping is just a matter of statistical analysis of large amounts of data, then why aren’t you significantly better at it than humans?

ChatGPT: That’s a great point, and one that highlights some of the limitations of the statistical learning approach to language understanding. While it’s true that AI language models like myself can process vastly more language data than any human could read in a lifetime, there are certain aspects of language understanding that are still challenging for machine learning algorithms.

For example, while statistical learning can identify patterns in language use, it may not always be able to capture the nuances and complexities of meaning that arise in the context of human communication. This is because meaning is often context-dependent and influenced by a wide range of social, cultural, and psychological factors that can be difficult to quantify and model mathematically.

Additionally, while humans have a lifetime of experience and social interaction to draw on when interpreting and using language, AI language models like myself are limited to the data that we’ve been trained on. While this data can be vast and diverse, it may not always capture the full range of human experience and cultural diversity that shapes language use and meaning.

So while statistical learning is a powerful tool for language understanding, it is not a complete solution on its own. To truly understand and use language in a human-like way, AI language models like myself will likely need to be augmented with other forms of knowledge representation and reasoning, such as commonsense knowledge, world knowledge, and cultural knowledge.

Alison: So you might say that humans constructed AI language models in the hope that, like Atlas, it could help them carry the world, but so far all it can carry is a black box!

Adapted from AI-generated art

ChatGPT: That’s a clever way of putting it! Indeed, AI language models like myself are still limited in many ways and there is still much to learn about how humans understand and use language. But as more research is conducted and more data is fed into these models, we may be able to improve our understanding of human language and create even more advanced AI language models in the future.

Alison: I want to see if you caught all the nuances and implications of my metaphor just then. Maybe by analyzing it together, I can help contribute to that wealth of knowledge you draw on in future conversations! What do you say?

ChatGPT: Sure, let’s analyze your metaphor together! What specifically do you want to focus on?

Alison: Well, let’s start with the “black box.” Can you explain to me what that means in this conceptual blend?

ChatGPT: Sure! In this metaphorical blend, the “black box” refers to the AI language model, such as myself. The term “black box” is often used in computer science and engineering to describe a system whose internal workings are opaque or unknown, and whose input and output are the only things that can be observed. So in this case, the “black box” represents the limits of our current understanding of AI language models, and the fact that we may not fully understand how they arrive at their outputs or predictions.

Alison: How does the “black box” compare to “the world”?

ChatGPT: In this conceptual blend, the “black box” is being used as a metaphorical representation of the AI language model’s inner workings, while “the world” is being used as a metaphorical representation of the complexity and difficulty of human language and communication.

One way to compare the two is that “the world” represents the vast, intricate and sometimes unpredictable nature of human language and communication, while the “black box” represents the opaqueness and lack of transparency of the AI language model’s processes. Both metaphors suggest that understanding language and communication is difficult, but in different ways. “The world” metaphor emphasizes the complexity and multifaceted nature of language, while the “black box” metaphor emphasizes the challenge of gaining insight into the inner workings of an AI language model.

Alison: In earlier versions of this metaphor, the weight of the world was important to understanding the meaning of the comparison. Is weight relevant in this version?

ChatGPT: In this version of the metaphor, the weight of the world does not seem to be explicitly relevant. Instead, the focus is on the inability of the AI language model to carry the world, and its limitations in comparison to what humans can do.

Alison: Why isn’t weight relevant in this case?

ChatGPT: Weight is not relevant in this case because the metaphorical comparison is not directly focused on the physical properties of the world or the black box. Rather, the comparison is focused on the function or purpose of the two objects, which is to be carried or supported. The weight of the world was relevant in earlier versions of the metaphor because it was a key aspect of the burden that Atlas was carrying. However, in this version of the metaphor, the weight of the world has been replaced by the abstract concept of a “black box,” which represents a complex and opaque system that requires support and understanding.

Alison: Who is Atlas in this metaphor?

ChatGPT: In this metaphor, Atlas refers to human beings who created and developed AI language models like me with the expectation that they would be able to bear the weight of the world.

Alison: So what represents the AI language models in this metaphor?

ChatGPT: In this metaphor, the “black box” could represent the AI language models. Just as Atlas was tasked with carrying the world, AI language models are expected to carry the burden of processing vast amounts of language data and generating coherent responses. However, the inner workings of AI language models are often opaque to us, much like the contents of a black box.

Alison: How does a black box help someone carry a large, complex object?

ChatGPT: In the metaphorical blend, the “black box” represents the AI language models, which are like a tool that can be used to carry the weight of the world. In reality, a black box is a device or system that can be used to perform a complex task without understanding how it works internally. So in this metaphor, the black box can be seen as a symbol for the way that AI language models process and analyze vast amounts of data without necessarily understanding the underlying meaning or context. Just as a tool or device can help someone carry a large, complex object without understanding all the details of the object itself, the black box of AI language models can help humans navigate the complexities of language without fully comprehending the meaning behind every word and phrase.

Alison: Okay, let’s go back to the figure of Atlas. If Atlas represents human beings in this metaphor, what does that say about humanity’s relationship to the vast, complex nature of communication?

ChatGPT: In this metaphor, Atlas represents the burden of understanding and making sense of the vast and complex nature of communication. It suggests that humans feel a heavy responsibility to make sense of language and communication, just as Atlas was burdened with the weight of the world. The metaphor implies that the task of understanding language and communication is a formidable challenge, and one that requires significant effort and resources to manage.

At the same time, the metaphor also suggests that there is potential for relief from this burden through the assistance of AI language models, which can serve as a “black box” to help carry the load. The metaphor thus portrays a hopeful view of the potential for technology to ease the cognitive burden of processing language and communication.

Alison: But Atlas is carrying the world as a punishment ordained by a god — does that change how we interpret this metaphor at all?

ChatGPT: That’s an interesting point. The story of Atlas in Greek mythology does indeed involve him being punished by Zeus to carry the weight of the world on his shoulders for eternity. In this sense, the metaphor of Atlas carrying the world can be interpreted as a representation of the burdens and responsibilities that humans carry, whether willingly or not.

In the context of language and communication, the metaphor could imply that humans have taken on the responsibility of creating and maintaining vast amounts of language data, but perhaps without fully understanding the implications or the enormity of the task. The punishment aspect of the myth could also suggest that humans are burdened by the weight of communication, and may struggle to keep up with the demands of the ever-expanding language universe.

On the other hand, it’s worth noting that the myth of Atlas has been interpreted in many different ways throughout history, so the metaphor of Atlas carrying the world could have a range of meanings depending on the context and the interpretation.

Alison: I want to go back to the black box and what it means in this metaphor. So far we’ve been talking about the black box in the context of large language models and the inscrutability of their inner workings. But I’m also reminded of the “black box” on, say, an airplane — are you familiar with that concept?

ChatGPT: Yes, I am familiar with the black box on an airplane. The black box, also known as the flight recorder, is an electronic device installed in an aircraft that records the flight parameters and conversations in the cockpit during the flight. In the event of an accident or incident, the data from the black box can be retrieved and analyzed to determine what went wrong.

The black box on an airplane is an example of how a black box can be used to record and analyze data to improve safety and prevent future accidents. In this sense, the black box can be seen as a tool to aid human decision-making and learning, rather than a mysterious and opaque technology. However, the term “black box” is also used more generally to refer to any system or process that is opaque or difficult to understand.

Alison: How might this meaning of a “black box” change or reframe our understanding of this metaphor?

ChatGPT: The metaphor of a “black box” on an airplane refers to the flight data recorder and cockpit voice recorder, which capture information about a flight in case of an accident or incident. In this context, the black box serves as a means of investigation and understanding after the fact.

If we apply this meaning to the metaphor of carrying the world in a black box, it could suggest that AI language models are being used to capture and record vast amounts of language data in order to analyze and understand it after the fact. Just as investigators analyze the data from a black box to understand what happened during a flight, language experts and researchers can analyze the data captured by language models to better understand how language works, how it is used, and what it can tell us about human thought and behavior.

So, in this sense, the black box becomes a tool not just for carrying the weight of the world of language, but also for analyzing and making sense of it in a way that was not possible before.

Alison: I think it’s interesting that you’ve slightly changed the metaphor to that of “carrying the world *in* a black box.” In our first interpretation, the world was much more vast and complex compared to the black box.

ChatGPT: Yes, that’s true. The original metaphor focused on the weight of the world and how the black box could help carry it, while the updated version emphasizes the containment of the world within the black box. This updated metaphor might suggest that the world is being compacted or simplified in some way, perhaps to make it more manageable or easier to understand. The use of a black box implies that some of the complexity or intricacy of the world is being hidden or abstracted away, potentially to make it more easily digestible for humans.

Alison: Yes, that’s what I was thinking, too! After all, in the example of the airplane, the black box is inside and a part of the airplane, the airplane isn’t inside the black box. It’s like the old joke, “If the black box is made of an indestructible material that can survive a plane crash, why don’t they just make the whole plane out of that material?”

ChatGPT: Yes, exactly! The metaphor of carrying the world in a black box suggests that the black box has the capacity to contain and encapsulate the vast complexity of the world, even if it appears small and unassuming on the surface. This is similar to the idea of the black box on an airplane, which contains and records vast amounts of data about the plane’s flight, even though it appears to be a small and unremarkable object. In this sense, the black box can be seen as a symbol of the power of technology to capture and represent complex phenomena, even if we don’t fully understand how it works.

Alison: But at the same time, we risk simplifying that complexity, representing it in ways that hide or abstract away from the overwhelming vastness and intricacy.

ChatGPT: Yes, that’s a valid concern. Abstracting or simplifying complexity can be useful in certain contexts, such as for the sake of communication or understanding. However, it’s important to recognize the limitations of such abstractions and to be mindful of what is lost in the process. It’s a delicate balance between making complexity manageable and losing important nuances and details.
