After discussing speech production, Dennett shifts his focus to how we report and express our thoughts. The problem with reporting is that it threatens to undo the parallel models Dennett worked to establish in previous chapters. The Multiple Drafts model, with its collaborating demons, could collapse back into another Cartesian Theatre in which we choose one thought from many candidates to express or report, because there has to be a final choice. But who chooses? A central decision maker?
Dennett’s virtual critic ‘Otto’ suggests: “My conscious life is private, but I can choose to divulge certain aspects. I can decide to tell you various things about my current or past experience.” For Otto, words can be woven together into reports of a mental state, and it can seem as if there is a decision maker behind them. Yet this happens while different processes cause multiple candidate words to occur to us, even when only one is finally produced.
Dennett then lays out the possible layers of expression: beliefs about beliefs, beliefs about desires, hopes about fears, and so on. Beliefs are dispositional states, while thoughts are events, or episodic states. We could be in pain (the first level); then there could be a report of that pain; and finally an expression of that report in language: “I am in pain”. Beyond that there are higher orders of thought, like thinking about thinking. However, all these thoughts and beliefs together form a common-sense system of saying what you think, in which reporting is the first order of thought.
Before going further, it is worth mentioning Dennett’s exchange about the hierarchy of thoughts, beliefs, and expression with the American philosopher David M. Rosenthal, who has written on folk psychology. According to Rosenthal, a mental state is “… conscious in virtue of there being another mental state, namely, a thought to the effect that one is in the first state”. Rosenthal introduced the higher-order thought theory of consciousness. On this theory, the mental states we are aware of are those targeted by higher-order thoughts: consciousness happens when a higher-order thought is directed at a lower-order state, and lower-order thoughts on their own remain unconscious. But is there a conflict between the higher-order thought theory of consciousness and Dennett’s theory of consciousness?
Rosenthal’s ordering of thought might imply a strict delineation between what we are conscious of and what we are not. It might also imply a consciousness threshold at which hypothetical zombies or robots become as conscious as humans. Speaking robots like Dennett’s ‘Shakey’ have internal, unconscious mental states. Shakey can ‘find himself with things to say’, as Dennett puts it, rather than figuring out and then expressing what he wants to say. The same argument could be applied to us; the difference is that we have a far greater array of things to say, without knowing why we want to say them. What, then, if Shakey, or Zimbo (Dennett’s hypothetical zombie), had higher-order states? What if there were a more complex zombie or robot that could pass the Turing test? Should we consider them conscious in the same way we are, based on Rosenthal’s ordering?
Having Zimbo pass the Turing test doesn’t make it conscious like us, even if it appears that way. This is the second problem with Rosenthal’s theory. We experience the illusion of Zimbo’s consciousness in the same way we do when we witness the complexity and power of a computer without truly understanding how it works; today, we might experience a similar illusion with ChatGPT. The order of thoughts is likewise an illusion we have about our own minds. If reporting in robots like the Mark II CADBLIND doesn’t require higher-order thoughts or a Cartesian Theatre, then we don’t need this illusion either. Dennett’s virtual-machines metaphor shows that we have little access to what actually goes on in our brains.
It would be interesting to revisit Dennett’s view on robots after the development of language models over 2010–2020. In an article in The Atlantic and an interview with Tufts Now, Dennett seems more concerned with the vandalism AI can do to trust between humans than with scenarios of AI taking over or controlling us.
There is another reason to reject the order-of-thoughts theory: the generation of this ordering is not unique to anything reviewed so far in the book. The speech model of chapter eight already captures the process of producing speech and listening to one’s own thoughts. That process generates the reflective system of thoughts and replaces the central observer assumed by theorists like Rosenthal. Talking to yourself seems to be what creates this ordering. However, Dennett raises doubts about this idea on two counts: which subsystems are talking to each other, given that we are not describing a central “self”? And is language our only means of self-manipulation?
Errors, and the other possible connections between the different stages of thoughts and beliefs, highlight further flaws in Rosenthal’s model. What if we make a mistake in our expression? What if we think of something other than the thought that should follow from our subjective experience (pain, then the pain report, then the final linguistic expression of that report)? Moreover, there are far more layers between a subjective experience and its linguistic expression than just two or three, and every one of these layers can go wrong or fail to occur at all: someone might not express the thought he has, or might express something else entirely. All of this should make us think twice about the theory. Stalinesque and Orwellian memory errors show that we don’t report the states of our mind; we report what we remember. How can these orders of thought work here? Hierarchies of higher-order states and beliefs show us what people believe they think, not what they think. Our minds aren’t necessarily that ordered. We don’t need this hierarchy, but rather an event system that looks into the content of the entities and the ways of expressing them.