
The Computational Theory of Cognition

Materialism in the philosophy of mind is often divided into reductive and non-reductive theories. Non-reductive materialism holds that, while there are no concrete individuals in the world other than material particles (Kim, 1993), some physical things exhibit properties that cannot be reduced to physical properties.

The primary motivation of the computational theory of cognition (CTC) is to provide a representational account of the mind. Non-reductive materialism states that some properties cannot be reduced, and these non-reducible properties are what the non-reductive materialist labels mental properties. Any successful non-reductive materialist theory must therefore demonstrate that there is indeed something ‘mental’ over and above something ‘physical’, and that these mental properties can make a causal contribution to what physically occurs in the world. The CTC explains this through the wetware of the brain: it claims that the mind contains ‘symbols’ or ‘representations’ which are composed of components of the brain such as neurons. The idea is that these symbols, composed of wetware, represent states of affairs, such as ‘it is snowing outside’ or ‘John loves Mary’.

Under the CTC, mental properties are higher-order properties concerning the functional organisation of lower-order physical properties. Mental states are therefore “functional relations to mental symbols”, and mental processes are just “computational processes defined over the mental symbols” (Antony, 2007). This preserves the token identity of the mental and the physical that the non-reductive materialist requires, by appealing to explanatory rather than ontological reduction (Kim, 1993; Crane, 2001). Explanatory reduction means that when I reduce X to Y I gain a better understanding of X; this lets the CTC argue for an explanatory relation between mental and physical properties while maintaining that there is no type-identity reduction between the two (Antony, 2007).

What can be seen here is that the CTC regards itself as a representational theory. To judge whether the CTC is successful, we need to establish (i) why being a representational theory is a worthwhile endeavour, and (ii) whether the CTC succeeds in being one. Rey’s (1991) ‘explanatory budget’ of our mental life identifies four features of that life which any theory of cognition must explain.

1. Intentional existence: when we desire something, we as agents can imagine the thing that would satisfy our desire, even if that thing does not yet exist, or never will.

2. Opacity: our actions reflect the way we believe the world to be rather than the way the world actually is, because we represent the world’s features to ourselves.

3. Reasoning and deliberation: we reflect on our desires and on our beliefs to logically determine our next course of action, and through reflection on what we believe we sometimes form new beliefs.

4. Predictive power: the attribution of mental states allows us to accurately predict things that we otherwise could not, such as the actions of others.

Representations allow us to explain these four features. The non-reductive materialist would argue that because we as agents have the cognitive capacities for intentional existence and opacity, we must be able to represent states of affairs: to represent entities that do not exist, or to represent the world in ways personal to myself, I must have access to representations. It also follows that representations have causal powers matching their semantic properties, which allows causal power to pass from the mental to the physical and thereby explains reasoning, deliberation and our predictive power. To be a successful theory of mind, then, it is desirable to be a representational theory.

The CTC is representational in virtue of appealing to computational representations: physical representations that therefore have physical properties. Computational representations have syntactic properties, but they also have semantic properties based on their relations to one another. This form of representation gives the CTC an advantage with respect to causation. We want to allow for causation when we talk about cognition, because it seems as though our mental and physical states can causally affect each other. The semantic properties of computational representations are altered when the syntactic properties of their components are changed, which is how the causal powers of these representations become apparent. In terms of the CTC, when the brain changes a neural representation and its syntactic properties, the representation’s semantic properties change in step, so the mental content is causally affected by the physical change; a sketch of this idea follows below.
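To make the syntax/semantics relationship concrete, here is a minimal sketch, assuming a toy ‘mentalese’ of my own invention rather than anything from the CTC literature. The inference rule below manipulates only the shape of the symbols, never their meanings, yet whatever it derives from true premises remains true under the toy interpretation.

```python
# A minimal sketch, assuming a toy "mentalese" (nothing here is drawn from the
# CTC literature). Propositions are nested tuples of symbols; modus_ponens is
# a rule defined purely over the *shape* of those symbols, yet it respects
# their *semantics*.

def modus_ponens(premises):
    """Purely syntactic rule: from ('IMPLIES', P, Q) and P, derive Q.

    The rule only pattern-matches on symbol structure; it never consults
    what the symbols mean.
    """
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for rep in list(derived):
            if isinstance(rep, tuple) and rep[0] == 'IMPLIES' and rep[1] in derived:
                if rep[2] not in derived:
                    derived.add(rep[2])
                    changed = True
    return derived


def true_in(world, rep):
    """Semantic property: is this representation true in the given world?"""
    if isinstance(rep, tuple) and rep[0] == 'IMPLIES':
        return (not true_in(world, rep[1])) or true_in(world, rep[2])
    return rep in world


# Toy world and premises: "it is snowing" and "if it is snowing, the ground is white".
world = {'it_is_snowing', 'ground_is_white'}
premises = {
    'it_is_snowing',
    ('IMPLIES', 'it_is_snowing', 'ground_is_white'),
}

for rep in modus_ponens(premises):
    # Syntactic manipulation has tracked semantic content: every derived
    # representation is true in the world the premises describe.
    assert true_in(world, rep)
    print(rep, '->', true_in(world, rep))
```

The design point is just the one the CTC trades on: a process that only ever ‘looks at’ syntax can still preserve semantic properties such as truth.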

Let us question this, however: how can neurons represent anything at all? Consider an example of a representation. I am a man standing in the middle of a field. To my right, a dog approaches, and in front of me, a woman on a bike passes from left to right. We can describe all of these components in different ways; for instance, I am a collection of atoms, the field is a flat mass of grass and earth, and so on. All of these descriptions concern the syntactic properties of the entities involved. Now imagine a second system: a pencil stands on a piece of cardboard, to its right an eraser approaches, and in front of it a pencil sharpener on top of a stapler crosses from left to right. The first arrangement can be represented by the second, an arrangement with entirely different syntactic properties. It does not matter what the representation is composed of; the semantic content still comes across. It would not matter that the symbol is composed of a collection of neurons and their corresponding electrical signals; whether it is composed of neurons, chocolate, or only the purple fruit pastilles, what matters is that it can still semantically represent a state of affairs. It is evident that the CTC achieves its primary motivation of being a representational theory through the use of computational representations. The sketch below makes this point about vehicles and structure explicit.
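As a small illustration of this point (my own toy example, with assumed relation names and token labels, not something drawn from the literature), the snippet below encodes both arrangements as sets of relations over arbitrary tokens and checks that a simple token-for-token mapping carries one onto the other. The moral is that representation tracks relational structure, not the material of the vehicles.

```python
# The field scene, described as relations over arbitrary tokens.
field_scene = {
    ('stands_on', 'man', 'field'),
    ('approaches_from_right_of', 'dog', 'man'),
    ('crosses_in_front_of', 'woman_on_bike', 'man'),
}

# The desk scene: a different set of vehicles arranged the same way.
desk_scene = {
    ('stands_on', 'pencil', 'cardboard'),
    ('approaches_from_right_of', 'eraser', 'pencil'),
    ('crosses_in_front_of', 'sharpener_on_stapler', 'pencil'),
}

# A token-for-token mapping between the two systems of vehicles.
mapping = {
    'man': 'pencil',
    'field': 'cardboard',
    'dog': 'eraser',
    'woman_on_bike': 'sharpener_on_stapler',
}


def translate(scene, mapping):
    """Swap every token for its counterpart, leaving the relations untouched."""
    return {(relation, mapping[a], mapping[b]) for relation, a, b in scene}


# The desk arrangement can represent the field scene because the relational
# structure is preserved, regardless of what the tokens are made of.
assert translate(field_scene, mapping) == desk_scene
print("Same structure, different vehicles: the desk scene can represent the field scene.")
```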

The secondary motivation for the CTC is its use of a symbolic system to carry out this representation. When we investigate any form of Turing-style computation, we find symbolic systems; this is because Turing machines (such as the mind under the CTC) execute symbolic computation. However, a symbolic system, while strongly correlated with representational theories, is not required by them; it is perfectly plausible to have a representational system whose rules do not rely on symbols as the things that do the representing. So why is it significant that the CTC takes the symbolic approach? Horst (2005) suggests that, for representational theories, a symbolic system accommodates productivity and systematicity.

Productivity of thought is the idea that human beings can envision and entertain a potentially infinite range of different propositions. A system without symbols, allowing only individual neuronal machine states or expressions, would be limited in the mental states available to it. A symbolic system, such as the one posited by the CTC, can form a potential infinity of complex expressions out of which its representations are composed (Block & Fodor, 1972); an agent with such a system is therefore capable of entertaining a potentially infinite range of propositions. Systematicity is detailed by Block & Fodor (1972): when we as agents entertain a proposition, we are able to entertain correlated propositions. Horst (2005) explains that if I entertain the proposition ‘John loves Mary’, I tend also to entertain the systematically related proposition ‘Mary loves John’. Again, a symbolic system provides a good explanation of this: because it is a system of set rules, my belief in the proposition ‘John loves Mary’ must stand in some close relation to my belief in the proposition ‘Mary loves John’, since the rules required to build the two propositions, and their contents, are so similar that the states end up systematically related in my mind. As Horst explains, the ability of the symbolic system of the CTC, or of other representational theories, to explain productivity and systematicity is a great strength of the symbolic approach. A toy compositional system illustrating both features is sketched below.
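As a rough illustration (the vocabulary, tuple encoding and ‘believes’ operator are assumptions of mine, not anything proposed by Block & Fodor or Horst), the sketch below shows how a small stock of symbols plus combinatorial rules yields both features: recursion over a finite vocabulary generates an unbounded supply of propositions, and the same rules that build ‘John loves Mary’ automatically build ‘Mary loves John’.

```python
# A rough sketch with an assumed toy vocabulary and an assumed 'believes'
# operator: a finite stock of symbols plus combinatorial rules yields
# productivity and systematicity.
from itertools import product

names = ['John', 'Mary']
verbs = ['loves']


def atomic_propositions():
    """Systematicity: the rule that builds loves(John, Mary) automatically
    builds the related proposition loves(Mary, John) as well."""
    return {(verb, subject, obj) for verb, subject, obj in product(verbs, names, names)}


def embed(proposition, depth):
    """Productivity: recursion over the same finite vocabulary generates a
    potential infinity of ever more complex propositions."""
    for _ in range(depth):
        proposition = ('believes', 'John', proposition)
    return proposition


props = atomic_propositions()
assert ('loves', 'John', 'Mary') in props and ('loves', 'Mary', 'John') in props

# e.g. John believes that John believes that John loves Mary.
print(embed(('loves', 'John', 'Mary'), depth=2))
```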

Now let us turn to the most significant objection to the CTC. Its proponents are currently unable to explain exactly how the neural components acquire the semantic content that the theory relies on. There are rough ideas we can appeal to, but at present there is no full explanation sufficient to nullify this criticism. Critics would like to say that because there is no way to explain how this semantic content comes about, we should reject the theory completely, but I argue that this is not the case. Simply because the theory cannot explain a phenomenon does not mean that it cannot accommodate it. Supporters of non-reductive materialism are still theorising about how it could come about, and until they arrive at an answer the point stands that the theory has not been shown to be incorrect. In a similar way, the Big Bang theory (the scientific theory, not the sitcom) has no account of what came before the Big Bang, yet we still widely accept it because it is not wrong; it simply lacks an explanation for one component. By contrast, alchemy is no longer considered a successful theory because it was categorically proven wrong, not because it merely failed to explain one of its components. The CTC is clearly in the former position: unable to explain one component of the mind, but accurate with respect to the vast majority of what it sets out to explain. As far as we have discussed within the theory of mind, computational representations, or representations of any kind, are currently the most successful mechanism by which we can understand the mind. It would be foolish to categorise the CTC as unsuccessful, and to disregard the most accurate theory we have, when it has not even been shown to be incorrect.

To summarise: by virtue of being a non-reductive materialist stance, the CTC is motivated by the desire to provide a representational account of the mind, and it achieves this through computational representation. We have also seen that, through this notion of computational representation combined with the complexity and higher-order functional nature of mental states, the CTC appeals to a symbolic system to provide its representations. Judged against its motivations within a non-reductive materialist framework, the CTC is highly successful: it provides a representational account of mental properties, it can explain all four components of Rey’s explanatory budget, and the syntactic/semantic distinction provides an explanation of mental causation. It also demonstrates great strength in accommodating both productivity and systematicity of thought. The major objection, that it cannot explain one component of the theory, is insufficient as a rebuttal, since it gives no concrete reason to reject a theory more accurate than any other we have. In evaluation, it is clear that the CTC is a largely successful and accurate theory which, in time, will hopefully be able to explain how semantic content attaches to the neural framework.

References

Antony, L. (2007). Everybody Has Got It: A Defense of Non-Reductive Materialism. In: Contemporary Debates in Philosophy of Mind. John Wiley & Sons.

Block, N.J. and Fodor, J.A. (1972). What Psychological States are Not. The Philosophical Review, 81(2), pp.159–181. doi:10.2307/2183991.

Crane, T. (2001). Elements of Mind: An Introduction to the Philosophy of Mind. Oxford: Oxford University Press.

Horst, S. (2005). The Computational Theory of Mind. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL = <https://plato.stanford.edu/archives/fall2020/entries/computational-mind/>.

Kim, J. (1993). Supervenience and Mind: Selected Philosophical Essays. Cambridge: Cambridge University Press.

Rey, G. (1991). An Explanatory Budget for Connectionism and Eliminativism. In T. E. Horgan & J. L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer Academic Publishers, pp. 219–240.
