The Modular Mind

In this post I will critically evaluate the massive modularity hypothesis: the view that the mind is composed ONLY of a collection of evolved cognitive ‘modules’. It is important to note that the theory does not merely claim that the mind has cognitive modules, but that modules are the ONLY thing the mind is composed of. The theory itself has evolved over time, with each iteration differing subtly from the last, as I will detail throughout this post. Before reading on, I highly recommend Woodward and Cowie’s 2004 paper ‘The Mind Is Not (Just) a System of Modules Shaped (Just) by Natural Selection’ as a superb criticism of the theory overall.

The modular mind hypothesis broadly states that the mind is composed of several distinct ‘modules’ or ‘departments’, each of which has come about through the process of evolution for the purpose of solving a particular problem. For example, there might be a module whose purpose is to solve the problem of someone in a team not pulling their weight, or a module whose purpose is to solve complex issues such as who gets to take the last biscuit from the tin. Every module has a specific problem that it evolved to solve. We start with this point because it already leads us to an issue: what exactly is a module? Fodor (1983) defines his modules by the following criteria; as long as a system has most of these features ‘to some interesting extent’, we should treat it as a module.

  1. Domain specificity
  2. Mandatory operation
  3. Limited central accessibility
  4. Fast processing
  5. Informational encapsulation
  6. ‘Shallow’ outputs
  7. Fixed neural architecture
  8. Characteristic and specific breakdown patterns
  9. Characteristic ontogenetic pace and sequencing

Informational encapsulation is the most notable of these criteria, being necessary for every module that Fodor describes. A cognitive system is informationally encapsulated if its processing is affected only by the inputs given to it and whatever information the system already contains. This means that modules, on Fodor’s view, cannot access information stored elsewhere in the mind, and have only their limited input to work with. Informational encapsulation can also be seen at work in certain illusions. To a modular mind theorist, the Müller-Lyer illusion demonstrates that, even after observers are told the two lines are equal in length, they are unable to correct their perception of the illusion.
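
To make the criterion concrete, here is a minimal sketch of what encapsulation amounts to computationally. The class name, the ‘bias’ database, and the numbers are all invented for illustration, not drawn from Fodor; the point is only that the module’s output depends solely on its input and its proprietary store, so the agent’s central beliefs cannot penetrate it.

```python
# Minimal, hypothetical sketch of informational encapsulation: the
# module's output depends only on its input and its own proprietary
# database; the agent's central beliefs are never consulted.

class MullerLyerModule:
    def __init__(self):
        # Proprietary information store: arrow fins bias perceived length.
        self.internal_db = {"fins_out": 1.1, "fins_in": 0.9}

    def perceive(self, line_length, fin_type):
        # Only the input arguments and internal_db enter the computation.
        return line_length * self.internal_db[fin_type]


central_beliefs = {"the two lines are equal": True}  # known to the agent

module = MullerLyerModule()
# The agent's belief cannot 'leak' in: the module has no access to it.
print(module.perceive(10, "fins_out"))  # > 10: the line 'looks longer'
print(module.perceive(10, "fins_in"))   # < 10: the line 'looks shorter'
```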

Informational encapsulation as a criterion is unstable. Fodor maintains that his view of modularity must be a ‘modest’ one, meaning that high-level cognitive processes such as reasoning or planning are non-modular and must therefore have potential access to every piece of information available to an agent (Fodor, 1983). For Fodor, modularity occurs only in low-level processes such as those that facilitate perception or language. Whether or not we accept Fodor’s module criteria, we need to understand how systems get their information at all. This leads Fodor to the Frame Problem: how does any cognitive system decide which information is relevant to which problem? Must the cognitive processes that are non-modular on Fodor’s view have no way of regulating or identifying the information relevant to them? Carruthers (2004) identifies another issue: “any processor that had to access the full set of the agent’s background beliefs…would be faced with an unmanageable combinatorial explosion”, meaning that the system would become informationally overloaded. It also seems intuitively wrong to argue for informational encapsulation (Woodward and Cowie, 2004): systems of the mind, such as perception, appear to have access to a great deal of information. Prinz (2006) details several cases where informational encapsulation cannot hold for the modules of the mind. For example, in the double-flash illusion, observers shown a single light flash accompanied by two auditory beeps unanimously report seeing two flashes, showing how the module associated with perceiving light flashes is confused when supposedly non-relevant external information ‘leaks’ into it. Even the Müller-Lyer illusion, first used to support informational encapsulation, has more recently been shown to vary with factors such as age.
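
Carruthers’ worry can also be put numerically. A processor free to consult any combination of an agent’s n background beliefs faces 2^n candidate belief sets, a search space that doubles with every belief added, as this toy calculation shows.

```python
# Toy calculation of Carruthers' "combinatorial explosion": a fully
# unencapsulated processor that may consult any subset of n background
# beliefs faces 2**n candidate belief sets.
for n in (10, 20, 40, 80):
    print(f"{n} beliefs -> {2 ** n:,} candidate belief sets")
# At 80 beliefs there are already over 10^24 combinations to consider.
```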

Domain specificity is the idea that a module must be restricted in its subject matter, answering only (or at least mostly) those questions relevant to the problem it evolved to solve. In the 1980s, at the height of the boom in cognitive neuroscience, pure domain specificity was well supported by the findings of the day. However, more recent studies such as Barrett and Bar (2009) have demonstrated that significant portions of the brain, such as the amygdala, are not domain-specific. It is now widely accepted that a single brain region can serve many different tasks, meaning domain specificity fails as a criterion (Woodward and Cowie, 2004).

Overall, it seems the only way forward for the modular mind is to reject Fodor’s modest modularity and his module criteria, and to formulate an alternative proposal.

Post-Fodorian modularity centres on one idea: full modularity of the mind. Massive modularity claims that the mind is fully modular; even the high-level systems responsible for reasoning and planning must be modular. Most importantly, we must still define what we mean by a ‘module’. Carruthers (2006) takes up Fodor’s criteria for a module and (at first) reduces them to the following:

  1. Domain specificity
  2. Mandatory operation
  3. Limited central accessibility
  4. Dissociability
  5. Neural localisability

As we have already discussed, domain specificity is not an acceptable criterion. Carruthers quickly accepts this in later chapters of his 2006 piece and re-evaluates his list of five, removing not only domain specificity but also mandatory operation and neural localisability. For Carruthers (2006), the final form of his module criteria is simply:

  1. Limited central accessibility
  2. Dissociability

It is clear that Carruthers has rejected informational encapsulation, the feature Fodor considered most important to his description of a module. But we still need to limit the input of each module to avoid the informational overload raised by the Frame Problem. For Carruthers, the modules of the mind are many functionally isolable processing systems. He argues that modules must be functionally isolable so as to avoid the Frame Problem while still having access to a substantial amount of information. Scientific support for the functional isolability (and dissociability) of such modular systems comes from dissociation data, which details which regions of the brain, when damaged, result in different mental or genetic disorders. For example, individuals high on the autistic spectrum have impaired social ability while other faculties (e.g. general learning) are unaffected. Massive modularity can explain this: the module(s) that process social ability might be damaged or have developed atypically, while the module(s) that process general learning are completely typical. Not only does dissociation data support the modular mind, it also highlights an evolutionary advantage of the architecture it describes: if natural selection changes some faculty of a species’ mind, it is not the case that every other cognitive system must change too. If the modular system that handles social ability is changed by natural selection, then only that system must change; this is a far more realistic process than in a non-isolated cognitive architecture, where potentially the whole cognitive framework would have to change.
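
A hedged sketch of what this dissociability pattern looks like, purely for illustration (the module names and the ‘damage’ flag are hypothetical, not Carruthers’ own formalism): damaging one functionally isolable system leaves the others operating normally, mirroring the pattern in dissociation data.

```python
# Hypothetical sketch of dissociability: 'lesioning' one functionally
# isolable module leaves the others untouched.

class Module:
    """A functionally isolable processing system (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.damaged = False

    def process(self, stimulus):
        if self.damaged:
            return f"{self.name}: impaired output"
        return f"{self.name}: typical output for {stimulus!r}"


mind = {
    "social_cognition": Module("social_cognition"),
    "general_learning": Module("general_learning"),
}

# Selective impairment, as dissociation data describes.
mind["social_cognition"].damaged = True

print(mind["social_cognition"].process("a sarcastic remark"))  # impaired
print(mind["general_learning"].process("a new word"))          # unaffected
```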

Carruthers (2006) appeals to three main arguments in support of massive modularity:

The Argument from Design

  1. Biological systems are constructed incrementally, composed of subsystems.
  2. When complex, these systems need to be organised as a collection of separately modifiable modules to achieve their function based on their subsystem components.
  3. The human mind is a complex biological system.
  4. Therefore, the human mind is massively modular in its organisation.

Carruthers’ argument from design does not work. The jump from its conclusion, that the human mind is massively modular in its organisation, to the massive modularity claim that the mind is composed ONLY of modules lacks any clear justification. Separate modifiability entails Carruthers’ criterion of dissociability, since each module must be able to change independently of the others, but there is no reason why it should entail his other criterion of limited central accessibility. Further, it seems wrong to say that all complex biological systems must be separately modifiable. Woodward and Cowie (2004) show that some biological traits are not independently modifiable from others, citing the example of the human two-lung system: to change this feature we would have to change the genetic code for bilateral symmetry in humans, which in turn would change many other features. This and other examples show that independent modifiability is clearly not a universal feature of complex biological systems, so why should we attribute it to the complex biological system of the mind?

The Argument from Animals

  1. Animal minds are massively modular.
  2. Human minds are incremental extensions of animal minds.
  3. Therefore, the human mind is massively modular. 

This argument fails at the first step. There is insufficient evidence to conclude that animal minds are massively modular in the first place; Carruthers (2006) attempts to establish this, but the majority of his arguments centre on animal learning mechanisms and their domain specificity. As we know, Carruthers later abandons domain specificity as a requirement for modularity, so this argument carries no further weight.

The Argument from Computational Tractability

  1. The mind is computationally realised.
  2. All computational mental processes must be suitably tractable (i.e. computationally feasible).
  3. Only processes that are informationally encapsulated are suitably tractable.
  4. Therefore, the mind must consist entirely of encapsulated computational systems.
  5. (Implied) Therefore, the mind is massively modular.

Again, this argument has more issues than strengths. Firstly, we must supply on Carruthers’ behalf the implied conclusion that a system consisting entirely of encapsulated computational systems is massively modular; otherwise the argument achieves nothing in support of massive modularity. The trouble is that Carruthers does not include encapsulation of any form in his own criteria for modularity. On examining the chapter in which Carruthers details this argument, nothing can be discerned as a reason why it actually supports massive modularity. Moreover, there remains significant disagreement over whether suitable tractability is a characteristic only of encapsulated processes.

Overall, these three arguments lend little weight to the massive modularity hypothesis. However, this is not to say that massive modularity has no strengths.

As we have discussed, modules under the massive modularity theory are individuated by their functional roles, with the idea of a module being analogous to modules in biological systems. Developing this, Barrett (2005) suggests an enzymatic model of modules. On the enzymatic model, there are structures within our minds that are representations of information existing external to us, in the world. These structures are located by corresponding modules, which process the information and thereby change the representation’s structure, in the same way an enzyme alters the structure of a biological substrate. With every change to the structure of the representation, information can be added to it, so that another module with a correspondingly structured receptor can then process the same substrate as the previous module. Multiple modules are thus able to share the same physical substrate as it develops and more information is added.

Extending the enzymatic model, Carruthers suggests a ‘bulletin-board model’. Representations are ‘posted’ onto a bulletin board whose location and identification are part of the information contained within each module. Each module is therefore able to scan the board for physical structures or markers that match its substrate receptor, identify exactly which information is relevant to it, alter that representation, and return the changed representation to the board. Other modules are then ‘activated’ by the changed representation. This overall model has several immediately obvious successes. For one, multiple modules can access the same bulletin board at the same time, so a varied collection of modules can interact with and process a particular representation simultaneously. And although each module can interact with the same representation, each remains functionally isolable from the rest, which is required for Carruthers’ defence against the Frame Problem. Barrett (2005) calls this ‘access generality with processing specificity’, marking the distinction between each module’s function. Further, there can be a potentially limitless number of bulletin boards throughout the mind: a board for taste modules, a board for hunting modules, a board for modules entirely focused on sitting down. Different modules can access multiple boards, and new boards can be formed by modules that evolved to solve the problem of not having specific bulletin boards. Arguably the model’s greatest strength is its malleable description: Carruthers does not need to identify each board or how it came to be, only that it could exist.
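
The bulletin-board model reads naturally as a blackboard architecture of the kind familiar from classical AI. The sketch below is purely illustrative; the board format, tags, and module functions are my own invention, not Carruthers’ or Barrett’s. It shows modules scanning a shared board for representations matching their ‘receptor’, transforming them enzyme-style, and posting the result back, where it may activate further modules.

```python
# Illustrative blackboard-style sketch of the bulletin-board model.
# Each module scans the shared board for representations whose tags match
# its 'receptor', alters them (as an enzyme alters a substrate), and posts
# the result back, which may in turn activate other modules.

board = [{"tags": frozenset({"visual"}), "content": "red round shape"}]

def colour_module(rep):
    # Receptor: raw visual representations not yet tagged for colour.
    if "visual" in rep["tags"] and "colour:red" not in rep["tags"]:
        return {"tags": rep["tags"] | {"colour:red"},
                "content": rep["content"]}

def food_module(rep):
    # Receptor: colour-tagged representations not yet categorised.
    if "colour:red" in rep["tags"] and "category:fruit" not in rep["tags"]:
        return {"tags": rep["tags"] | {"category:fruit"},
                "content": rep["content"] + " (possible food)"}

modules = [colour_module, food_module]

changed = True
while changed:                       # run until no module alters the board
    changed = False
    for module in modules:
        for rep in list(board):
            new = module(rep)
            if new is not None and new not in board:
                board.append(new)    # post the changed representation
                changed = True

for rep in board:
    print(sorted(rep["tags"]), "->", rep["content"])
```

Note how ‘access generality with processing specificity’ falls out of the design: every module sees the whole board, but each only fires on representations matching its receptor.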

A final topic that needs addressing is the idea that these cognitive modules come about via natural selection. This is central to all modularity, not just massive modularity, as most if not all of the scientific evidence that Carruthers provides stems from the notion of natural selection. Of course, I find no issue with the idea that our minds evolved via natural selection; there is no other process by which they could have evolved. My issue is with the way Carruthers and other evolutionary psychologists claim to use evidence of natural selection to support massive modularity. Evolutionary theories about the body tend to find some physical trait in a fossil, compare it to a similar modern feature (or the lack of one), and then posit some problem that caused the adaptation to arise. Evolutionary psychology cannot employ this strategy, as brains do not leave fossil records, so it tends to work in reverse: take a modern cognitive feature, then posit some past problem that caused this ‘solution’ to arise. The important point is that the evolutionary psychologist immediately assumes the cognitive feature must be a solution to some problem. This invites an objection: why must the cognitive feature be a solution to a problem at all, and how could one obtain compelling evidence for that conclusion? Without such evidence, any conclusion in evolutionary psychology looks like pure conjecture that cannot be accepted as fact. There is no doubt that our minds arose by natural selection, but this reverse-engineering tactic, with little to no fossil evidence, provides no sustainable foundation for any evolutionary psychology theory (Woodward and Cowie, 2004).

Carruthers does not address this criticism of evolutionary psychology’s methodology, but both he and Barrett suggest a relevantly similar developmental approach to modularity: the idea that modules reach their developed form by being learning systems. Modules have basic templates, and adapt over time based on an organism’s or species’ collective interactions with its environment. For instance, the module for identifying prey might look very different in a group of primates relative to an eagle, as a result not only of different requirements but also of different respective interactions with their environments. Even within that same group of primates, each organism might have slightly different prey-identification modules as a result of its individual interactions. While perhaps a strength of massive modularity, this does not seem enough to deflect the criticism of evolutionary psychology’s methodology.

Overall, Carruthers and Barrett do solve a few problems that faced Fodor’s modest modularity, and they provide some intuitive discussions and analogies that fit specific scientific findings, given some leniency. However, despite the scientific data that supports their arguments, far more exists that goes against them. Much of the evidence used to support massive modularity is either outdated or misused, and most of the proposed arguments fail at the first step in their reasoning, let alone on the truth of their premises. Not only that, but the constantly changing criteria for what constitutes a module lead to confusion, which can be seen even within Carruthers’ 2006 piece. Evolutionary psychology and its findings do not cast doubt on the claim that the human mind is a highly modular and complex biological system, and the evidence used by both sides demonstrates this; but after critically evaluating the massive modularity theorist’s arguments, the claim that the mind must be just a collection of evolved cognitive modules simply is not a supported conclusion.

References

Barrett, H. (2005). Enzymatic computation and cognitive modularity. Mind and Language, 20, 259–287.

Barrett, L., & Bar, M. (2009). See it with feeling: Affective predictions during object perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 364, 1325–1334.

Carruthers, P. (2004). The mind is a system of modules shaped by natural selection. In C. Hitchcock (Ed.), Contemporary Debates in Philosophy of Science (pp. 293–311). Blackwell.

Carruthers, P. (2006). The Architecture of the Mind. Oxford University Press.

Carruthers, P. (2011). Dissociation data. In The Opacity of Mind: An Integrative Theory of Self-Knowledge (pp. 293–324). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199596195.003.0010

Cowie, F., & Woodward, J. (2004). The mind is not (just) a system of modules shaped (just) by natural selection. In C. Hitchcock (Ed.), Contemporary Debates in Philosophy of Science (pp. 312–334). Blackwell.

Fodor, J. A. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.

Prinz, J. J. (2006). Is the mind really modular? In R. J. Stainton (Ed.), Contemporary Debates in Cognitive Science (pp. 22–36). Blackwell Publishing.

The Computational Theory of Cognition

Materialism in the philosophy of mind is often divided into reductive and non-reductive theories. Non-reductive materialism holds that while there are no concrete individuals in the world other than material particles (Kim, 1993), some physical things exhibit properties that cannot be reduced to physical properties.

The primary motivation of the computational theory of cognition (CTC) is to provide a representational account of the mind. Non-reductive materialism states that there are some properties that cannot be reduced, and these non-reducible properties are what the non-reductive materialist labels mental properties. Any successful non-reductive materialist theory must therefore demonstrate that there is indeed something ‘mental’ over and above something ‘physical’, and that these mental properties can make a causal contribution to what physically occurs in the world. The CTC explains this through the wetware of the brain. The CTC claims that the mind contains ‘symbols’ or ‘representations’ which are composed of the components of the brain, such as neurons. The idea is that the symbols composed of this wetware represent states of affairs, such as ‘it is snowing outside’ or ‘John loves Mary’. Under the CTC, mental properties are higher-order properties concerning the functional organisation of the lower-order physical properties. Mental states are therefore “functional relations to mental symbols”, and mental processes are just “computational processes defined over the mental symbols” (Antony, 2007). This preserves the token identity of the mental and the physical that the non-reductive materialist requires, via the notion of explanatory reduction (Kim, 1993; Crane, 2001). Explanatory reduction means that when I reduce X to Y, I gain a better understanding of X; this allows the CTC to argue for an explanatory relation between mental and physical properties while maintaining that there is no type-identity reduction between the two (Antony, 2007).

What can be seen here is that the CTC regards itself as a representational theory. For us to consider the CTC successful, we must determine i) why being a representational theory is a worthwhile endeavour, and ii) whether the CTC succeeds in being one. The explanatory budget of our mental life (Rey, 1991) indicates that there are four features of such a life that any theory of cognition must explain.

1. Intentional existence: when we desire, we as agents can imagine the thing that would satisfy our desire, even if that thing does not exist yet, or at all.

2. Opacity: our actions reflect the way we believe the world to be rather than the way the world actually is, because we represent the world’s features to ourselves.

3. Reasoning and deliberation: we reflect on our desires together with our beliefs to logically determine our next course of action, and through reflection on what we believe we sometimes form new beliefs.

4. Predictive power: the attribution of mental states allows us to accurately predict things that we otherwise could not, such as the actions of others.

Representations allow us to explain these criteria. The non-reductive materialist would argue that because we as agents have the cognitive capacity for intentional existence and opacity, we must be able to represent states of affairs: to represent entities that do not exist, or to represent the world in ways personal to myself, I must have access to representation. It also follows that representations have causal powers matching their semantic properties, allowing causation from the mental to the physical, and with it an explanation for reasoning, deliberation and our predictive power. Therefore, to be a successful theory of mind, it is desirable to be a representational theory.

The CTC is representational in virtue of appealing to computational representations: physical representations that therefore have physical properties. Computational representations have syntactic properties, but also semantic properties based on their relations with each other. This form of representation gives the CTC an advantage with respect to causation. We want to allow for causation when we talk about cognition, because it seems as though our mental and physical states can causally impact each other. The semantic properties of computational representations are altered when the syntactic properties of their components change, which is how the causal powers of these representations become apparent. In terms of the CTC, the brain changes the neural representations and their syntactic properties, and the representation mirrors that change in its semantic properties, causally affecting the mental representation.
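
A classic way to see how purely syntactic manipulation can respect semantic relations is a toy inference rule. The sketch below is my own minimal example, not a model of neural wetware: it applies modus ponens by matching the shape of tuple-encoded symbol structures alone, with no access to what the symbols mean, yet truth is preserved whenever the inputs are true.

```python
# Purely *syntactic* manipulation that respects *semantic* relations:
# modus ponens fires on the shape of the symbol structures alone.

def modus_ponens(premise, conditional):
    # Syntactic test: the conditional has the form ('if', p, q) and its
    # antecedent is structurally identical to the premise.
    if conditional[0] == "if" and conditional[1] == premise:
        return conditional[2]        # detach the consequent
    return None

premise = ("snowing",)
conditional = ("if", ("snowing",), ("cold",))

print(modus_ponens(premise, conditional))  # ('cold',), derived by shape
```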

Let us question this, however: how can neurons represent anything at all? Consider an example of a representation. I am a man, standing in the middle of a field. To my right, a dog approaches, and in front of me, a woman on a bike passes from left to right. We can describe all these components in different ways; for instance, I am a collection of atoms, the field is a flat mass of grass and earth, and so on. All of these descriptions concern the syntactic properties of the entities. Now imagine a second system: a pencil stands on a piece of cardboard, to its right an eraser approaches, and in front of it a pencil sharpener on top of a stapler crosses from left to right. The first arrangement can be represented by the second, an arrangement with entirely different syntactic properties. It doesn’t matter what the representation is composed of; the semantic content still comes across. It wouldn’t matter that the symbol is composed of a collection of neurons and their corresponding electrical signals; whether it is composed of neurons, chocolate, or only the purple fruit pastilles, what matters is that it can still semantically represent a mental state. It is evident that the CTC achieves its primary motivation of being a representational theory through its use of computational representations.
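
The field-and-desk example can itself be made computational. In this sketch (token names invented for illustration), two scenes built from entirely different ‘syntactic’ tokens share one relational structure, and a simple token-to-token mapping shows the second realises the same semantic content as the first.

```python
# Two 'scenes' built from entirely different tokens share one relational
# structure; a token-to-token mapping shows they carry the same semantic
# content despite different syntactic/physical properties.

field_scene = {
    ("right_of", "dog", "man"),
    ("crosses_in_front_of", "woman_on_bike", "man"),
}

desk_scene = {
    ("right_of", "eraser", "pencil"),
    ("crosses_in_front_of", "sharpener_on_stapler", "pencil"),
}

mapping = {  # which token plays which role
    "man": "pencil",
    "dog": "eraser",
    "woman_on_bike": "sharpener_on_stapler",
}

translated = {(rel, mapping[a], mapping[b]) for rel, a, b in field_scene}
print(translated == desk_scene)  # True: same structure, new substrate
```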

The secondary motivation for the CTC is its use of a symbolic system to undertake this representation. When we investigate any form of Turing-style computation, we find symbolic systems; this follows from the fact that Turing machines (such as the mind, under the CTC) execute symbolic computation. However, a symbolic system, while heavily correlated with representational theories, isn’t required for one; it is perfectly plausible to have a set of rules for a representation that does not rely on symbols as the things that represent. Let us examine why the CTC’s systematic symbol approach is noteworthy. Horst (2005) suggests that, for representational theories, a symbolic system accommodates productivity and systematicity. Productivity of thought is the idea that human beings are able to envision and entertain a potentially infinite list of different propositions. A system that doesn’t allow for symbols, permitting only individual neuronal machine states or expressions, would be limited in the corresponding mental states available to it. A symbolic system such as the CTC can form a potential infinity of complex expressions composing the representations used (Block & Fodor, 1972); an agent under a symbolic system is therefore capable of entertaining a potentially infinite list of different propositions. Systematicity is detailed by Block & Fodor (1972): when we as agents entertain a proposition, we are able to entertain correlated propositions. Horst (2005) explains that if I entertain the proposition ‘John loves Mary’ then I tend also to entertain the systematically related proposition ‘Mary loves John’. Again, a symbolic system provides a good explanation of this: being a system of set rules, my belief in the proposition ‘John loves Mary’ must stand in some close relation to my belief in the proposition ‘Mary loves John’, as the similarity in the rules required and in the content of the propositions allows the states to be systematically related in my mind. As Horst explains, the ability of the CTC’s symbolic system (or that of other representational theories) to explain productivity and systematicity signals a great strength of the symbolic approach.
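
A minimal sketch of both properties, under the assumption that propositions are tuple-shaped symbol structures built by recursive rules (the tiny grammar here is invented for illustration): the same rules that generate ‘John loves Mary’ generate ‘Mary loves John’ (systematicity), and a recursive embedding rule yields unboundedly many propositions (productivity).

```python
# A tiny recursive grammar over tuple-shaped symbol structures.
import itertools

names = ["John", "Mary"]

def propositions(depth):
    """Yield propositions with exactly `depth` levels of embedding."""
    if depth == 0:
        for a, b in itertools.product(names, repeat=2):
            yield ("loves", a, b)            # atomic symbol structures
    else:
        for p in propositions(depth - 1):
            yield ("believes", "John", p)    # recursion => productivity

atoms = list(propositions(0))
# Systematicity: the rules that build loves(John, Mary) also build the
# permuted loves(Mary, John).
print(("loves", "John", "Mary") in atoms, ("loves", "Mary", "John") in atoms)

# Productivity: embedding can be iterated without bound; one depth-2 case:
print(next(propositions(2)))
```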

Now we come to the most significant objection to the CTC. Its proponents are unable to explain how exactly the neural components get the semantic content that the theory relies on. There are rough ideas we can appeal to, but currently there is no full explanation sufficient to nullify this criticism. Critics would like to say that because there is no way to explain how this semantic content comes about, we should reject the theory completely, but I argue that this is not the case. Simply because a theory cannot explain a phenomenon does not mean that it cannot accommodate it. Supporters of non-reductive materialism are still theorising about how it could work, and until they reach an answer, the point stands that the theory has not been shown to be incorrect. In a similar way, the Big Bang theory (the scientific theory, not the television show) has no account of what came before the Big Bang, yet we still widely accept it because it isn’t wrong; it simply lacks an explanation for one component. By contrast, alchemy is no longer considered a successful theory because it was categorically proven wrong, not because it merely failed to explain one of its components. The CTC is clearly a case of being unable to explain one component of the mind while being an accurate theory across the vast majority of its explanations. As far as we have discussed within the theory of mind, computational representations, or representations of any kind, are currently the most successful mechanism by which we can understand the mind. It would be foolish to categorise the CTC as unsuccessful, and thereby disregard the most accurate theory we have, when it is not even technically incorrect.

To summarise, by virtue of being a non-reductive materialist stance, the CTC is motivated by the desire to provide a representational account of the mind, and it achieves this through computational representation. We also see that, through this notion of computational representation combined with the complexity and higher-order functional nature of mental states, the CTC appeals to a symbolic system to provide its representations. Judged on its motivations within a non-reductive materialist scope, the CTC is highly successful: it provides a representational account of mental properties, it can explain all four components of Rey’s explanatory budget, and the syntactic/semantic distinction allows for an explanation of mental causation. It also demonstrates great strength in accommodating both productivity and systematicity of thought. Its major objection, that it cannot explain one component of the theory, is insufficient as a rebuttal, as it provides no concrete reason why we should reject a theory more accurate than any other we have. In evaluation, it is clear that the CTC is a broadly successful and accurate theory that, in time, will hopefully be able to explain the attribution of semantic content to the neural framework.

References

Antony, L. (2007). Everybody Has Got It: A Defense of Non-Reductive Materialism. In Contemporary Debates in Philosophy of Mind. John Wiley & Sons.

Block, N. J., & Fodor, J. A. (1972). What psychological states are not. The Philosophical Review, 81(2), 159–181. https://doi.org/10.2307/2183991

Crane, T. (2001). Elements of Mind: An Introduction to the Philosophy of Mind. Oxford University Press.

Horst, S. (2005). The computational theory of mind. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2020/entries/computational-mind/

Kim, J. (1993). Supervenience and Mind: Selected Philosophical Essays. Cambridge University Press.

Rey, G. (1991). An explanatory budget for connectionism and eliminativism. In T. E. Horgan & J. L. Tienson (Eds.), Connectionism and the Philosophy of Mind (pp. 219–240). Kluwer Academic Publishers.
