This post critically evaluates the massive modularity hypothesis: the view that the mind is composed ONLY of a collection of evolved cognitive ‘modules’. It is important to note that the theory does not merely claim that the mind has cognitive modules, but that modules are all the mind is composed of. The theory itself has evolved over time, with each iteration differing subtly in ways I will detail throughout this post. Before reading on, I highly recommend Woodward and Cowie’s 2004 paper ‘The Mind is Not (Just) a System of Modules Shaped (Just) by Natural Selection’ as a superb criticism of the theory overall.
The modular mind hypothesis broadly states that the mind is composed of several distinct ‘modules’ or ‘departments’, each of which arose through evolution to solve a particular problem. For example, there might be a module whose purpose is to process and solve the problem of someone in a team not pulling their weight, or a module whose purpose is to solve complex issues such as who gets to take the last biscuit from the tin. Every module has a specific problem that it evolved to solve. We start with this point because it already leads us to an issue: what exactly is a module? Fodor (1983) defines his modules by the following criteria; as long as a system has most of these features ‘to some interesting extent’, then we should treat that system as a module.
- Domain specificity
- Mandatory operation
- Limited central accessibility
- Fast processing
- Informational encapsulation
- ‘Shallow’ outputs
- Fixed neural architecture
- Characteristic and specific breakdown patterns
- Characteristic ontogenetic pace and sequencing
Informational encapsulation is the most notable of these criteria, being necessary for every module that Fodor describes. A cognitive system is informationally encapsulated if its processing is affected only by the inputs given to it and whatever information the system already contains. This means that modules, on Fodor’s view, cannot access information stored elsewhere in the mind, and have only the limited input they are given to work with. Informational encapsulation can also be seen in certain illusions. To a modular mind theorist, the Müller-Lyer illusion demonstrates that, even after additional information is given to an observer (for instance, that the two lines are equal in length), they are unable to penetrate the relevant module and correct their perception of the illusion.
Informational encapsulation as a criterion is unstable. Fodor maintains that his view of modularity must be a ‘modest’ one, meaning that high-level cognitive processes such as reasoning or planning must be non-modular, and must therefore have potential access to every piece of information accessible to an agent (Fodor, 1983). For Fodor, modularity occurs only in low-level processes such as those that facilitate perception or language. Whether or not we accept Fodor’s module criteria, we still need to understand how systems get their information at all. This leads Fodor to the Frame Problem: how does any cognitive system decide which information is relevant to which problem? Must those cognitive processes that are non-modular on Fodor’s view have no way of regulating or identifying the information relevant to them? Carruthers (2004) identifies a related issue: “any processor that had to access the full set of the agent’s background beliefs…would be faced with an unmanageable combinatorial explosion”, meaning that the system would become informationally overloaded. It also seems intuitively wrong to argue for informational encapsulation (Woodward and Cowie, 2004): systems of the mind, such as perception, appear to have access to a great deal of information. Prinz (2006) details several instances where informational encapsulation cannot hold for the modules of the mind. For example, in the double-flash illusion, observers shown one light flash accompanied by two auditory beeps reliably report seeing two flashes, suggesting that supposedly irrelevant auditory information ‘leaked’ into the system responsible for perceiving light flashes. Even the Müller-Lyer illusion, first used to support informational encapsulation, has in the last decade been shown to vary with factors such as age.
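Carruthers’ “combinatorial explosion” point can be made concrete with a back-of-the-envelope calculation. The sketch below is my own toy illustration, not Carruthers’ formalism: it simply counts how many candidate belief-sets a fully open (unencapsulated) processor would have to weigh, versus a module restricted to a small fixed database.

```python
# Toy illustration of the combinatorial explosion behind the Frame Problem:
# a processor with access to all n background beliefs faces 2**n possible
# combinations of beliefs that might bear on a problem, whereas an
# encapsulated module consults only its own small, fixed database.

def candidate_belief_sets(n_beliefs: int) -> int:
    """Number of subsets of n beliefs an unrestricted processor might weigh."""
    return 2 ** n_beliefs

for n in [10, 20, 40]:
    print(f"{n} beliefs -> {candidate_belief_sets(n):,} candidate belief-sets")

# An encapsulated module with a database of just 10 items never faces more
# than 2**10 = 1,024 combinations, however many beliefs the agent holds.
```

Even at 40 beliefs the unrestricted processor faces over a trillion combinations, which is why informational overload looms for any fully open system.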
Domain specificity is the idea that a module must be restricted in its subject matter, answering only (or at least mostly) those questions relevant to the problem it evolved to solve. In the 1980s, at the height of the psychological-neuroscientific boom, pure domain specificity was well supported by the findings of the scientific community at the time. However, more recent studies such as Barrett and Bar (2009) have demonstrated that significant portions of the brain, such as the amygdala, are not domain-specific. It is now widely accepted that one part of the brain can serve many different tasks, which tells against domain specificity as a criterion (Woodward and Cowie, 2004).
Overall, it seems the only way forward for the modular mind is to reject Fodor’s modest modularity and his module criteria and formulate an alternate proposal.
Post-Fodorian modularity centres on one idea: full modularity of the mind. Massive modularity claims that the mind is fully modular; even the high-level systems responsible for reasoning and planning must be modular. Most importantly, we must still define what we mean by a ‘module’. Carruthers (2006) takes Fodor’s criteria for a module and (at first) reduces them to the following:
- Domain specificity
- Mandatory operation
- Limited central accessibility
- Dissociability
- Neural localisability
As we have already discussed, domain specificity is not an acceptable criterion. Carruthers accepts this in later chapters of his 2006 piece and re-evaluates his list of five, removing not only domain specificity but also mandatory operation and neural localisability. For Carruthers (2006), the final form of his module criteria is simply:
- Limited central accessibility
- Dissociability
It is clear that Carruthers has rejected informational encapsulation, the feature Fodor had considered most important to his description of a module. But we still need to limit the input of each module to avoid the informational overload of the Frame Problem. For Carruthers, the modules of the mind are many functionally isolable processing systems. He argues that modules must be functionally isolable so as to avoid the Frame Problem while still having access to a substantial amount of information. There is scientific support for the functional isolability (and dissociability) of the modular systems described by massive modularity, notably from dissociation data. Dissociation data details exactly which regions of the brain are affected in different mental or genetic disorders: it shows which regions, when damaged, result in which disorders. For example, individuals on the autism spectrum can have impaired social ability while other faculties, such as general learning, are unaffected. Massive modularity can explain this: the module(s) that process social ability might be damaged or have developed atypically, while the module(s) that process general learning are completely typical. Not only does dissociation data support the modular mind, it also highlights an evolutionary advantage of the system described: if natural selection changes some faculty of a species’ mind, not every cognitive system must change with it. If the modular system that handles social ability is changed by natural selection, only that system must change; this is a far more realistic process than in a non-isolated cognitive architecture, where potentially the whole framework would have to change.
Carruthers (2006) appeals to three main arguments in support of massive modularity:
The Argument from Design
- Biological systems are constructed incrementally, composed of subsystems.
- When complex, these systems need to be organised as a collection of separately modifiable modules to achieve their function based on their subsystem components.
- The human mind is a complex biological system.
- Therefore, the human mind is massively modular in its organisation.
Carruthers’ argument from design does not work. The jump from its conclusion, that the human mind is massively modular in its organisation, to the conclusion of the massive modularity hypothesis, that the mind is composed ONLY of modules, is missing a clear progression. Separate modifiability entails Carruthers’ criterion of dissociability, since each module must be able to change independently of the others, but there is no reason why it entails his other criterion of limited central accessibility. Further, it seems wrong to say that all complex biological systems must be separately modifiable. Woodward and Cowie (2004) show that there are biological traits that are not independently modifiable from others, detailing the example of the human two-lung system: to change this feature we must change the genetic code for bilateral symmetry in humans, which would in turn change many other features. Amongst other examples, this shows that independent modifiability is not a universal feature of complex biological systems, so why should we assume it of the complex biological system of the mind?
The Argument from Animals
- Animal minds are massively modular.
- Human minds are incremental extensions of animal minds.
- Therefore, the human mind is massively modular.
This argument fails at the first step. There is insufficient evidence to conclude that animal minds are massively modular in the first place. Carruthers (2006) attempts to establish this, but most of his arguments centre on animal learning mechanisms and their domain specificity, and since Carruthers later abandons domain specificity as a requirement for modularity, this argument carries no additional weight.
The Argument from Computational Tractability
- The mind is computationally realised.
- All computational mental processes must be suitably tractable (computationally feasible).
- Only processes that are informationally encapsulated are suitably tractable.
- Therefore, the mind must consist entirely of encapsulated computational systems.
- (Implied) Therefore, the mind is massively modular.
Again, this argument has more issues than strengths. First, we must supply an implied premise that a system consisting entirely of encapsulated computational systems is massively modular in Carruthers’ sense; otherwise the argument achieves nothing in the way of supporting massive modularity. The problem is that Carruthers does not include encapsulation of any form in his criteria for modularity. Examining the chapter in which Carruthers details this argument, there is nothing that could be discerned as a reason why it actually supports massive modularity. Moreover, there remains significant disagreement over whether suitable tractability is a characteristic only of encapsulated processes.
Overall, these three arguments lend little weight to the massive modularity hypothesis. This is not to say, however, that massive modularity has no strengths.
As we have discussed, modules under massive modularity are individuated by their functional roles, with the idea of a module being analogous to modules in biological systems. To develop this, Barrett (2005) suggests an enzymatic model of modules. The enzymatic model posits structures within our minds that represent information existing external to us, in the world. These structures are located by corresponding modules, which process this information and thereby change the representation’s structure, just as an enzyme alters the structure of a biological substrate. With every change to the representation’s structure, information can be added to it, so an alternative module with the correspondingly structured receptor can then process the same substrate as the previous module. Multiple modules are thus able to share the same physical substrate as it develops and more information is added. Extending the enzymatic model, Carruthers suggests a ‘bulletin-board’ model. Representations are ‘posted’ onto a bulletin board, the location and identification of which is part of each module’s contained information. Each module scans the board for physical structures or markers that match its substrate receptor, identifies exactly which information is relevant to it, alters the representation, and returns the changed representation to the board. Other modules are then ‘activated’ by the changed representation. This overall model has some immediately obvious successes. For one, multiple modules can access the same bulletin board at once, meaning that a vastly varied collection of modules can interact with and process a particular representation at the same time.
Not only can each module interact with the same representation, but each module remains functionally isolable from the rest even so, which is required for Carruthers’ defence against the Frame Problem. Barrett (2005) calls this ‘access generality with processing specificity’, capturing the distinction between each module’s function. Further, there can be a potentially limitless number of bulletin boards throughout the mind: a board for taste modules, a board for hunting modules, a board for modules entirely focused on sitting down. Different modules can access multiple boards, and new boards can be formed by modules that evolved to solve the problem of not having specific bulletin boards. Arguably the model’s greatest strength is its malleability: Carruthers does not need to identify each board or how it came to be, only that such boards could exist.
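The bulletin-board mechanism described above can be sketched computationally. The following is a minimal toy construction of my own, not Carruthers’ or Barrett’s formal model; the module names, markers, and transformations are all hypothetical. It shows the core idea: modules scan a shared board, bind only to representations whose markers match their receptor, transform them, and repost the enriched representation so that other modules can be activated by it.

```python
# Toy sketch of a bulletin-board architecture (my own illustrative
# construction): representations carry structural markers; each module
# binds only to marker-matching posts, alters them, and reposts them.

from dataclasses import dataclass, field

@dataclass
class Representation:
    content: str
    markers: set = field(default_factory=set)  # structural tags modules match on

class Module:
    """Functionally isolable processor: sees only marker-matching posts."""
    def __init__(self, name, receptor, adds_marker, transform):
        self.name = name
        self.receptor = receptor        # marker this module's receptor binds to
        self.adds_marker = adds_marker  # marker attached after processing
        self.transform = transform      # how the content is altered

    def scan(self, board):
        for rep in board:
            if self.receptor in rep.markers and self.adds_marker not in rep.markers:
                rep.content = self.transform(rep.content)
                rep.markers.add(self.adds_marker)  # 'repost' the changed representation

# Hypothetical modules: an edge-detector feeds a shape-classifier.
board = [Representation("raw retinal input", {"visual"})]
edge_module = Module("edges", "visual", "edges-found", lambda c: c + " -> edges")
shape_module = Module("shapes", "edges-found", "shape-found", lambda c: c + " -> shape")

shape_module.scan(board)  # no match yet: its receptor marker is absent
edge_module.scan(board)   # processes the post, adds the 'edges-found' marker
shape_module.scan(board)  # now activated by the changed representation
print(board[0].content)   # raw retinal input -> edges -> shape
```

Note how the shape module does nothing until the edge module has altered the representation: processing order emerges from marker-matching rather than from any central controller, which is the feature that lets Carruthers claim access generality with processing specificity.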
A final topic that needs addressing is the idea that these cognitive modules come about via natural selection. This is central to all modularity, not just massive modularity, as most if not all of the scientific evidence Carruthers provides stems from the notion of natural selection. Of course, I find no issue with the claim that our minds evolved via natural selection; this is the only process by which they could have evolved. My issue is with the way in which Carruthers and other evolutionary psychologists claim to use evidence of natural selection to support massive modularity. Evolutionary theories about the body tend to find some physical trait in a fossil, compare it to a similar modern feature (or the lack of one), and then posit some problem that caused that adaptation to arise. Evolutionary psychology cannot employ this strategy, as brains do not leave fossil records, so study tends to proceed by taking a modern cognitive feature and positing some past problem that caused this ‘solution’ to arise. The important point is that the evolutionary psychologist immediately assumes the cognitive feature must be a solution to some problem. This invites disagreement: why must the cognitive feature be a solution to a problem, and how could one obtain sufficient or compelling evidence for that conclusion? Without such evidence, any conclusion in evolutionary psychology risks being pure conjecture that cannot be accepted as fact. There is no doubt that our minds arose by natural selection, but this reverse-engineering tactic, with little to no fossil evidence, provides no sustainable foundation for any evolutionary psychology theory (Woodward and Cowie, 2004).
Carruthers does not address this criticism of evolutionary psychology’s methodology, but both he and Barrett suggest a relevantly similar developmental approach to modularity: the idea that modules develop as they do through being learning systems. Modules have basic templates, and adapt over time based on an organism’s or species’ collective interactions with its environment. For instance, the prey-identification module of a group of primates might look very different from that of an eagle, as a result not only of different requirements but also of different interactions with their respective environments. Even within that same group of primates, each organism might have a slightly different prey-identification module as a result of its individual interactions. Whilst perhaps a strength of massive modularity, this does not seem enough to deflect the criticism of evolutionary psychology’s methodology.
Overall, Carruthers and Barrett do solve a few problems that faced Fodor’s modest modularity, and they provide some intuitive discussions and analogies that, given some leniency, fit specific scientific findings. However, even though some scientific data supports their arguments, far more goes against them. Much of the evidence used to support massive modularity is outdated and/or misused, and most of the proposed arguments fail at the first step of their logic, let alone on the truth of their premises. Not only that, but the constantly changing criteria for what constitutes a module lead to confusion, visible even within Carruthers’ 2006 piece. Evolutionary psychology and its findings do not doubt that the human mind is a highly modular and complex biological system, and the evidence used by both sides demonstrates this; but after critically evaluating the massive modularity theorist’s arguments, the claim that the mind must be just a collection of evolved cognitive modules is simply not a supported conclusion.
References
Barrett, H. C. (2005). Enzymatic computation and cognitive modularity. Mind and Language, 20, 259–287.

Barrett, L. F., & Bar, M. (2009). See it with feeling: affective predictions during object perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 364, 1325–1334.

Carruthers, P. (2004). The mind is a system of modules shaped by natural selection. In C. Hitchcock (Ed.), Contemporary Debates in Philosophy of Science (pp. 293–311). Blackwell.

Carruthers, P. (2006). The Architecture of the Mind. Oxford University Press.

Carruthers, P. (2011). Dissociation data. In The Opacity of Mind: An Integrative Theory of Self-Knowledge (pp. 293–324). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199596195.003.0010

Fodor, J. A. (1983). The Modularity of Mind. Cambridge, MA: The MIT Press.

Prinz, J. J. (2006). Is the mind really modular? In R. J. Stainton (Ed.), Contemporary Debates in Cognitive Science (pp. 22–36). Blackwell.

Woodward, J., & Cowie, F. (2004). The mind is not (just) a system of modules shaped (just) by natural selection. In C. Hitchcock (Ed.), Contemporary Debates in Philosophy of Science (pp. 312–334). Blackwell.