Socio-technical systems are complex adaptive systems. Therefore, to initiate and steer system innovations, we must understand what complex adaptive systems are and how they behave.
Defining complex systems is not an easy task. As a starting point, complex systems are what simple systems are not. The major distinguishing characteristics of simple systems are predictable behaviour, a small number of components with few interactions among them, centralised decision-making and decomposability (Casti, 1986). Through negation of these characteristics, the major characteristics of complex systems can be identified as unpredictable behaviour, a large number of components with many interactions among them, decentralised decision-making and limited or no decomposability. A distinction between complicated and complex systems is also useful here. Cilliers (1998) argues that if a system has a very large number of components but can still be fully analysed, it is complicated rather than complex. A complex system, in contrast to a complicated one, contains intricate sets of non-linear feedback loops, so it can only ever be partially analysed at a time. In this sense a machine of any kind with a large number of parts is complicated, whereas a human being or an ecosystem is complex.
Funtowicz and Ravetz (1994) classify complex systems as ordinary or emergent. They argue that ordinary complex systems tend to remain in dynamic stability until the system is overwhelmed by perturbations, such as direct assaults like fire or invaders. Conversely, in emergent complex systems there is continuous novelty, and these systems cannot be fully explained mechanistically or functionally, since some of their elements possess individuality, intention, purpose, foresight and values. Any system involving society is thus an emergent complex system.
Hjorth and Bagheri (2006) state that complex systems cannot be fragmented without losing their identity and purposefulness. Similarly, Linstone (1999) refers to the general illusion, or misassumption, that we can break complex systems into parts and study these parts in isolation. He calls this 'a crucial assumption of reductionism' (p. 15) and points out that the linearity it implies is not a characteristic of complex systems. Indeed, in complex systems, complexity is determined not by the characteristics of the system's components but by the relationships and interactions between those components (Manson, 2001). The interaction between components is not necessarily physical; it can also take the form of information exchange (Cilliers, 1998). Mant (1997) gives an illustrative example of the irreducibility of complex systems in his frog-and-bicycle analogy. One can dismantle a bicycle, carry out maintenance and reassemble it; the bicycle is still a bicycle and works perfectly. If, however, one removes a part of a frog and keeps on breaking the frog apart, the frog will perform unpredictable adjustments to survive until, at some point, the system (i.e. the frog) tips over into collapse. Therefore, it is not possible to study complex systems meaningfully by breaking them into their components. When there is a need to define system boundaries, this should be done while acknowledging how the part under study relates to the rest of the system.
In addition to irreducibility and emergent behaviour, the other characteristics of complex systems are self-organisation, continuous change, sensitivity to initial conditions, learning, irreducible uncertainty and contextuality (Cilliers, 1998; Gallopín, Funtowicz, O’Connor & Ravetz, 2001; Manson, 2001; Cooke-Davies, Cicmil, Crawford & Richardson, 2007). Complex systems are in general hierarchic, or multi-level: each element is a subsystem, and each system is part of a bigger system (Casti, 1986; Gallopín et al., 2001; Holling, 2001; Gallopín, 2004). Hierarchical structures have adaptive significance (Simon, 1974). This adaptive significance is due not to top-down authoritative control but to the formation of semi-autonomous levels which interact with each other and pass material and/or information on to the higher and slower levels (Holling, 2001).
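Of the characteristics above, sensitivity to initial conditions is perhaps the easiest to make concrete. As a minimal numerical sketch, not drawn from the cited sources, the logistic map (a standard textbook example of non-linear feedback) shows how two nearly identical starting states of a system drift apart until prediction fails:

```python
# Illustrative sketch (not from the cited authors): sensitivity to initial
# conditions, demonstrated with the logistic map, a simple non-linear
# feedback rule x -> r*x*(1-x) that behaves chaotically for r = 4.
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the map from initial state x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # perturb the initial state by one millionth

# The tiny initial difference is amplified step by step until the two
# trajectories bear no resemblance to each other.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {divergence[0]:.1e}, largest gap: {max(divergence):.3f}")
```

The point of the sketch is only that knowing the rule perfectly does not make the system's long-run behaviour predictable, which is why the history and measurement precision of a complex system matter to any analysis of it.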
It is impossible for an analyst to understand a complex system totally and correctly. However, some requirements for analysis can be extracted from the characteristics listed above. First, emergent behaviour, sensitivity to initial conditions and the learning that takes place among system components imply that complex systems are time-dependent. This time-dependency is two-fold: both the history of the system and the particular moment at which the analysis is undertaken will affect the outcome. Since context is important for understanding adaptive systems, and there are multiple levels in a system, an analysis should include more than one level as well as the different perspectives present in the system (Gallopín et al., 2001; Gallopín, 2004). For an effective analysis, the analyst needs to observe the (sub)system being analysed from a vantage point. This vantage point should be at a higher, or preferably meta, level in order to identify a context-specific perspective while still acknowledging the interconnections between the (sub)system being analysed and the rest (Espinosa, Harnden & Walker, 2008).
The three major subsystems of the meta-system (i.e. ecology, economy, society) and most of the sub-systems of these components (e.g. evolutionary processes, market operations, individual animals, companies, etc.) belong to a special category of complex systems known as complex adaptive systems (CAS). The distinguishing feature of CAS is that ‘they interact with their environment and change in response to a change’ (Clayton & Radcliffe, 1996, p. 23). They are resilient; therefore, they ‘can tolerate certain levels of stress or degradation’ (p. 31). As a result, the sustainability of a CAS can be achieved if its adaptive capacity is not destroyed.
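The idea that a CAS tolerates stress only up to the point where its adaptive capacity is destroyed can be caricatured in a few lines of code. This is a hypothetical toy model, not taken from Clayton and Radcliffe; the `capacity` and `recovery` parameters are invented for illustration:

```python
# Illustrative sketch (hypothetical): resilience as bounded tolerance of stress.
# The system absorbs and recovers from perturbations while its adaptive
# capacity lasts, but collapses once cumulative degradation exceeds it.
def run_system(shocks, capacity=5.0, recovery=0.5):
    state, degradation = 1.0, 0.0   # healthy state, no accumulated damage
    for shock in shocks:
        degradation += shock
        if degradation > capacity:  # adaptive capacity destroyed
            return 0.0              # collapse: no further recovery possible
        # each shock depresses the state a little; recovery pulls it back up
        state = min(1.0, state - shock * 0.1 + recovery * 0.1)
    return state

mild = run_system([0.5] * 8)            # cumulative stress stays within capacity
severe = run_system([0.5] * 8 + [2.0])  # one shock too many: capacity exceeded
```

Under mild stress the system keeps absorbing perturbations; the severe run ends in collapse. The sketch mirrors Mant's frog: adjustments mask the damage until the system tips over all at once.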
The sustainability of a single entity depends on, and is determined by, the sustainability of the other components with which that entity interacts. Together all these components form a system; therefore, sustainability can only be achieved through non-reductionist, dynamic systems thinking. The subsystems of a system should be adaptable to changes which occur in the other subsystems and, as a result, in the entire system. The subsystems must co-evolve to render sustainability possible.
The term co-evolution was first coined by Ehrlich and Raven in 1964 to explain the mutual evolutionary processes of plants and butterflies (Ehrlich & Raven, 1964). Even though the term first emerged in evolutionary biology, it spread to other, especially interdisciplinary, domains studying the interactions between natural and human-made systems (Norgaard, 1984, 1995; Winder, McIntosh & Jeffrey, 2005; Rammel, Stagl & Wilfing, 2007). Other domains which use the co-evolutionary approach to explain, analyse and manage interacting natural and social systems include technology studies, organisational science, environmental and resource management, ecological economics and policy studies (Rammel et al., 2007; Kallis, 2007a).
It is important here to note that, despite the many similarities between biological evolution and social, cultural, technological and economic change, there are also differences (Rammel & Van Den Bergh, 2003; Kallis, 2007b). In the wider context of sustainable development, co-evolutionary change does not necessarily happen on a reactive basis, as it generally does in ecosystems. Rather, at the socio-economic or socio-technical level, it can also be deliberately pursued, at both the individual and the collective level, by system components in accordance with changing system conditions (Holling, 2001; Cairns Jr, 2007; Kemp, Loorbach & Rotmans, 2007). Co-evolution is reflexive and refers to the mutual change of all system components. During this mutual change, one component may or may not dictate a change upon the other(s).
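The notion of co-evolution as mutual change, where no single component unilaterally sets the outcome, can be sketched as two coupled difference equations. This is a deliberately simplified, hypothetical model (symmetric mutual adjustment, not a faithful evolutionary dynamic from the cited literature):

```python
# Illustrative sketch (hypothetical): co-evolution as mutual adaptation.
# Two traits each adjust toward a target set by the other's current state,
# so neither component alone dictates where the system ends up.
def coevolve(x, y, coupling=0.1, steps=200):
    """Each step, both traits move a small fraction toward the other."""
    for _ in range(steps):
        # simultaneous update: both components change in response to each other
        x, y = x + coupling * (y - x), y + coupling * (x - y)
    return x, y

x_final, y_final = coevolve(x=0.0, y=1.0)
print(f"final traits: x={x_final:.3f}, y={y_final:.3f}")
```

With symmetric coupling the two traits settle roughly midway between their starting values: both have changed, and the outcome is a property of the interaction rather than of either component. Making the coupling asymmetric would model the case where one component does dictate more of the change.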
References used in this post:
Cairns Jr, J. (2007). Sustainable co-evolution. International Journal of Sustainable Development and World Ecology, 14(1), 103-108.
Casti, J. L. (1986). On system complexity: identification, measurement and management. In J. L. Casti & A. Karlquist (Eds.), Complexity, Language and Life: Mathematical Approaches (pp. 146-173). Berlin: Springer-Verlag.
Cilliers, P. (1998). Complexity and postmodernism: understanding complex systems. London; New York: Routledge.
Clayton, A. M. H., & Radcliffe, N. J. (1996). Sustainability: a systems approach. London: Earthscan.
Cooke-Davies, T., Cicmil, S., Crawford, L., & Richardson, K. (2007). We’re not in Kansas Anymore, Toto: Mapping the Strange Landscape of Complexity Theory, and Its Relationship to Project Management. Project Management Journal, 38(2), 50-61.
Ehrlich, P. R., & Raven, P. H. (1964). Butterflies and Plants: A Study in Coevolution. Evolution, 18(4), 586-608.
Espinosa, A., Harnden, R., & Walker, J. (2008). A complexity approach to sustainability – Stafford Beer revisited. European Journal of Operational Research, 187(2), 636-651.
Funtowicz, S., & Ravetz, J. R. (1994). Emergent complex systems. Futures, 26(6), 568-582.
Gallopín, G. C., Funtowicz, S., O’Connor, M., & Ravetz, J. (2001). Science for the twenty-first century: From social contract to the scientific core. International Social Science Journal, 53(168), 219-229.
Gallopín, G. (2004). Sustainable Development: Epistemological Challenges to Science and Technology. presented at the meeting of the Workshop on Sustainable Development: Epistemological Challenges to Science and Technology, Santiago, Chile.
Hjorth, P., & Bagheri, A. (2006). Navigating towards sustainable development: A system dynamics approach. Futures, 38(1), 74-92.
Holling, C. S. (2001). Understanding the complexity of economic, ecological, and social systems. Ecosystems, 4(5), 390-405.
Kallis, G. (2007a). Socio-environmental co-evolution: some ideas for an analytical approach. International Journal of Sustainable Development and World Ecology, 14, 4-13.
Kallis, G. (2007b). When is it coevolution? Ecological Economics, 62(1), 1-6.
Kemp, R., Loorbach, D., & Rotmans, J. (2007). Transition management as a model for managing processes of co-evolution towards sustainable development. International Journal of Sustainable Development and World Ecology, 14(1), 78-91.
Linstone, H. A. (1999). Decision Making for Technology Executives: Using Multiple Perspectives to Improve Performance. Norwood, Mass.: Artech House.
Manson, S. M. (2001). Simplifying complexity: A review of complexity theory. Geoforum, 32(3), 405-414.
Mant, A. (1997). Intelligent leadership. St. Leonards, N.S.W.: Allen & Unwin.
Norgaard, R. B. (1984). Coevolutionary Development Potential. Land Economics, 60(2), 160-173.
Norgaard, R. B. (1995). Development Betrayed: The End of Progress and a Coevolutionary Revisioning of the Future. London; New York: Routledge.
Rammel, C., Stagl, S., & Wilfing, H. (2007). Managing complex adaptive systems — A co-evolutionary perspective on natural resource management. Ecological Economics, 63(1), 9-21.
Rammel, C., & Van Den Bergh, J. C. J. M. (2003). Evolutionary policies for sustainable development: Adaptive flexibility and risk minimising. Ecological Economics, 47(2-3), 121-133.
Simon, H. A. (1974). The organization of complex systems. In H. H. Pattee (Ed.), Hierarchy theory: the challenge of complex systems (pp. 3-27). New York: Braziller.
Winder, N., McIntosh, B. S., & Jeffrey, P. (2005). The origin, diagnostic attributes and practical application of co-evolutionary theory. Ecological Economics, 54(4), 347-361.