FRONTIERS OF ECONOMICS IN THE POST-NEOCLASSICAL ERA
J. Barkley Rosser, Jr.

James Madison University



rosserjb@jmu.edu

June, 2008




I. Introduction

The most important fact about 21st century economics is that it is the post-neoclassical era in terms of the frontiers of economic research. One can still find orthodox, neoclassical theory in most textbooks, especially those at the upper undergraduate level. However, this no longer reflects the reality of how economists at the cutting edge of the field are thinking, including those who are in the mainstream of the profession. The intellectual orthodoxy of neoclassicism has died (Colander, 2000), and the current thrust of research at the cutting edge of the frontier is the search for the appropriate alternative to replace it.

It is useful at this point to distinguish between certain categories, following Colander et al. (2004), notably “orthodoxy,” “mainstream,” and “heterodoxy.” In their view, now gaining acceptance among many observers (Bateman, 2008), orthodoxy is an intellectual category, an established structure of ideas or a paradigm, with the neoclassical orthodoxy being characterized by the trinity of rationality, greed, and equilibrium. Mainstream is a sociological category, the set of leading individuals in the profession who control the leading universities, research institutions, journals, and other centers of power and control. However, heterodoxy is both, with the heterodox both opposing the dominant intellectual orthodoxy and also operating sociologically on the fringes or margins of the profession, alienated from the mainstream leaders, sometimes even to the point of persecution or suppression.1 The crucial point here is that during a period of upheaval when an established orthodoxy has died and is being replaced, the mainstream can and will step aside from it and pursue this effort, not being tied to any particular paradigm necessarily. During such periods, they may be open to drawing on ideas from heterodox schools or approaches, if not always fully acknowledging doing so, and the cutting edge or frontier of the profession may well be near the interface between the mainstream and the heterodox.2

Now it must be acknowledged that many economists, quite possibly most economists, do not recognize that neoclassical economics has died. However, I argue that the case is easily made that it has for those at the cutting edge of the frontier, even if no single new paradigm has clearly arisen to replace it. It is this latter fact that probably serves most to continue the illusion for many that neoclassical economics has not died (along with its persistence in lower level textbooks, where change is slow). Another issue involves both when it died and how.

Colander (2000) pushes the date too far into the past for my taste, arguing in effect that neoclassical economics was simply the marginalist economics that became prominent in the German, French, and English language traditions starting in the 1870s (with the Germans having been ahead of the others), and which culminated with the publication of Samuelson’s (1947) Foundations of Economic Analysis. In this view, such subsequent developments as Nash’s (1951) non-cooperative game theory and the closely related Arrow-Debreu (1954) general equilibrium theory represented a move beyond neoclassicism proper to something else. Certainly the case can be made for this view, and it is arguably more consistent from a history of economics perspective. However, for most economists the efforts carried out at such institutions as MIT to marry this earlier marginalist Samuelsonianism with the later developments were successful, as epitomized by such widely used graduate textbooks as Varian (1992). In this latter view, the high-water mark of neoclassical orthodoxy came in the 1970s and early 1980s with the spread in macroeconomics of New Classical micro-founded models assuming homogeneous (or at least representative) agents possessing rational expectations. These models assumed Walrasian general equilibrium to hold,3 with the macro-model simply reflecting writ large the micro-solution for this representative agent, thereby overcoming the aggregation problems long posed by such observers as Keynes (1936) with the well-known fallacy of composition.4 In this view it would be a series of external shocks and events that would break this orthodoxy down, ranging from the unexpected stock market crash of 1987 through the collapse of the Soviet Union and its command socialist empire to a further series of financial crises in the late 1990s and at the turn of the century. These events were accompanied by a series of intellectual discoveries and breakthroughs that would undermine this orthodoxy, including, perhaps most importantly, the repeated results and findings in experimental economics that served ultimately to undermine the all-knowing, rational individual agent, even if the difficult aggregation problems could be overcome.

While many experiments would be done that would contribute to the undermining of the orthodoxy of rationality, greed, and equilibrium (or at least the first two), arguably the most stunning and influential was the ultimatum game experiment by Güth et al. (1982). In this experiment, an agent has a sum of money to offer to divide with another agent. The first agent suggests a division. If the second agent accepts it, they each get the agreed-upon amounts. However, if the second agent rejects the (ultimatum) offer, then neither gets anything. The Nash equilibrium is for the first agent to offer the minimum possible amount, and for the second agent to accept the offer.5 In fact, it is now well established that smaller offers are generally rejected, that in the US the median offer is around 40% of the total, with the modal offer being an even split, and that most offers of less than 20% are rejected, supposedly “irrationally.” Explaining this result has been the foundation for an enormous amount of ongoing research that has brought other disciplines, such as psychology, into economics.
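
To make the logic concrete, the following sketch simulates a stylized version of the game (this is an illustrative construction, not the Güth et al. design; the offer and rejection-threshold distributions are assumptions loosely based on the figures just cited), contrasting the equilibrium prediction with responders who reject offers they regard as unfair.

    # Stylized ultimatum game: proposers offer a share of a fixed pie, and
    # responders reject offers below a personal fairness threshold.
    # Illustrative only; the distributions are assumptions, not experimental data.
    import random

    PIE = 10.0

    def play(offer_share, rejection_threshold):
        """Return (proposer payoff, responder payoff) for one play of the game."""
        if offer_share >= rejection_threshold:
            return PIE * (1 - offer_share), PIE * offer_share
        return 0.0, 0.0          # rejection: neither player gets anything

    random.seed(0)
    # Equilibrium benchmark: a minimal offer that is always accepted.
    print(play(offer_share=0.01, rejection_threshold=0.0))

    # Behavioral version: offers cluster near 40%, thresholds near 20%.
    results = [play(random.gauss(0.4, 0.1), random.gauss(0.2, 0.05))
               for _ in range(10_000)]
    rejections = sum(1 for p, r in results if p == r == 0.0)
    print(f"rejection rate: {rejections / len(results):.1%}")
    print(f"mean proposer payoff: {sum(p for p, _ in results) / len(results):.2f}")

With these assumed distributions, a noticeable fraction of offers is rejected and proposers end up with far less than the near-total share the equilibrium benchmark promises them.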

Another strand of economics that had been developing through the latter part of the 20th century was complexity economics (Rosser, 1999), which also implied serious limits to the ability of individuals to rationally assess the economic environment.6 The combination of these results has led to the revival of an older idea of economics, bounded rationality, first proposed by Herbert Simon (1957). This combination has also led to a broader trend that has emerged strongly since the turn of the century, an ever-increasing influence of other disciplines in economics, ranging from other social sciences such as psychology and sociology to natural sciences such as physics and biology, as well as the emerging computer sciences. All of this has led to a push for higher-order transdisciplinary approaches.

The rest of this paper will focus on these three main themes with some sub-themes in each section. First, we shall consider more seriously developments in experimental and behavioral economics, with neuroeconomics being an especially recent and promising area arising from these two. Second, we shall look at various ideas associated with the complexity movement, such as agent-based modeling, computational economics, and econophysics. Implications of this for a successor to the New Classical approach to macroeconomics will be examined. Then, we shall consider more biologically based approaches drawing on evolution and ecology, suggesting a revival of sorts of institutional economics, but in new forms as major areas of focus at the current cutting edge of the frontiers of economics. Finally, there will be a brief consideration of the role of the older heterodox schools of thought in the newer developments.


II. Experimental and Behavioral Economic Research

A. Market Mechanisms and a Partial Defense of Orthodoxy

While we have already noted that there are many results coming out of economic experiments that question the traditional vision of the neoclassical homo oeconomicus, some strands of experimental economic research have produced results that can be argued to support the ability of properly structured free markets to reach reasonably efficient equilibria despite these apparent findings about individuals. The key person arguing this point has been the “father of experimental economics,” Vernon Smith, ever since his finding in 1962 that double auction markets converge very rapidly to equilibria, a result reinforced and found to be efficient and robust by many later studies (Friedman, 1974). Smith has since become a strong pro-free-market advocate and has been involved in setting up many actual auction markets, and the double auction has spread widely into many financial markets (Smith, 2008). Walrasian general equilibrium may have serious problems, but individual markets, even ones linked with another market or two, can be set up to function quite well in terms of equilibrating.

Even as Smith has argued this point for decades, he has not argued against the findings of all sorts of irrationalities or peculiarities in individual behavior, nor has he argued that all markets everywhere behave well or efficiently. Thus, he was one of the first to show that speculative bubbles and crashes happen rather easily in experimental markets, even if learning tends to reduce them (Smith et al., 1988). This result was found to still hold even when experimental subjects had to provide their own money for the market activity (Porter and Smith, 1995). Indeed, even though double auctions work well, many real-world auctions are not easy to set up to follow them, and while Smith and his allies have argued for simpler rules for large-scale auctions (Banks et al., 2003), many have not followed this advice, such as in setting up the US spectrum auctions.

This raises what has been an increasing concern for Smith and his coauthors over time: how to reconcile this apparent irrationality of individual agents with their ability in at least certain kinds of situations to act together in an apparently rational and efficient way. In this he has increasingly fallen back on arguments of Adam Smith and Friedrich Hayek regarding markets as emergent spontaneous orders, and the idea that even though people have bounded rationality, they are able to intuitively operate in market settings to achieve efficient results, even as their conscious minds are not carrying out the necessary calculations. People’s brains and minds have evolved to do these sorts of things, even if unconsciously (Smith, 2008) as part of more generalized reciprocity, which is evolutionarily advantageous.

In this regard, while arguing that boundedly rational people can achieve efficient outcomes in the right institutional setups through intuition or rules of thumb, Smith and his coauthors have directly confronted traditional game theory, which attributes high levels of rationality to individual decisionmakers. Thus, Aumann (1995) has long argued for assuming common knowledge, with its implication that people regularly use backward induction to decide how to behave strategically when interacting with others. Such backward induction depends on believing that others are also thinking ahead and calculating the best possible responses of other agents into the distant future in response to possible actions by oneself. As it is, the experimental evidence suggests that most people do not think very far ahead in such situations (Stahl and Wilson, 1995; Camerer, 2003), and that backward induction is thus largely a purely theoretical construct.
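
To illustrate what backward induction actually demands of players, the following sketch solves a small centipede-style game from its last decision node to its first (the payoff schedule is a hypothetical parameterization chosen purely for illustration; it is not taken from Aumann or from the experimental studies cited).

    # Backward induction in a centipede-style game (hypothetical payoffs, for
    # illustration only).  At stage i the mover may "take" 80% of a pot worth
    # 2**(i+1), leaving 20% to the other player, or "pass", after which the pot
    # doubles and the other player moves.  If nobody ever takes, the last pot
    # is split evenly.
    T = 6  # number of decision stages

    def payoffs_if_take(i):
        pot = 2 ** (i + 1)
        return 0.8 * pot, 0.2 * pot          # (mover, other)

    def solve_by_backward_induction():
        cont_mover = cont_other = 2 ** T / 2  # terminal node: even split
        first_take = None
        for i in reversed(range(T)):
            take_mover, take_other = payoffs_if_take(i)
            # A mover who passes becomes the "other" player in the continuation
            # subgame, so the continuation payoffs swap roles.
            pass_mover, pass_other = cont_other, cont_mover
            if take_mover >= pass_mover:
                cont_mover, cont_other, first_take = take_mover, take_other, i
            else:
                cont_mover, cont_other = pass_mover, pass_other
        return first_take, (cont_mover, cont_other)

    stage, payoffs = solve_by_backward_induction()
    print(f"prediction: take at stage {stage}, payoffs {payoffs}")
    # Prints stage 0 -- whereas experimental subjects typically pass for
    # several rounds before anyone takes.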

The accumulation of this evidence has put game theory on the defensive. Major developments coming out of this have been the emergence of evolutionary game theory (Binmore and Samuelson, 1999), with its greater emphasis on random forces pushing agents into different basins of attraction over time, and of behavioral game theory, which recognizes that agents are not fully rational or perfectly informed (Camerer, 2003).


B. Social Preferences and Other Complications

While there had been a long series of experiments documenting the ubiquity of “anomalies” that undercut the standard model of the rational economic agent, particularly the expected utility model of decisionmaking under risk (Strotz, 1955; Kahneman and Tversky, 1979; Thaler, 1981), it was probably ultimately the ultimatum game experiments mentioned above and the repeated verification of their findings that most seriously drove the nail into the coffin of the greedily individualistic, rational economic agent. People were regularly willing to bear costs to punish each other for violating norms of fairness (Rabin, 1993). This argued for the serious introduction of psychological and sociological ideas into economics, something which had been regularly done before the middle of the 20th century, but which had fallen out of fashion as it was thought that the rational agent model of economics offered the superior paradigm for the social sciences more broadly. In particular Rabin and others argued that people have a “taste for fairness” that must be accounted for by economists and which goes against the standard model of simple greedy maximization.

Ironically, this has set up a somewhat curious debate over how best to deal with this matter. On the one hand is a group led, more or less, by Rabin that argues against the conventional approach by asserting the reality of the taste for fairness, which may somehow be viewed as a kind of selfless altruism.7 However, the approach taken by Rabin is in another sense highly conventional: he assumes a conventional sort of utility function that an individual maximizes subject to a budget constraint, with that utility function simply having an additional argument, fairness, which must then be traded off against other more standard arguments in the utility maximization calculus. In this regard, Rabin in his interview in Colander et al. (2004, p. 151) expressed the more conventional side of his views.

“In fact, I have various fears about teaching psychological economics. One is of attracting graduate students who are just hostile to economics. Another is all the people who want to use evolution and other approaches to explain departures from classical models rather than using evidence of departures to do better economics.”

This last sentence describes those taking the alternative view, which would include Vernon Smith (2008) as well as some from the evolutionary game theoretic school such as Binmore, who, as expressed in Colander et al. (2004, pp. 65-66), complains of people assuming “exotic preferences” not defensible by evolutionary processes. While Binmore tends to defend somewhat more conventional approaches, as with his questioning of the robustness of the ultimatum game findings, Smith pushes a methodologically more unconventional position, rejecting standard utility maximization modeling, indeed rejecting the assumption of a utility function at all. The evidence of people’s lack of standard calculating and conscious optimization is too strong to be ignored. In the end, this group argues for assuming an ultimately more selfish motive, that of some form of reciprocity, even if it is distant and indirect. They argue that this is defensible as having arisen from evolutionary processes, even if the mind makes the necessary calculations in the sort of unconscious way described above. It is an ultimately “selfish” reciprocity motive that underlies ultimatum game results, not some vaguely selfless altruism or inequity aversion.8

The response of those more willing to allow for such motives has spawned an enormous amount of research, with Fehr and Schmidt (1999) and Bolton and Ockenfels (2000) proposing ways to distinguish reciprocity from inequity aversion in data. These proposals have proven controversial, and considerable debate has ensued regarding whether or not these methods are accurate (Charness and Rabin, 2002; Engelmann and Tyran, 2005). In any case, some of those supporting this approach see more generalized forms of reciprocity, or strong reciprocity, as indeed being evolutionarily founded (Gintis et al., 2004), drawing on the work of anthropologists as well as psychologists.9 Such forms have come to be labeled social preferences.
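
For concreteness, the inequity-aversion side of this debate can be sketched in the two-player case as follows (this follows the general form of the Fehr and Schmidt (1999) utility function, though the particular parameter values are arbitrary illustrative assumptions).

    # Two-player inequity-aversion utility in the general form of Fehr-Schmidt
    # (1999); the parameter values here are arbitrary illustrative assumptions.
    # alpha weights disadvantageous inequity, beta advantageous inequity.
    def inequity_averse_utility(own, other, alpha=0.8, beta=0.4):
        envy = max(other - own, 0.0)     # being behind hurts ...
        guilt = max(own - other, 0.0)    # ... more than being ahead (beta <= alpha)
        return own - alpha * envy - beta * guilt

    # A responder facing a $10 pie: a $2 offer yields negative utility for this
    # agent, so rejecting (utility 0) is the preferred choice.
    print(inequity_averse_utility(2.0, 8.0))    # -2.8
    print(inequity_averse_utility(0.0, 0.0))    #  0.0, the payoff after rejection

An agent of this type will reject a sufficiently uneven ultimatum offer, since the disutility of disadvantageous inequity outweighs the small monetary gain; reciprocity-based accounts seek to generate the same behavior without building fairness directly into the utility function.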

C. Neuroeconomics and the Struggle of the Laboratory versus the Field

In laboratory experiments these debates have moved more deeply into the brain itself, as magnetic resonance imaging (MRI) has come to be used in economic experiments to see which parts of the brain are stimulated, and how, when agents are engaging in various kinds of decisionmaking or actions (Zak, 2004; Camerer et al., 2005). One of the first such studies in economics was that of McCabe et al. (2001) on cooperative versus non-cooperative behavior in the prisoner’s dilemma. They found that very different parts of the brain were stimulated when agents were engaging in one or the other. Interestingly, whereas Nash originally posited that the selfishness of not cooperating is “rational,” it is when agents are cooperating that they appear to be using the prefrontal cortex, the part of the brain more usually identified with “higher” or more “rational” thought by psychologists.

Given that such hard-wiring of the brain is almost certainly the result of deep evolutionary processes, the debates over reciprocity and altruism have more recently moved into the laboratories of the neuroeconomists. Thus, Fehr and Gächter (2002) and Fehr et al. (2002) argue strongly that strong reciprocity involves a pleasure reward in the brain associated with punishing those who violate social norms. Zak (2005) has identified the development of trust as being associated with the chemical oxytocin, which is released in the brain and provides a sensation of pleasure. Zak (2008) and his associates have pushed this argument further to say that oxytocin release and the reinforcement of socially virtuous behavior serve as the foundation of broader morality and social cohesion within market economies.

Left unresolved here is the degree to which we can associate stimulation of specific areas of the brain with particular psychological states or perceptions. Furthermore, there also remains the issue of whether or not what is found in the laboratory really reflects what goes on in reality, or “in the field.” This has led to deep debates over the use of laboratory experiments versus so-called field experiments (Levitt and List, 2007). Advocates of the former stress the ability to more scientifically pin down circumstances and eliminate extra “noise,” even as advocates of the latter argue that in laboratories important framing effects arising from the influence of the experimenters themselves can bias the results. A compromise would seem to be that the two approaches are complementary, but this is a matter of ongoing dispute, just as is the degree to which the results of neuroeconomics studies of which parts of the brain are associated with what kinds of decisions really carry over into reality. In any case, all of those involved in experimental economics hope to achieve a more truly scientific approach to the study and understanding of human behavior.

III. The Complex Implications of Economic Complexity


A. The Complexity Problem

While both the behavioralist and complexity approaches profoundly undermine the fully rational, informed, and selfish homo oeconomicus of neoclassical economics, especially in its stricter Walrasian form, they have developed and operated largely independently. However, they had a common link at the foundation of behavioral economics in Herbert Simon, already mentioned as the inventor of the bounded rationality hypothesis. He was also an early developer of complexity theory, especially its hierarchical version (Simon, 1962). Simon saw the deep link between complexity and behavioralism through the role of complexity as a foundation for bounded rationality. It should be kept in mind that Simon’s later innovative research efforts in pursuit of the ideal of artificial intelligence were motivated deeply by his awareness of bounded rationality and its foundation in the inevitable and unavoidable reality of complexity.

Rosser (2008a) poses three levels of thinking about the nature of complexity. At the highest level is meta-complexity, the broadest conception, which allows for the 45 definitions of Seth Lloyd (Horgan, 1997, p. 303)10 as well as others. Next down is the dynamic definition provided in Rosser (1999), taken from Day (1994). This “big tent” definition is that a system is (dynamically) complex if it deterministically and endogenously exhibits irregular dynamics that do not converge on a point, a limit cycle, or a smooth growth or decline path (thus ruling out irregularities due to the exogenous shock noise that drives real business cycle models). This “big tent” definition includes “the four C’s” of cybernetics, catastrophe theory, chaos theory, and the “small tent” complexity of heterogeneous interacting agent models. It is this last type that constitutes our lowest level and is what most people think of when they think of “complexity economics.” This last approach has often been associated with the Santa Fe Institute and such computer simulation methods as cellular automata (Wolfram, 1984) and artificial life programs (Langton, 1990; Tesfatsion, 2006).
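
A minimal illustration of the dynamic definition (not from the works just cited) is the logistic map: a purely deterministic rule with no exogenous shocks that nonetheless generates irregular trajectories converging on neither a point, a limit cycle, nor a smooth growth or decline path.

    # Deterministic, endogenous, irregular dynamics: the logistic map
    # x_{t+1} = a * x_t * (1 - x_t).  With a = 4.0 there is no noise at all,
    # yet the trajectory never settles onto a point or a cycle.
    def logistic_trajectory(a=4.0, x0=0.2, steps=20):
        x, path = x0, []
        for _ in range(steps):
            x = a * x * (1 - x)
            path.append(round(x, 4))
        return path

    print(logistic_trajectory())               # an irregular, bounded series
    print(logistic_trajectory(x0=0.2000001))   # a tiny change in x0 gives a very
                                               # different path: sensitive
                                               # dependence on initial conditions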

Of the other definitions of complexity that one can find at the meta-complexity level, probably the most important in terms of current research are those based on computability (Markose, 2005; Velupillai, 2005). There are a variety of these, most of them ultimately drawing on the information theory of Shannon (1948) and its further development by Kolmogorov (1965), Chaitin (1987), and others. Although there are competing versions of these definitions and measures, they include such well-studied divisions as those between problems that are solvable in polynomial time, those solvable only in exponential time, and those that cannot be solved at all in finite time due to halting problems (Blum et al., 1998). All of these in some way or another end up being related to the minimum length of the program required to compute a solution to a problem or system, and while there are variations, they provide more specifically measurable degrees of complexity. It is this latter fact (along with the increasing spread of computational economics in various forms) that has added to the more recent popularity of this sort of definition. Although this approach now has many followers, it was probably first applied in economics by Albin (1982), with full discussion in Albin with Foley (1998).



B. The Agent-Based Approach

This is one of the most productive and promising areas of current research at the frontiers of economics, with Tesfatsion and Judd (2006) providing broad and in-depth coverage across many areas of economics and Delli Gatti et al. (2008) providing excellent coverage for macroeconomics more particularly. Among the more useful of the surveys in Tesfatsion and Judd are those on agent-based models in economics and computational finance by Hommes (2006) and by LeBaron (2006), and also the one by Duffy (2006), which examines links between the heterogeneous agent-based approaches and the behavioral and experimental approaches we have discussed above. An early approach that has been widely imitated for modeling broader societal development from agent-based modeling is Epstein and Axtell (1996).

General theoretical modeling of interacting agent-based systems has been done by Brock and Hommes (1997) and Brock and Durlauf (2001).11 These papers provided a foundational approach based on mean-field dynamics drawn from statistical physics. Central to the analysis of these nonlinear systems is consideration of the structure of bifurcation sets and of the various forms of complex dynamics that arise as the system crosses bifurcation points. These dynamics extend well beyond chaotic dynamics to involve multiple basins of attraction and fractal boundaries between these basins.12 The crucial control parameters determining the bifurcation structures in this set of models turn out to be the degree of interaction between the agents (their willingness to herd) and their willingness to change strategies, with more complex dynamics more likely as the values of these parameters increase. Brock et al. (2008) show how a model based on this framework can be used to analyze the destabilization of financial markets as hedging instruments increase, a result counter to that of standard financial economics theory. Gallegati et al. (2008) also use a variation of this framework to study financial market dynamics with the imposition of a Minsky (1972) type of financial constraint, which allows them to show the phenomenon of a “period of financial distress” between the peak and the crash of a speculative bubble, the most common pattern seen in historical bubbles (Kindleberger, 2000, Appendix B), even though again such a pattern has not been modeled by conventional economic theory. Föllmer et al. (2005) are able to get somewhat similar results with a related agent-based model.
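
The discrete-choice switching mechanism at the heart of this framework can be conveyed in a few lines (the sketch below is a drastically simplified, hypothetical parameterization in the spirit of Brock and Hommes, not their actual specification; the full model uses risk-adjusted realized profits, memory, and predictor costs as fitness, which is what generates the bifurcations and complex dynamics described above).

    # A stripped-down heterogeneous agent market in the spirit of Brock-Hommes
    # (illustrative, not their specification).  x is the price deviation from
    # fundamental; fundamentalists forecast reversion to zero, trend followers
    # extrapolate, and agents switch between rules via a logit rule whose
    # "intensity of choice" beta is the key control parameter.
    import math

    def simulate(beta, g=1.2, R=1.01, steps=100):
        x = [0.1, 0.12]          # initial price deviations
        n_trend = 0.5            # initial share of trend followers
        shares = [n_trend]
        for t in range(2, steps):
            x_new = (n_trend * g * x[-1] + (1 - n_trend) * 0.0) / R
            # Fitness of each rule: negative squared error of its last forecast.
            fit_trend = -((x_new - g * x[-2]) ** 2)
            fit_fund = -(x_new ** 2)
            m = max(fit_trend, fit_fund)            # guard against underflow
            e_trend = math.exp(beta * (fit_trend - m))
            e_fund = math.exp(beta * (fit_fund - m))
            n_trend = e_trend / (e_trend + e_fund)  # logit (discrete choice) rule
            x.append(x_new)
            shares.append(n_trend)
        return x, shares

    for beta in (0.5, 5.0, 50.0):   # raising the intensity of choice
        _, shares = simulate(beta)
        print(beta, round(min(shares), 3), round(max(shares), 3))

The point of the sketch is simply to expose the role of the intensity-of-choice parameter: the larger it is, the more sharply the population herds onto whichever predictor has recently performed better.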

Agent-based models have also been used to model more specific market dynamics not involving finance. Thus, the evolving structures of loyalty in a market have been studied by Kirman and Vriend (2001). How market structures relate to technological change has also been a major area of study. Fagiolo and Dosi (2003) have studied innovation dynamics on a two-dimensional lattice with a variety of effects and processes. Silverberg and Verspagen (2005) have looked at the diffusion process in more detail in a variety of technology spaces. Dawid and Reimann (2004) have used genetic algorithms to examine innovation in relation to product life cycles.

A major thrust of many agent-based models has been to study learning dynamics. This is an area where the agent-based complexity approach has intersected with the experimental behavioral approach, with experiments increasingly involving subjects interacting with computerized systems in repeated games or market interactions in which learning takes place. Probably the first study in this line was Roth and Murnighan (1978), who had subjects playing repeated Prisoners’ Dilemma games against computerized opponents. Arifovic (1996) had experimental subjects using genetic algorithms to learn about foreign exchange rate dynamics. She would later find (Arifovic, 2001) that boundedly rational agents could do better than fully rational agents in a complex context. Bullard and Duffy (2001) studied how learning dynamics can lead to excess volatility, and Brenner and Witt (2003) further studied game dynamics with learning. Nyarko and Schotter (2002) saw how eliciting beliefs could enhance learning in laboratory subjects. Perhaps the ultimate result involves zero-intelligence computerized agents that nevertheless “learn” in a certain sense to move toward a double auction equilibrium (Gode and Sunder, 1993). This result can be seen as underpinning the spread of completely computerized exchanges without human actors, which fully make manifest the prediction of Mirowski (2007) that markets simply are algorithms that evolve to become what he labels markomata.
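
The Gode and Sunder result is easily conveyed in a few lines (the following is a simplified reconstruction with random pairwise matching rather than their order-book mechanism, and the parameters are illustrative): budget-constrained “zero-intelligence” buyers bid randomly below their valuations, sellers ask randomly above their costs, and trades occur whenever a bid crosses an ask, yet a large share of the maximum available surplus is typically realized.

    # Zero-intelligence traders in a simplified double auction, in the spirit of
    # Gode and Sunder (1993); a stylized reconstruction, not their implementation.
    # Buyers bid uniformly below their valuations, sellers ask uniformly above
    # their costs, and a randomly paired bid and ask trade whenever they cross.
    import random

    random.seed(1)
    N = 100
    values = sorted([random.uniform(0, 1) for _ in range(N)], reverse=True)  # buyers
    costs = sorted([random.uniform(0, 1) for _ in range(N)])                 # sellers

    # Maximum possible surplus: trade every unit whose value exceeds its cost.
    max_surplus = sum(v - c for v, c in zip(values, costs) if v > c)

    buyers, sellers, realized = list(values), list(costs), 0.0
    for _ in range(5000):                              # trading rounds
        if not buyers or not sellers:
            break
        b, s = random.randrange(len(buyers)), random.randrange(len(sellers))
        bid = random.uniform(0, buyers[b])             # never bid above own value
        ask = random.uniform(sellers[s], 1)            # never ask below own cost
        if bid >= ask:                                 # crossing quotes trade
            realized += buyers[b] - sellers[s]         # surplus from this trade
            buyers.pop(b)
            sellers.pop(s)

    print(f"allocative efficiency: {realized / max_surplus:.1%}")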

Finally, another area of current interest for studying agent-based models has been the emerging area of network analysis. This work, with close links to sociology, derives largely from the study of Watts and Strogatz (1998), but is now burgeoning widely. Kirman and Vriend (2001), mentioned above, is an example in terms of networks of market relations. Kranton and Minehart (2001) also study networks in market relations. A general finding of this literature is that clusterings occur in patterns that are neither completely random nor fully concentrated, as in the famous “small worlds” phenomenon, sometimes described by the claim that no two people in the world are separated by more than six degrees of separation.
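
The small-world property itself is easy to demonstrate (the sketch below uses the networkx library’s implementation of the Watts-Strogatz construction with arbitrary illustrative parameters): rewiring even a small fraction of the links of a regular ring lattice sharply shortens average path lengths while leaving clustering comparatively high.

    # Watts-Strogatz small-world demonstration using networkx (illustrative
    # parameters).  A little random rewiring of a regular ring lattice sharply
    # shortens average path lengths while clustering remains comparatively high.
    import networkx as nx

    n, k = 500, 10                      # 500 nodes, each linked to 10 neighbours
    for p in (0.0, 0.01, 0.1, 1.0):     # rewiring probability
        g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
        print(f"p={p:<5} average path length={nx.average_shortest_path_length(g):5.2f}"
              f"  clustering={nx.average_clustering(g):.3f}")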

C. Econophysics

A term coined by H. Eugene Stanley in the mid-1990s, econophysics has been described by Mantegna and Stanley (2000, vii-ix) as the “multidisciplinary field…that denotes the activities of physicists who are working on economic problems to test a variety of new conceptual approaches deriving from the physical sciences.” This sociologically oriented definition emphasizes who is doing it more than what it is. In practice much emphasis has been on looking at data first and then trying to find models that fit the data, in contrast with the usual economics approach of assuming that the standard theory is generally correct and then applying it to data to see if it or the data can be sufficiently tweaked to find a matchup. A major focus of much of the econophysics research has been upon power law distributions, which appear as linear relationships between the logarithms of the variables involved. In terms of returns in financial markets, such power laws are associated with the presence of more extreme events than would be predicted if the distribution were Gaussian normal, so that one observes excess kurtosis or “fat tails.” Many of the models that the econophysicists bring to bear can be seen as variations on heterogeneous agent models, as many are drawn from statistical mechanics models of interacting particles. They are thus linked to the broader “small tent” complexity approach that emphasizes such interacting heterogeneous agent models. The most fervent defenders of the approach (McCauley, 2004) argue for its superiority over economic models because of the latter’s lack of invariance laws.
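
The basic diagnostic can be sketched as follows (with synthetic data rather than actual market returns; the tail exponent and sample sizes are arbitrary illustrative choices): for a Pareto, or power law, tail, the logarithm of the rank is approximately linear in the logarithm of the size, with the slope recovering the tail exponent, whereas a Gaussian sample shows no such straight line and produces far less extreme observations.

    # Power law (Pareto) tails versus Gaussian tails on synthetic data -- no real
    # financial returns are used here.  For a Pareto tail, log(rank) is roughly
    # linear in log(size), and the slope of that line estimates the tail exponent.
    import math
    import random

    random.seed(0)
    alpha = 1.5
    pareto = sorted((random.paretovariate(alpha) for _ in range(10_000)), reverse=True)
    gauss = sorted((abs(random.gauss(0, 1)) for _ in range(10_000)), reverse=True)

    def tail_slope(sizes, top=1000):
        """OLS slope of log(rank) on log(size) over the largest `top` observations."""
        xs = [math.log(s) for s in sizes[:top]]
        ys = [math.log(r + 1) for r in range(top)]
        mx, my = sum(xs) / top, sum(ys) / top
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    print("Pareto tail slope  :", round(tail_slope(pareto), 2))  # roughly -alpha
    print("Gaussian tail slope:", round(tail_slope(gauss), 2))   # steeper, and the
                                                                 # log-log plot is curved
    print("largest draws:", round(pareto[0], 1), "(Pareto) vs",
          round(gauss[0], 1), "(Gaussian)")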

Prominent applications in financial models include Bouchaud and Cont (2002), Farmer and Joshi (2002), and Sornette (2003). Wealth distributions have also been found to follow power law distributions (Levy and Solomon, 1997), although income distributions appear to do so only at their upper ends (Drăgulescu and Yakovenko, 2001; Chatterjee et al., 2005).13 City size distributions have also been estimated by such means (Gabaix, 1999), as have firm sizes (Axtell, 2001).

A curious aspect of this is that many of these ideas were first developed by economists, whereas some of the more conventional ideas used by standard economic theory originated with physicists, an old argument of Mirowski (1989). Thus the first to suggest the use of power laws for studying income distribution was Pareto (1897). Whereas the idea of normally distributed financial returns was initiated by the mathematician Bachelier (1900), this would become a standard idea in physics as Brownian motion, and the physicist Osborne (1959) would later suggest its use in financial economics, where it then became standard. It was the mathematician Mandelbrot (1963) who first suggested applying power laws to asset returns. It was the linguist Zipf (1941) who would first suggest that city sizes might be distributed according to a power law, and Ijiri and Simon (1977) did so for firm sizes a quarter of a century before Simon’s student Axtell (2001) would confirm this finding. Furthermore, the mathematical economist Föllmer (1974) suggested applying statistical mechanics models to the study of markets well before the physicists did, and the economist Duncan Foley (1993) applied entropy theory to define statistical distributions of prices as a new concept of equilibrium to replace the neoclassical one.

Given this tangled history it is not surprising that econophysics has engendered controversy. Gallegati et al. (2006) criticized much of this work as not properly citing previous work by economists, as using shoddy statistical methods, as asserting universal laws where none apply, and as lacking theoretical models or explanations for its findings. Rosser (2008b) argues that most of these criticisms had some merit, but that they are being, or can be, overcome by having economists and physicists work together, with Foley (1993) providing an example of a theoretical model drawn from physics that is of relevance for explaining economic phenomena. However, Lux (2009) warns that physicists may be led astray if they work with economists whose ideas are too conventional, with some econophysicists embarrassingly making incorrect public forecasts based on inappropriate models drawn from economics.

D. Econobiology

In contrast to econophysics, econobiology is not a self-identified entity created by its practitioners but rather a term of disapprobation assigned by critics such as McCauley (2004, chap. 9), who sees biology as lacking invariance laws in the same way that economics does, and therefore as unable to be a real science in the way that physics is (and econophysics might be). However, we can identify at least three sub-areas of what might be called econobiology that have their own distinct identities: evolutionary economics, bioeconomics, and ecological economics.

Of these the first is certainly the oldest in terms of identity, with old institutional economists taking up the term and the identity (Veblen, 1898), and with economics and the theory of evolution having had a complicated interrelationship throughout much of the nineteenth century, as Malthus influenced Darwin, who in turn influenced both Marx and Marshall (Rosser, 1991, chap. 12). As old institutionalism lost its struggle with neoclassical economics (Veblen having coined the term “neoclassical economics”) and new institutional economics became more important, evolutionary economics fell from favor and attention. Arguments relating to evolution nevertheless continued to be made in various parts of economics, such as the theory of the firm and how firms change over time and the theory of technological change, although Schumpeter (1936) favored a more discontinuous view of evolution that contrasted with Darwin’s and foreshadowed the punctuated equilibrium theory of Stephen Jay Gould (2002).

However, evolutionary economics is experiencing a revival along two lines. One is the already discussed area of evolutionary game theory, which has now become the leading edge of game theory. The other is work that seeks to overcome the divide between the old and new institutionalist schools. This approach has been pushed by Hodgson (2006), who sees in modern evolutionary theory, as it has developed after the innovations of Gould and others, a fresh paradigm for use in economics.

Arguably the latter two are simply variations on each other, although bioeconomics is the older term, dating to its introduction by Colin W. Clark in 1976, whereas ecological economics came into usage in the 1980s (Martinez-Alier, 1987). Both can be viewed as transdisciplinary fields (indeed the ecological economists specifically label themselves as such). Both seek to integrate models of biology or ecology fully with those of economics. The former is more likely to model specifically the population dynamics of biological populations in conjunction with the impact of interactions with economic agents. Clark’s work on fisheries is the model here, and the fact that supply curves for fish tend to bend backwards opens the door to a variety of possible nonlinear and complex dynamics in fisheries, including catastrophic collapses (Rosser, 2001; Gunderson and Holling, 2002), which unfortunately we have observed all too many times in real life.
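
A bare-bones sketch of the kind of model involved is given below (a stylized Gordon-Schaefer type formulation of the sort underlying Clark’s analysis, with purely illustrative parameter values): the fish stock grows logistically and is harvested in proportion to fishing effort, and pushing effort past a critical level drives the sustained catch, and ultimately the stock itself, toward collapse.

    # Stylized Gordon-Schaefer type fishery (illustrative parameters): logistic
    # stock growth minus a harvest proportional to effort and to the stock.
    # Raising effort beyond a point reduces the long-run catch and can push the
    # stock toward collapse.
    def steady_state(effort, r=0.5, K=1.0, q=0.5, years=500):
        """Iterate N_{t+1} = N_t + r*N_t*(1 - N_t/K) - q*E*N_t to a steady state."""
        n = K
        for _ in range(years):
            n = max(n + r * n * (1 - n / K) - q * effort * n, 0.0)
        return n, q * effort * n                 # (stock, sustained annual catch)

    for effort in (0.2, 0.5, 0.8, 1.0, 1.2):
        stock, catch = steady_state(effort)
        print(f"effort={effort:>4}: stock={stock:.3f}, catch={catch:.3f}")
    # The catch first rises with effort and then falls; once q*E reaches r the
    # stock, and with it the fishery, collapses toward zero.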

Forestry has long been an area of study by bioeconomists and ecological economists as well, using a variety of methods (Kant and Berry, 2005). The study of both fisheries and forestry is seen as increasingly important given the role these sectors play in developing countries and the pressure that both are perceived to be facing at the global level. Similarly, lake systems have been studied by joint teams of ecologists and economists (Carpenter et al., 1999). For many of these systems, problems of common property resource management are the central issue (Sethi and Somanathan, 1996), which brings into relevance the questions studied by experimental and behavioral economists regarding the ability of people to cooperate or not, along with the old debate about multi-level evolution (Henrich, 2004). These problems arguably come to a head in the great question of managing the global commons in the face of man-induced global warming (Stern, 2008), which is far from easy to resolve.


