Lazy Sunday, watching the clouds

Editorial

This is one of those days when I experience radically contradictory intuitions about my research work. One voice in my head is telling me: ‘Stay focused. You have a nice research path here, with those evolutionary models of technological change’. Still, there is another voice, which is currently watching the clouds as they rush through the late summer sky, and wondering what the hell it is all about, you know, universe and stuff. By the way, can a voice watch clouds? Well, basically it can, if it has eyes, and some brain behind them. What I need seems to be that special kind of broad picture. You know, the kind you come across in some social relations. Somebody frames you into some lamentable deal, you say it was really bitchy on their part, and they say something like: ‘Yes, but we should see the broad picture’. Interestingly, said broad picture is focused on providing good excuses to that person. Still, there is that rhetorical technique of the focused broad picture, kind of precise and kind of overarching at the same time. This is what I need, to reconcile those two voices in my head.

In a picture, I’d better sketch before putting any thick paint on it. I start sketching by defining corners and frames first, and then I create a structure inside those frames. This is, at least, what I retained from my Art classes at school. So I sketch. Corner #1: Herbert A. Simon, reputed to be the patient zero of the evolutionary approach in economics, and his article of 1955[1], treating of bounded rationality in economic decisions. Corner #2: Arnold Toynbee, and his metaphor of civilisations seen as people struggling to get somewhere up from a mountain ridge, contained in his monumental work entitled ‘A Study of History’, first published between 1934 and 1939, then abridged, during World War II, by an enthusiastic follower, David Churchill Somervell, and published in that abridged form in 1946[2]. I mean more specifically the content to be found on pages 68 – 69 of this abridged version. Corner #3: a recent discovery in evolutionary biology – made and disseminated by professor Adam Hargreaves from the University of Oxford – that besides the known mechanisms of evolution, namely spontaneous mutation and natural selection, there is a third one, a kind of super-fast mutation in some genes, which takes place so bloody quickly that those genes seem to disappear from the genome as we sequence it. Corner #4: my own research, summing up, so far, to saying that our global achievement in technological change consists in ameliorating living conditions, for example in alleviating the food deficit, rather than in maximizing Total Factor Productivity.

Fine, as I look critically at those four corners, I would add some more, but a frame with more than four corners becomes a bit awkward for sketching anything inside. I wanted a picture, I have a picture. Format is format, period. I draw a first edge, from corner #1 to corner #2, from Herbert Simon to Arnold Toynbee. The edge turns out to be somewhat symbolic: professor Toynbee retired from his scientific career in 1955, exactly the same year when Herbert A. Simon published that article I have in mind. Herbert Simon says: we can be biased, in our choices, about very nearly everything. The range of options we can really choose between, their possible outcomes and payoffs, as well as exogenous conditions: we can subjectively distort all of that. Arnold Toynbee says: social change is a struggle with highly uncertain outcomes, and these outcomes are observable just sometimes, as pit stops reached in an endless race. Many a human civilisation failed in assuring continuous development. This edge, connecting Herbert Simon to Arnold Toynbee, is a question: how can we climb the cliff of history more efficiently, knowing that every hold is burdened with cognitive bias? Now, I connect corner #2 (professor Toynbee) with #3, the recent discovery of super-fast genetic mutation. Once again, a question arises on that edge. What would happen if, in our civilisation, cultural success depended on something that changes so fast we can’t even say how it is happening, nor how it is subject to natural selection? Next edge, from that discovery by professor Hargreaves to my own research. This time, the edge question comes to my mind quite naturally. What if the technological change that we can observe, I mean invention, obsolescence in established technologies, the production function, what if all that was a sort of blanket cover for some other process of change, taking place kind of underneath? What would that process be?

Finally, the fourth edge of my canvas, from my own research back to Herbert Simon and his theory of cognitive bias in economic decisions. We know that collective intelligence, understood as learning by experimentation and interaction, can reduce the overall impact of individual cognitive biases. Does the current proportion between the input of production factors (i.e. capital and labour), and the output of technological change (patentable invention, obsolescence of established technologies) reflect some kind of local equilibrium, a production function of technological change? How is that hypothetical function of technological change specific to precise social conditions, and how can it contribute to changing those conditions?

Thus, as I walk back in my own footsteps, just to check if I haven’t trodden on something interesting, I reconstitute the edges of my canvas, and I try to define some kind of central point at the intersection of the diagonals. What I am looking for is a model (theory?) of technological change, embedded in broader social change, which could help in discovering some possibly unexplained characteristics of our modern civilisation, and possibly assist future social change. Ambitious. Possibly impossible to achieve with my intellectual resources. Cool. I’m in, and now I am restating the sparse hypotheses that my internal trio – the ape, the monk, and the bulldog – has hatched over the last weeks. Hypothesis #1: innovation helps people out of hunger. Hypothesis #2: the number of resident patent applications per year, in a given country, significantly depends on the amount of production factors available. Hypothesis #3: the evolutionary selection of new technologies works as an interaction between a set of female organisms controlling production factors, and a set of male organisms generating intelligible blueprints of new technologies. Hypothesis #4: different social structures yield different selection functions, with different equilibriums between the number of patent applications and the input of production factors, capital and labour.

Good. Now, I check if all that intellectual diarrhoea makes a coherent logical structure. I start by bringing a little correction into the last hypothesis: I replace ‘equilibriums’ with ‘proportions’. In economics, an equilibrium is supposed to be something really cool, kind of serenissime; it is supposed to be a state of at least locally optimal use of resources. I cannot prove that any proportion between the number of patent applications and the input of production factors is such a state. I can observe that this proportion is somehow specific to distinct social structures, but I cannot see any way (nor any willingness, by the way) to prove that it is an optimal state for these structures. Now, provability. I can empirically check #1, #2, and #4, but not #3 (at least I cannot see how I could check it with the data I have). Hypothesis #3 seems to be a speculative one, kind of a cherry I can put on top of a cake, but I have to bake the cake first.

I have that logical construct, made of hypotheses #1, #2, and #4, which I place in the centre of my canvas. If I were painting a landscape, it would be that beautiful [lake, mountain, horse, river, sunset, or put here whatever you want] to be presented between those four corners and four edges. Now, following Milton Friedman’s logic, what I will be doing with those hypotheses is not so much proving them true (absolute truth does not exist in a probabilistic world) as finding conditions for not refuting them as false. Supposing I have proven that (I kind of have, in my earlier posts), I can now try to connect the proof to the edge questions, i.e. I can try to build a speculative reasoning. Tomorrow.

[1] Simon, H. A., 1955, A Behavioral Model of Rational Choice, The Quarterly Journal of Economics, Vol. 69, No. 1, pp. 99–118

[2] Toynbee, A. J., A Study of History, abridgement by D. C. Somervell, Oxford University Press, 1946

Equilibrium in high esteem

My editorial

I am still at this point of defining my point in that article I am supposed to hatch on the topic of evolutionary modelling in studying technological change. Yes, it takes some time and some work to define my point but, man, that’s life. I think I know things, and then I think how to say what I know about things, and it brings me to thinking once again what it is that I know. If, hopefully, I come to any interesting conclusions about what I know, I start reading literature and I discover that other people know things, too, and so I start figuring out what’s so original in what I know and how to say it. You know those scenes from Jane-Austen-style movies, where people are gossiping at a party and they try to outgossip each other, just to have that momentary feeling of being the most popular gossiper in the ballroom? Well, this is the world of scientific publications. This is what I do for a living, very largely. I am lucky, mind you. I don’t have to wear one of those white wigs with a fake tress. This is a clear sign that mankind is going some interesting way forward.

Yesterday, as I was gossiping in French (see ‘Deux lions de montagne, un bison mort et moi’), I came to some conclusions about my point. I think I can demonstrate that the pace and intensity of technological change we have been experiencing for the last six or seven decades can be explained as a function of intelligent adaptation, in the human civilisation, to a growing population in the presence of scarce food. This is a slightly different angle of approach from those evolutionary models I have been presenting on my blog over the last few weeks, but what do you want: blog is blog, and scientific gossip is scientific gossip. This deep ontological distinction means I have to adapt my message to my audience and to my medium of communication. Anyway, why this? Well, because as I turned all the data I have about technological change over and over, I found only one absolutely unequivocal gain in all that stuff: between 1992 and 2016, whilst the human population on the planet kept growing, the average food deficit per person per day was cut by half, period. This is it. Of course, other variables follow, of similar nature: longer life expectancy, better access to healthcare and sanitation etc. Still, the bottom line remains the same: technological change occurs at an intensifying pace, it costs more and more money, and it is correlated with improvements in living conditions much more than with increased Total Factor Productivity.

There is a clan of evolutionary models which, when prodded with the stick labelled ‘adaptation’, automatically reply with a question: ‘Adaptation to what?’. Wrong question, clan. Really. You, clan, you have to turn your kilts over, to the other side, and see that other tartan pattern. Adaptation is adaptation to anything. Otherwise, if we select just some stressors and say we want to adapt to those, it becomes optimization, not adaptation, not anymore. The right question is ‘How do we adapt?’. Oh, yes, at this point of stating my point I suddenly remember I have to do some review of literature. So I jump onto the first piece of writing about intelligent adaptation I can find. My victim’s name is Andrew W. Lo, and the exhibit is his article about the adaptive markets hypothesis (2005[1]). Andrew W. Lo starts from the biological assumption that individuals are organisms which, through generations of natural selection, have been formed so as to maximize the survival of their genetic material.

Moreover, Andrew Lo states that natural selection operates not only upon genetic material as such, but also upon the functions this genetic heritage performs. It means that even if a genetic line gets successfully reproduced over many generations, i.e. if it goes kind of intact and immutable through consecutive generational turns, the functions it performs can still change through natural selection. In a given set of external conditions, a Borgia (ducal bloodline) with inclinations to uncontrolled homicide can get pushed off to the margin of the dynasty by a Borgia (ducal bloodline) with inclinations to peaceful manipulation and spying. If external conditions change, the vector of pushing off can change too, and the peaceful sneaky kind may be replaced by the violent beast. At the end of the day, and this is a very important statement from Andrew W. Lo, social behaviour and cultural norms are also subject to natural selection. The question ‘how?’ is, according to Andrew Lo, answered mainly as ‘through trial and error’ (which is very much my own point, too). In other words, the patterns of satisfactory behaviour are determined by experimentation, not analytically.

I found an interesting passage to quote in this article: ‘Individuals make choices based on experience and their best guesses as to what might be optimal, and they learn by receiving positive or negative reinforcement from the outcomes. If they receive no such reinforcement, they do not learn. In this fashion, individuals develop heuristics to solve various economic challenges, and as long as those challenges remain stable, the heuristics eventually will adapt to yield approximately optimal solutions’. From that, Andrew Lo derives a general thesis, which he calls the ‘Adaptive Markets Hypothesis’, or AMH, and which opposes the Efficient Market Hypothesis (EMH). The way it works in practice is derived by close analogy to biology. Andrew Lo makes a parallel between the aggregate opportunities for making profit in a given market and the amount of resources available in an ecosystem: the more resources there are, the less fierce is the competition to get a grab at them. If the balance between population and resources tilts unfavourably, competition becomes more ruthless, but ultimately the population gets checked at its base, and declines. A declining population makes competition milder, and the cycle either loops in a band of predictable variance, or it goes towards a corner solution, i.e. a disaster.

The economic analogy to that basic biological framework is that – according to AMH and contrary to EMH – ‘convergence to economic equilibrium is neither guaranteed nor likely to occur at any point in time’. Andrew Lo states that economic equilibrium is a special case rather than the general one, and that any economic system can either converge towards equilibrium or loop in a cycle of adaptation, depending on the fundamental balance between resources and population. Interestingly, Andrew Lo manages to supply convincing empirical evidence to support that claim, once he assumes that profit opportunities in a market are the economic equivalent of food supplies in an ecosystem.
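Just to make that biological framework tangible, here is a toy sketch of the resource-population dynamic described above, in Python. It is my illustration, not anything taken from Lo’s paper, and the growth parameter r is a pure assumption; depending on its value, the population either settles near a steady state, loops in a band of predictable variance, or dives into chaos.

# A toy model of a population competing for fixed resources (a discrete logistic map).
# Purely illustrative; nothing here comes from Lo (2005).

def simulate(r, n0=0.1, steps=100):
    """Population n is expressed as a fraction of what the resources can carry."""
    n = n0
    path = []
    for _ in range(steps):
        n = r * n * (1.0 - n)   # growth slows down as resources get scarce
        path.append(n)
    return path

for r in (2.8, 3.5, 4.0):       # mild, cyclical, and ruthless competition
    tail = simulate(r)[-4:]
    print(f"r = {r}: last states {[round(x, 3) for x in tail]}")

With r = 2.8 the population settles near one value (an equilibrium), with r = 3.5 it loops in a stable band, and with r = 4.0 it jumps around with no equilibrium in sight.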

I find that line of thinking in Andrew Lo really interesting, and my own research, which you could have been following over the last weeks on this blog, aims at pinning down the ‘how?’ of natural selection. The concept is used frequently: ‘The fittest survive; that’s natural selection!’. We know that, don’t we? Still, as I have that inquisitive ape inside of me, and as that ape is backed by an austere monk equipped with Ockham’s razor, questions abound. Natural selection? Splendid! Who selects and how? What do you mean by what do I mean by ‘who selects?’? (two question marks in one sentence is something I have never achieved before, by the way). Well, if we say ‘selection’, it is a choice. You throw a stone in the air and you let it fall on the ground, and you watch where it falls exactly. Has there been any selection? No, this is physics. Selection is a human concept and means choice. Thus, when we state something like ‘natural selection’, I temporarily leave aside the ‘natural’ part (can there be unnatural selection?) and I focus on the act of selecting, or picking up from a lot. Natural selection means that there is a lot of items, produced naturally, through biology (both the lot in its entirety and each item separately), and then an entity comes and chooses one item from the lot, and the choice has consequences regarding biological reproduction.

In other words, as long as we see that ‘natural selection’ as performed by some higher force (Mother Nature?), we are doing metaphysics. We are pumping smoke up our ass. Selection means someone choosing. This is why, in my personal research, I am looking for some really basic forms of selection with biological consequences. Sexual selection seems to fit the bill. Thus, when Andrew Lo states that natural selection creates some kind of economic cycle, and possibly makes the concept of economic equilibrium irrelevant, I intuitively try to identify those two types of organisms in the population – male and female – as well as a selection function between them. That could be the value I can add, with my model, to the research presented by Andrew Lo. Still, I would have to argue with him about the notion of economic equilibrium. He seems to discard it almost entirely, whilst I hold it in high esteem. I think that if we want to go biological and evolutionist in economics, the concept of equilibrium is really that elvish biscuit we should take with us on the journey. Equilibrium is deeply biological, and even physical. Sometimes, nature is in balance. It is a more or less stationary state. An atom is an equilibrium between forces. An ecosystem is an equilibrium between species and resources. Yes, equilibrium is something we more frequently crave than have, and still it is a precious benchmark for modelling what we want and what kind of s*** we can possibly encounter on the way.

[1] Lo, A. W., 2005, Reconciling Efficient Markets with Behavioral Finance: The Adaptive Markets Hypothesis, The Journal of Investment Consulting, Vol. 7, No. 2, pp. 1–24

I cannot prove we’re smart

My editorial

I am preparing an article which presents, in a more elegant and disciplined form, that evolutionary model of technological change. I am going once again through all the observation, guessing and econometric testing. My current purpose is to find the simple, intelligible premises that all my thinking started from. ‘Simple and intelligible’ means either sort of hard, irrefutable facts, or foggy, unresolved questions in the available literature. This is the point, in scientific research, when I am coining statements like: ‘I took on that issue in my research, because facts A, B, C suggest something interesting, and the available literature remains silent or undecided about it’. So now, I am trying to reconstruct my own thinking and explain, to whomever would read my article, why the hell I adopted that evolutionary perspective. This is the point when doing science as pure research is transformed into scientific writing and communication.

Thus, facts should come first. The Schumpeterian process of technological progress can be decomposed into three parts: the exogenous scientific input of invention, the resulting replacement of established technologies, and the ultimate growth in productivity. Empirical data provides a puzzling image of those three sub-processes in the modern economy. Data published by the World Bank regarding science, research and development make it possible to notice, for example, a consistently growing number of patent applications per one million people in the global economy (see http://data.worldbank.org/indicator/IP.PAT.RESD ). On the other hand, Penn Tables 9.0 (Feenstra et al. 2015[1]) make it possible to compute a steadily growing amount of aggregate amortization per capita, as well as a growing share of aggregate amortization in the global GDP (see Table 1 in the Appendix). Still, the same Penn Tables 9.0 indicate unequivocally that the mean value of Total Factor Productivity across the global economy decreased consistently from 1979 until 2014.

Of course, there are alternative ways of measuring efficiency in economic activity. It is possible, for example, to consider energy efficiency as informative about technological progress, and the World Bank publishes the relevant statistics, such as energy use per capita, in kilograms of oil equivalent (see http://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ). Here too, the last decades do not seem to have brought any significant slowdown in the growth of energy consumption. The overall energy efficiency of the global economy, measured with this metric, is decreasing, and there is no technological progress to observe at this level. A still different approach is possible, namely that of measuring technological progress at the very basic level of economic activity, in farming and food supply. The statistics reported by the World Bank as, respectively, the cereal yield per hectare (see http://data.worldbank.org/indicator/AG.YLD.CREL.KG ), and the depth of food deficit per capita (see http://data.worldbank.org/indicator/SN.ITK.DFCT ), show a progressive improvement, at the scale of the global economy, in those most fundamental metrics of technological performance.
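For anyone who would like to pull those series programmatically rather than click through the World Bank pages, a minimal sketch in Python could look like the following; it assumes the public World Bank API (v2) keeps the JSON shape I remember, i.e. a two-element list of paging metadata and records, so treat it as a starting point rather than production code.

import requests

# World Bank indicator codes quoted in the text above.
INDICATORS = {
    "IP.PAT.RESD": "resident patent applications",
    "EG.USE.PCAP.KG.OE": "energy use per capita, kg of oil equivalent",
    "AG.YLD.CREL.KG": "cereal yield per hectare",
    "SN.ITK.DFCT": "depth of the food deficit",
}

def fetch(indicator, date="1990:2016"):
    """Return (country, year, value) triples for one indicator, skipping missing values."""
    url = "http://api.worldbank.org/v2/country/all/indicator/" + indicator
    resp = requests.get(url, params={"format": "json", "per_page": 20000, "date": date})
    meta, records = resp.json()   # v2 returns [paging-metadata, list-of-records]
    return [(r["countryiso3code"], r["date"], r["value"])
            for r in records if r["value"] is not None]

for code, label in INDICATORS.items():
    print(label, "->", len(fetch(code)), "country-year observations")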

Thus, the very clearly growing effort in research and development, paired with a seemingly accelerating pace of moral ageing (obsolescence) in established technologies, occurs together with a decreasing Total Factor Productivity, decreasing energy efficiency, and just very slowly increasing efficiency in farming and food supply chains. Now, in science, there are basically three ways of apprehending facts: the why, the what, and the how. Yes, I know, there is a fourth way, the ‘nonsense!’ one, currently in fashion as ‘this is fake news! we ignore it’. Still, this fourth way is not really science. This is idiocy dressed fancily for an electoral meeting. So, we have three: the why, the what, and the how.

The why, or ‘Why are things happening the way they are?’, is probably the oldest way of starting science. ‘Why?’ is the intuitive way we have of apprehending things we don’t quite understand, like ‘Why is this piece of iron bending after I have left it close to a furnace?’. Probably, that intuitive tendency to ask for reasons reflects the way our brain works. Something happens, and some neurons fire in response. Now, they have to get social and to inform other neurons about that something having happened. Only in the world of neurons, i.e. in our nervous system, the category ‘other neurons to inform’ is quite broad. There are millions of them, in there. Besides, they need synapses to communicate, and synapses are an investment. Sort of a fixed asset. So, neurons have to invest in creating synapses, and they have a wide choice as for where exactly they should branch. As a result, neurons like fixed patterns of communication. Once they make a synaptic connection, they just use it. The ‘why?’ reflects this predilection, as in response we expect ‘Because things happen this way’, i.e. in response to this stimulus we fire that synaptic network, period.

The problem with the ‘why?’ is that it is essentially deterministic. We ask ‘why?’ and we expect ‘Because…’ in return. The ‘Because…’ is supposed to be reassuringly repetitive. Still, it usually is not. We build a ‘Because…’ in response to a ‘why?’, and suddenly something new pops up. Something which makes the established ‘Because…’ look a little out of place. Something that requires a new ‘Because…’ in response to essentially the same ‘why?’. We end up with many becauses attached to one why. Picking the right because for the situation at hand becomes a real issue. Which because is the right because can be logically derived from observation, or illogically derived from our emotional stress due to cognitive dissonance. Did you know that the experience of cognitive dissonance can trigger, in a human being, a stronger stress reaction than the actual danger of death? This is probably why we do science. Anyway, choosing the right because on the illogical grounds of personal emotions leads to metaphysics, whilst an attempt to pick the right because for the occasion by logical inference from observation leads to the next question: the ‘what?’. What exactly is happening? If we have many becauses to choose between, choosing the right one means adapting our reaction to what is actually taking place.

The ‘what?’ is slightly more modern than the ‘why?’. Probably, mathematics was historically the first attempt to harness the subtleties of the ‘what?’, so we are talking about settled populations, with a division of labour allowing some people to think about things kind of professionally. Anyway, the ‘what?’ amounts to describing reality so that the causal chain of ‘because…’ gets decomposed into a sequence. Instead of saying ‘C happens because of B, and B happens because of A’, we state a sequence: A comes first, then comes B, and finally comes C. If we really mean business, we observe probabilities of occurrence, and we can make those sequences more complex and more flexible. A happens with a probability of 20%, and then B can happen with a probability of 30%, or B’ can happen at 50% odds, and finally we have 20% of chances that B’’ happens instead. If it is B’’ that happens, it can branch into C, C’ or C’’ with the respective probabilities of X, Y, Z etc.
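Just to show what such a probabilistic ‘what?’ looks like when you let it run, here is a minimal sketch in Python. The branching probabilities are the made-up illustrative ones from the paragraph above, and since X, Y, Z were left unspecified, I put placeholder values in their stead.

import random

# Branching probabilities from the paragraph above; the C-branch weights
# stand in for the unspecified X, Y, Z and are pure placeholders.
BRANCHES = {
    "A":   [("B", 0.30), ("B'", 0.50), ("B''", 0.20)],
    "B''": [("C", 0.40), ("C'", 0.35), ("C''", 0.25)],
}

def unfold(event="A"):
    """Follow one possible sequence of events until it stops branching."""
    path = [event]
    while event in BRANCHES:
        options, weights = zip(*BRANCHES[event])
        event = random.choices(options, weights=weights)[0]
        path.append(event)
    return path

print(" -> ".join(unfold()))   # e.g. A -> B'' -> C'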

Statistics are basically a baby of the ‘what?’. As the ‘why?’ is stressful and embarrassingly deterministic, we dodge and duck and dive into the reassuringly cool waters of the ‘what?’. Still, I am not the only one to have a curious ape inside of me. Everyone has one, and the curiosity of the curious ape is neurologically wired around the ‘why?’ pattern. So, just to keep the ape calm and logical, whilst satisfying its ‘why’-based curiosity, we use the ‘how?’ question. Instead of asking ‘why are things happening the way they are?’, so instead of looking for fixed patterns, we ask ‘how are things happening?’. We are still on the hunt for links between phenomena, but instead of trying to shoot the solid, heavy becauses, we satisfy our ambition with the faster and more flexible hows. The how is the way things happen in a given context. We have all the liberty to compare the hows from different contexts and to look for their mutual similarities and differences. With enough empirical material we can even make a set of connected hows into a family, under a common ‘why?’. Still, even with such generalisations, the how remains a different answer from ‘because…’. The how is always context-specific and always allows other hows to take place in different contexts. The ‘because…’ is much more prone to elbow its way to the front of the crowd and to push the others out of the way.

Returning to my observations about technological change, I can choose, now, between the ‘why?’, the ‘what?’, and the ‘how?’. I can ask ‘Why is this apparent contradiction taking place between the way technological change takes place and its outcomes in terms of productivity?’. Answering this question directly with a ‘Because…’ means building a full-fledged theory. I do not feel ready for that, yet. All these ideas in my head need more ripening, I can feel it. I have to settle for a ‘what?’, hopefully combined into context-specific hows. Hows run fast, and they change their shape according to the situation. If you are not quick enough to run after a how, you have to satisfy yourself with the slow, respectable because. Being quick, in science, means having access to empirical data and being able to test your hypotheses quickly. I mean, you can be quick without access to empirical data, but then you just run very quickly after your own shadow. Interesting, but moderately productive.

So I am running after my hows. I have that empirical landscape, where continuously intensifying experimentation with new technologies leads, apparently, to decreasing productivity. There is a how camouflaging itself in that landscape. This how assumes that we, as a civilisation, randomly experiment with new technologies, kind of whichever idea comes first, and then we watch the outcomes in terms of productivity. The outcomes are not really good – Total Factor Productivity keeps falling in the global economy – and we still keep experimenting at an accelerating pace. Are we stupid? That would be a tempting because, only I can invert my how. We are experimenting with new technologies at an increasing pace as we face disappointing outcomes in terms of productivity. If technology A brings, in the long run, decreasing productivity, we quickly experiment with A’, A’’, A’’’ etc. Something that we do brings unsatisfactory results. We have two options then. Firstly, we can stop doing what we do, or, in other words, in the presence of decreasing productivity we could stop experimenting with new technologies. Secondly, we can intensify experimentation in order to find efficient ways to do what we do. Facing trouble, we can be passive or try to be clever. Which option is cleverer, at the end of the day? I cast my personal vote for trying to be clever.

Thus, it would turn out that the global innovative effort is an intelligent, collective response to the unsatisfactory outcomes of previous innovative effort. Someone could say that it is irrational to go deeper and deeper into something that does not bring results. That is a legitimate objection. I can formulate two answers. First of all, any results come with a delay. If something is not bringing the results we want, we can assume it is not bringing them yet. Science, which allows invention, is in itself quite a recent invention. The scientific paradigm we know today took its definitive shape in the 19th century. Earlier, we had basically been using philosophy in order to invent science. It has been some 150 years that we have been able to use real science to invent new technologies. Maybe it has not been enough time to learn how to use science properly. Secondly, there is still the question of what we want. The Schumpeterian paradigm assumes we want increased productivity, but do we really? I can assume, very biologically, what I already signalled in my previous posts: any living species tends to maximize its hold on the environment by absorbing as much energy as possible. Maybe we are not that far from the amoeba, after all, and, as a species, we collectively tend towards maximizing our absorption of energy from our environment. From this point of view, technological change that leads to increasing our energy use per capita and to engaging an ever-growing amount of capital and labour into the process could be a perfectly rational behaviour.

All that requires assuming collective intelligence in mankind. Proving the existence of intelligence is both hard and easy. On the one hand, culture is proof of intelligence: this is one of the foundational principles in anthropology. From that point of view, we can perfectly assume that the whole human species has collective intelligence. Still, an economist has a problem with this view. In economics, we assume individual choice. Can individual choice be congruent with collective intelligence, i.e. can individual, conscious behaviour change in step with collective decisions? Well, we did the Renaissance, didn’t we? We did electricity, we did vaccines, we did religions, didn’t we? I use the expression ‘we did’ and not ‘we made’, because it wasn’t that one day in the 15th century we collectively decided that from then on, we sculpt people with no clothes on and we build cathedrals on pseudo-ancient columns. Many people made individual choices, and those individual choices turned out to be mutually congruent, and produced a coherent social change, and so we have Tesla and Barbie dolls today.

Now, this is the easy part. The difficult one consists in passing from those general intuitions, which, in the scientific world, are hypotheses, to empirical verification. Honestly, I cannot directly, empirically prove we are collectively intelligent. I reviewed thoroughly the empirical data I have access to, and I found nothing that could serve as direct proof of collective intelligence in mankind. Maybe this is because I don’t know how exactly I could formulate the null hypothesis here. Would it be that we are collectively dumb? Milton Friedman would say that in such a case, I have two options: forget it, or do just as if. In other words, I can drop entirely the hypothesis of collective intelligence, with all its ramifications, or construct a model implying its veracity, thus treating this hypothesis as an actual assumption, and see how this model fits in confrontation with facts. In economics, we have that assumption of efficient markets. Right, I agree, they are not necessarily perfectly efficient, those markets, but they arrange prices and quantities in a predictable way. We have the assumption of rational institutions. In general, we assume that a collection of individual acts can produce coherent social action. Thus, we always somehow imply the existence of collective intelligence in our doings. Dropping this hypothesis entirely would be excessive. So I stay with doing just as if.

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

Everything even remotely economic

My editorial

Back to my work on innovation, I am exploring a new, interesting point of view. What if we perceived technological change and innovation as collective experimentation under uncertainty, an experimentation that we, as a species, are becoming more and more proficient at? An interesting path to follow. It has many branches into various fields of research, like game theory, for example. The curious ape in me likes branches. They allow it to dangle over problems and have an aerial view. The view involves my internal happy bulldog rummaging in the maths of the question at hand, and my internal monk, the one with Ockham’s razor, fending the bulldog away from the most vulnerable assumptions.

One of the branches that my ape can see almost immediately is that of incentives. Why do people figure things out, at all? First, because they can, and then because they’d better, under the penalty of landing waist-deep in shit. I think that both incentives, namely ‘I can’ and ‘I need to’, amount to very much the same in the long run. We can do things that we have learnt how to do, and we learn things that we’d better learn if we want our DNA to stay in the game; and if such is our desire, we’d better not starve to death. One of the most essential things that we have historically developed the capacity of learning about is how to get our food. There is that quite cruel statistic published by the World Bank, the depth of the food deficit. It indicates the amount of calories needed to lift the undernourished out of their status, everything else being constant. As the definition of that variable states: ‘The average intensity of food deprivation of the undernourished, estimated as the difference between the average dietary energy requirement and the average dietary energy consumption of the undernourished population (food-deprived), is multiplied by the number of undernourished to provide an estimate of the total food deficit in the country, which is then normalized by the total population’.

I have already made reference to this statistic in one of my recent updates (see http://researchsocialsci.blogspot.com/2017/08/life-idea-research-some-facts-bloody.html ). This time, I am coming back with the whole apparatus. I attach this variable, as reported by the World Bank, to my compound database made of Penn Tables 9.0 (Feenstra et al. 2015[1]), as well as of other data from the World Bank. My curious ape swings onto a further branch and asks: ‘What do innovation and technological progress look like in countries where people still starve? How different are they from what happens in countries wealthy enough not to worry so much about food?’. Right you are, ape. This is a good question to ask on this Thursday morning. Let’s check.

I made a pivot out of my compound database, summarizing the distribution of key variables pertaining to innovation across intervals defined by the depth of the food deficit. You can grab the Excel file at this link: https://drive.google.com/file/d/0B1QaBZlwGxxAQ1ZBR0Z1MU9oRTA/view?usp=sharing . A few words of explanation are due as for the contents. The intervals in the depth of food deficit have been defined automatically by my statistical software, namely Wizard for MacOS, version 1.9.9 (222), created by Evan Miller. Those thresholds of food deficit look somewhat like sextiles (spelled together!) of the distribution: there is approximately the same number of observations in each interval, namely about 400. The category labelled ‘Missing’ stands for all those country-year observations where there is no recorded food deficit. In other words, the ‘Missing’ category actually represents those well present in the sample, just eating to their will.
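For those who prefer code to Excel pivots, here is a minimal sketch of the same operation in Python with pandas. The dataframe df and its column names (‘food_deficit’, ‘pat_app_pop’, ‘depr_gdp’, ‘tfp’) are my hypothetical stand-ins for the compound database, and pd.qcut only approximates the automatic intervals that Wizard picked.

import pandas as pd

# Six roughly equal-count classes of food deficit, sextile-style,
# plus a 'Missing' class for country-years with no recorded deficit.
df["deficit_class"] = pd.qcut(df["food_deficit"], q=6)
df["deficit_class"] = df["deficit_class"].cat.add_categories(["Missing"])
df.loc[df["food_deficit"].isna(), "deficit_class"] = "Missing"

# Distribution of the three innovation variables across those classes.
pivot = (df.groupby("deficit_class")[["pat_app_pop", "depr_gdp", "tfp"]]
           .agg(["mean", "var", "count"]))
print(pivot)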

I took three variables which I consider really pertinent regarding innovation: Total Factor Productivity, the share of the GDP going to depreciation in fixed assets, and the number of resident patent applications per one million people. I start by having a closer look at the latter. In general, people have many more patentable ideas when they starve just slightly, no more than 28 kilocalories per day per person. Those people score over 312 resident patent applications per million inhabitants. Interestingly, those who don’t starve at all score much lower: 168.9 on average. The overall distribution of that variable looks really interesting. Baby, it swings. It swings across the intervals of food deficit, and it swings even more inside those intervals. As the food deficit gets less and less severe, the average number of patent applications per one million people grows, and the distances between those averages tend to grow, too, as well as the variance. In the worst-off cases, namely people living in the presence of a food deficit above 251 kilocalories a day, on average, the generation of patentable ideas is really low and really predictable. As the situation ameliorates, more ideas get generated and more variability gets into the equation. This kind of input factor to the overall technological change looks really unstable structurally and, at the same time, highly relevant regarding the possible impact of innovation on food deficit.

I want this blog to have educational value, and so I explain how I am evaluating relevance in this particular case. If you dig into the theory of statistics, and you really mean business, you are likely to dig out something called ‘the law of large numbers’. In short, that law states that as samples grow large, their averages converge on the true means of the underlying populations; hence, with some 400 observations per class, the arithmetical difference between averages is highly informative about real differences between the populations those averages have been computed in. More arithmetical difference between averages spells more real difference between populations, and vice versa. As I am having a look at the distribution in the average number of resident patent applications per capita, the distances between different classes of food deficit are really large. The super-high average in the ‘least starving’ category, the one between 28 kilocalories a day and no deficit at all, together with the really wild variance, suggests to me that this category could be sliced even finer.
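A quick numeric illustration of that point, with invented numbers: I take the two class averages quoted above as the ‘true’ population means, give each population an assumed spread, and watch how the observed gap between sample averages behaves as samples grow.

import random

random.seed(7)

def sample_mean(true_mean, spread, n):
    """Average of n draws from a population with the given mean and spread."""
    return sum(random.gauss(true_mean, spread) for _ in range(n)) / n

# True means from the text (312 vs 168.9); the spreads are pure assumptions.
for n in (10, 400, 10000):
    a = sample_mean(312.0, 200.0, n)   # 'least starving' class, wild variance
    b = sample_mean(168.9, 120.0, n)   # 'not starving at all' class
    print(f"n = {n:6d}: observed gap = {a - b:7.1f}   (true gap = 143.1)")

With n = 10 the observed gap jumps around wildly; with n = 400, roughly the size of each class here, it already sits close to the true gap.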

Across all three factors of innovation, the same interesting pattern sticks out: average values are the highest in the ‘least starving’ category, not in the ‘not starving at all’ one. Unless I have some bloody malicious imp in my dataset, it gives strong evidence to my general assertion that a little discomfort beats none at all in boosting our propensity to figure things out. There is an interesting thing to notice about the intensity of depreciation. I use the ratio of aggregate depreciation to GDP as a measure of speed in technological change. It shows how quickly the established technologies age and what economic effort it takes to provide for their ageing. Interestingly, this variable is maybe the least differentiated of the three, between the classes of food deficit as well as inside those classes. It looks as if the depth of food deficit hardly mattered for the pace of technological change.

Another interesting remark comes as I look at the distribution of Total Factor Productivity. You remember that, on the whole, we have TFP consistently decreasing in the global economy since 1979. You remember, don’t you? If not, just have a look at this Excel file, here: https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing . Anyway, whilst productivity falls over time, it certainly climbs as more food is around. There is a clear progression of Total Factor Productivity across the different classes of food deficit. Once again, those starving just a little score better than those who do not starve at all.

Now, my internal ape has spotted another branch to swing its weight on. How does innovation contribute to alleviating that most abject form of poverty, measured by the amount of food you don’t get? Let’s model, baby. I am stating my most general hypothesis, namely that innovation helps people out of hunger. Mathematically, it means that innovation acts as the opposite of food deficit, or:

Food deficit = a * Innovation,   a < 0

I have my three measures of innovation: patent applications per one million people (PatApp), the share of aggregate depreciation in the GDP (DeprGDP), and Total Factor Productivity (TFP). I can fit them under that general category ‘Innovation’ in my equation. The next step consists in reminding ourselves that anything that happens, happens in a context, and leaves some amount of doubt as for what exactly happened. The context is made of scale and structure. Scale is essentially made of population (Pop), as well as its production function, or: aggregate output (GDP), the aggregate amount of fixed capital available (CK), and the aggregate input of labour (hours worked, or L). Structure is given by: the density of population (DensPop), the share of government expenditures in the capital stock (Gov_in_CK), the supply of money as % of GDP (Money_in_GDP, i.e. the inverse of the velocity of money), and by energy intensity, measured in kilograms of oil equivalent consumed annually per capita (Energy use). The doubt about things that happen is expressed as a residual component in the equation. The whole is driven down to natural logarithms, just in order to make those numbers more docile.
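Written out in full, with all those context variables on board, the equation I am going to regress looks like this (the a, b, c coefficients being simply labels for what the regression will estimate):

ln(Food deficit) = a1*ln(PatApp) + a2*ln(DeprGDP) + a3*ln(TFP) + b1*ln(GDP) + b2*ln(Pop) + b3*ln(L) + b4*ln(CK) + c1*ln(DensPop) + c2*ln(Gov_in_CK) + c3*ln(Money_in_GDP) + c4*ln(Energy use) + constant + residual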

In the quite substantial database I start with, only n = 296 observations match all the criteria. On the one hand, this is not much; on the other hand, it could mean they are really well-chosen observations. The coefficient of determination is R2 = 0.908, and this is a really good score. My model, as I am testing it here, in front of your eyes, explains almost 91% of the observable variance in food deficit. Now, one remark before we go further. Intuitively, we tend to interpret positive regression coefficients as kind of morally good, and the negative ones as the bad guys. Here, our explained variable is expressed in positive numbers, and the more positive they are, the more fucked are people living in the given time and place. Thus, we have to flip our thinking: in this model, positive coefficients are the bad guys, sort of a team of Famine riders, and the good guys just don’t leave home without their minuses on.

Anyway, the regressed model looks like this:

variable              coefficient   std. error   t-statistic   p-value
ln(GDP)                  -5.892        0.485        -12.146      0.000
ln(Pop)                  -2.135        0.186        -11.452      0.000
ln(L)                     4.265        0.245         17.434      0.000
ln(CK)                    3.504        0.332         10.543      0.000
ln(TFP)                   1.766        0.335          5.277      0.000
ln(DeprGDP)              -1.775        0.206         -8.618      0.000
ln(Gov_in_CK)             0.367        0.110          3.324      0.001
ln(PatApp)               -0.147        0.020         -7.406      0.000
ln(Money_in_GDP)          0.253        0.060          4.212      0.000
ln(Energy use)            0.079        0.100          0.796      0.427
ln(DensPop)              -0.045        0.031         -1.441      0.151
constant                 -6.884        1.364         -5.048      0.000
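If anyone wants to replay that regression at home, here is a minimal sketch with Python and the statsmodels library. The dataframe df and its column names are my hypothetical stand-ins for the compound Penn Tables / World Bank panel; assembling that panel is the real work, and it has to come first.

import numpy as np
import statsmodels.formula.api as smf

cols = ["FoodDeficit", "GDP", "Pop", "L", "CK", "TFP", "DeprGDP",
        "Gov_in_CK", "PatApp", "Money_in_GDP", "EnergyUse", "DensPop"]
logged = np.log(df[cols].dropna())   # only complete observations, everything in natural logarithms

model = smf.ols(
    "FoodDeficit ~ GDP + Pop + L + CK + TFP + DeprGDP"
    " + Gov_in_CK + PatApp + Money_in_GDP + EnergyUse + DensPop",
    data=logged,
).fit()
print(model.summary())   # coefficients, standard errors, t-statistics, p-values, R-squared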

I start the interpretation of my results with the core factors in the game, namely with innovation. What really helps is the pace of technological change. The heavier the burden of depreciation on the GDP, the lower the food deficit. Ideas help, too, although not as much: they help less than one-tenth as much as depreciation does. Total Factor Productivity is a bad guy in the model: it is positively correlated with food deficit. Now, the context of scale, or: does size matter? Yes, it does, and, interestingly, it kind of matters in opposite directions. Being a big nation with a big GDP certainly helps in alleviating the deficit of food but, strangely, having a lot of production factors – capital and labour – acts in the opposite direction. WTH?

Does structure matter? Well, kind of; nothing to inform the government about, really. Density of population and energy use are hardly relevant, given their high p-values. To me, it means that I can have many different cases of food deficit inside a given class of energy use etc. Those two variables can still be useful if I want to map the working of other variables: I can use density of population and energy use as independent variables to slice my sample more finely. Velocity of money and the share of government spending in the capital stock certainly matter. The higher the velocity of money, the lower the deficit of food. The more government weighs in relation to the available capital stock, the more malnutrition.

Those results are complex, and a bit puzzling. Partly, they confirm my earlier intuitions, namely that quick technological change and high efficiency in the monetary system generally help in everything even remotely economic. Still, other results, such as that internal contradiction between scale factors, need elucidation. I need some time to wrap my mind around it.

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

Don’t die or don’t invent anything interesting

My editorial on YouTube


It’s done. I’m in. I mean, I am in discussing the Cobb-Douglas production function, which seems to be some sort of initiation path for any economist who wants to be considered a real scientist. Yesterday, in my update in French, I started tackling the issue commonly designated in economics as ‘the enigma of decreasing productivity’. Long story in short words: capital and labour work together and generate the final output of the economy, they contribute to said final output with their respective rates of productivity, and whatever part of the output those factor contributions cannot account for is the so-called Total Factor Productivity, or TFP for close acquaintances. With all those wonders of technology we have, like solar panels, Hyperloop, as well as Budweiser beer supplying rigorously the same taste across millions of its bottles and cans, we should see that TFP rocketing up at an exponential rate. The tiny little problem is that it is actually the opposite. The database I am quoting and using in my own research so frequently, namely Penn Tables (Feenstra et al. 2015[1]), is very much an attempt at measuring productivity in a complex way. I made a pivot out of that database, focused exclusively on TFP. You can find it here https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing and you can see for yourself: since 1979, the total productivity of production factors has been consistently falling.

There are a few interesting things about that tendency of the TFP to fall: it seems to be correlated with a decreasing velocity of money and with an increasing speed of depreciation in fixed assets, but it also seems to be structurally stable. What? How can I say it is structurally stable? Right, it deserves some explanation. Good, so I explain my point. In that Excel file I have just given the link to, I provide the mean value of TFP for each year, across the whole sample of countries, as well as the variance of TFP among these countries. Now, when you take the square root of the variance (painful, but be brave) and divide it by the mean value, you obtain what statisticians call the coefficient of variation, or variability of the distribution. In a set of data, variability is a proportion or, if you want, a morphology, like arm length to leg length in the body of Michelangelo’s David. In statistics, it is very much the same as in sculpture: as long as the basic proportions remain the same, we have more or less the same body on the pedestal, give or take some extra thingies (clothes on-clothes off, leaf on or off, some head of a mythical monster cut off and held in one hand etc.). If the coefficient of variability in a statistical distribution remains more or less the same over time, we can venture to hypothesise that the whole thing is structurally stable.
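In code, that proportion takes three lines to compute. A minimal sketch with Python and pandas, assuming a long-format dataframe tfp with a ‘year’ column and a ‘ctfp’ column holding the TFP values (the column names are my assumption, loosely modelled on the Penn Tables layout):

import numpy as np
import pandas as pd

# Cross-sectional coefficient of variation of TFP, one value per year:
# square root of the variance across countries, divided by the mean.
by_year = tfp.groupby("year")["ctfp"]
variability = np.sqrt(by_year.var()) / by_year.mean()
print(variability.round(2))   # structurally stable if this stays in a narrow band

If that printed series stays in a narrow band over the years, the spatial distribution of TFP can be deemed structurally stable.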

If you care to analyse that pivot of TFP that I placed on my Google disc (link provided earlier in this update), you will see that the variability of the cross-sectional distribution of TFP remains more or less constant over time, and very docile by the way, between 0.3 and 0.6. Nothing that the government should be informed about, really. So we have a diminishing trend in a structurally stable spatial distribution. Structure determines function: it is plausible to hypothesise that it is the geography of production, understood as a structure, which produces this diminishing outcome. This is an extremely interesting path to follow, into which, by the way, I have already made a few shy steps (see http://researchsocialsci.blogspot.com/2017/08/life-idea-research-some-facts-bloody.html ).

Whatever the interest in studying empirical data about TFP as such, I decided to track back the way we approach the whole issue in economic sciences. I decided to track it back in the good old biblical way, back to its ancestry. The whole concept of Total Factor Productivity seems to originate from that of the production function, which, in turn, we owe to Prof Charles W. Cobb and Prof Paul H. Douglas, in their joint work from 1928[2]. As their paper presently sits in the JSTOR library, I cannot link to it from my disk. Still, I can make available to you the documents which seem to have been the prime sources of information for prof. Cobb and prof. Douglas, and this can be even more fun, as it shows the context of their research in all its depth and texture. The piece of empirical data which seems to have really inspired their whole work is a report issued in 1922 by the US Department of Commerce, Bureau of the Census, and entitled ‘Wealth, Public Debt, and Taxation: 1922. Estimated National Wealth’. You can find it on my Google Disc, here: https://drive.google.com/file/d/0B1QaBZlwGxxAdWFhUE83eEdSbHM/view?usp=sharing . Besides, the two scientists seem to have worked a lot (this is my interpretation of their paper) with annual reports issued by the Federal Trade Commission. I found the report for the fiscal year ended on June the 30th, 1928, so basically published when that paper by prof Cobb and prof Douglas had already been released. You can find it there: https://drive.google.com/file/d/0B1QaBZlwGxxAbXRlS1JBZk51YUE/view?usp=sharing .

Provisionally, that Census report from 1922 seems to be The Mother of All Production Functions, as it made prof Cobb and prof Douglas work for six years (well, five; they must have turned the manuscript in at least half a calendar year before publication) on their concept of the production function. So I open that report and try to understand what the poet meant. The foreword starts with the following statement: ‘When the statistician attempts to measure the wealth of a nation, he encounters two distinct difficulties: First, it is hard to define the term “wealth”; second, it is by no means easy to secure the needed data’. Right, so the headache, back in the day, consisted in defining the substance of wealth. Interesting. Let’s stroll further.

The same foreword formulates an interesting chain of ideas, fundamental to our present understanding of economics. ‘Wealth’ points at two distinct ideas: firstly, private individuals have some private wealth, and, secondly, society as a whole has some social wealth. Private wealth is, using the phrasing of the report, practically coextensive and nearly equal in value with private property. The value of private property is the market value of the corresponding legal deeds (money, bonds, stocks etc.) minus the debt burdening the holder of those titles. On the other hand, we have social wealth, and here it becomes really interesting. The report states: ‘Social wealth includes all objects having utility, that is, all things which people believe will minister to their wants either immediately or in the not too distant future. In this category are included not only those goods which are scarce or which cost money, but also those which are free, as, for example, water, air, the sun, beautiful scenery, and all those gifts of nature which gratify our desires. This is the kind of wealth to which we generally refer when we say that a nation is wealthy or opulent. It is the criterion that should be used if we wish to ascertain whether a nation is becoming richer or poorer. No other concept of wealth is more definite or more real, yet, from the standpoint of the statistician, this definition of wealth has one very serious drawback – no one has yet devised a satisfactory unit which can be applied practically in measuring the quantity of social wealth’.

Gotcha, Marsupilami! We are cornered with the concept of social wealth, or utility in the objects we have and make. My intuition is that this was the general point of departure for prof Cobb and prof Douglas. Why? Well, let’s quickly read the introductory part of their 1928 paper. The two authors state their research interest in the form of a series of questions. Firstly, can it be estimated, within limits, whether the observable increase in production was purely fortuitous, whether it was primarily caused by technique, and the degree, if any, to which it responded to changes in the quantity of labour and capital? Secondly, may it be possible to determine, again within limits, the relative influence upon production of labour as compared with capital? May it be possible to deduce the relative amount added to the total physical product by each unit of labour and capital and, what is more important still, by the final units of labour and capital in these respective years? Is there historical validity in the theories of decreased imputed productivity? Can we measure the probable slopes of the curves of incremental product which are imputed to labour and to capital, and thus give greater definiteness to what is at present purely a hypothesis with no quantitative values attached? Are the processes of distribution modelled at all closely upon those of the production of values?

In order to have a reliable picture of the context in which prof Cobb and prof Douglas formed their theory, it is useful to shed light upon one little detail, namely the timeline of the data they started with. The 1922 report from the Census Bureau, which seems to have caused all the trouble, gives data for: 1912, 1904, 1900, 1890, 1880, 1870, 1860, and 1850. Just to give an idea to those only mildly initiated in economic history: the time covered here is the time when American railroads flourished, then virtually collapsed for want of sufficient financing, and almost miraculously rose from the ashes. What rose with them, kind of in accompaniment, was the US dollar, finally considered a serious currency, together with the Federal Reserve, one of the most convoluted financial structures in the history of mankind. This is the time when the first capitalistic crisis based on overinvestment broke out and brought a wave of mergers and acquisitions, and on the crest of that wave Mr J. P. Morgan came onto the scene. Europe said ‘No!’ to the stability of the Vienna Treaty, and nations were getting serious about waging war on each other.

In short, the period statistically covered in that 1922 Census report had been a hell of a ride. Studying the events of that timeline must have been a bit like inventorying the outcomes of first contact with an alien civilisation. In that historical hurricane, prof Cobb and prof Douglas tried to keep their bearings and to single the statistics of productive assets out of the total capital account of the nation, i.e. after accounting for other types of fixed property. What came out was a really succinct piece of empirics, namely three benchmark years: 1889, 1899, and 1904. It all matters, in my opinion, because it shows a fundamental trait of the Cobb-Douglas production function: it had been designed, originally, to find an underlying, robust logic as for how social wealth is generated, against a background of extremely turbulent economic and political changes, and that logic was being searched for with very sparse data, covering long leaps in time, and necessitating a lot of largely arbitrary intellectual shortcuts.

The original theory by Charles W. Cobb and Paul H. Douglas was like a machete one uses to cut one’s way out of a bloody mess of entangled vegetation. It wasn’t, at least originally, a tool for the fine measurements we are so used to nowadays. Does it matter? My intuition tells me that yes, it does. It is the well-known principle of the influence that the observer has on the observed object. When we study big objects, like big historical jumps and leaps, the methodology of measurement we use is likely to have little influence on the measurement itself, as compared to studying small, incremental changes. When you measure a building, your relative error is likely to be much smaller than when you measure the cogwheels inside a Swiss watch.

Thus, my point is that the original theory of the production function, as formulated by Charles W. Cobb and Paul H. Douglas, was an attempt to make sense out of a turbulent historical change. I think this was precisely the reason for its subsequent success in economic sciences: it gave a lot of people a handy tool for understanding what the hell had just happened. It is interesting to see how those two authors perceived their own theory. At the end of their paper, they formulate directions for further research. I am quoting in full the two paragraphs I judge the most interesting: ‘Thus we may hope for: (1) An improved index of labour supply which will approximate more closely the relative actual number of hours worked not only by manual workers but also by clerical workers as well; (2) a better index of capital growth; (3) an improved index of production which will be based upon the admirable work of Dr. Thomas; (4) a more accurate index of the relative exchange value of a unit of manufactured goods. In analysing this data, we should (1) be prepared to devise formulas which will not necessarily be based upon constant relative “contributions” of each factor to the total product but which will allow for variations from year to year, and (2) will eliminate so far as possible the time element from the process’.

The last sentence is probably the most intriguing. Cobb and Douglas clearly signal that they had a problem with time. I mean, most people have, but in this precise case the two scientists clearly suggest that the model they provide is a simplification, and most likely an oversimplification, of a phenomenon whose unfolding in time was not quite clearly elucidated. The funny part is that today we use the Cobb-Douglas production function for assessing exactly the class of phenomena those two distinguished scientific minds had the most doubts about: changes over time. They clearly suggest that the greatest weakness of their approach is its robustness over time, and this is exactly what we do with their model today: we use it to assess temporal sequences. Kind of odd, I must say. Mind you, this is what happens when you figure out something interesting and then you die. Take Adam Smith. Nowhere in his writings, literally nowhere, did he use the expression ‘invisible hand of the market’. You can check for yourself. Still, this stupid metaphor (how many hands does a market have?) has become the staple of the Smithian approach. There are two ways out of that dire predicament: you don’t die, or you don’t invent anything interesting. The latter seems relatively easier.

Right, time to go back forward in time. I mean, back to the present, or, rather, to a more recent past. Time is bloody complicated. My point is that I take that Total Factor Productivity from Penn Tables 9.0, and I regress it linearly, by Ordinary Least Squares, on a bunch of things I think are important. As I study any social phenomenon that I can measure, I assume that three kinds of things are important: the functional factors, the scale effects, and the residual value. The functional factors are phenomena that I suspect of being really at work, shaping the value of my outcome variable, the TFP in this precise occurrence. I have four prime suspects. Firstly, the relative speed of technology ageing, measured as the share of aggregate amortization in the GDP. ‘AmortGDP’ for friends. Secondly, the Keynesian intuition that governments have an economic role to play, and so I take the share of government spending in the available stock of fixed capital. I label it ‘GovInCK’. My third suspect is the ideas we have, and I measure ideas as the number of resident patent applications per million people, on average. Spell ‘PatAppPop’. Finally, productivity in making things could have something to do with efficiency in using energy. So I casually drop the energy use per capita, in kilograms of oil equivalent, into my model, under the heading of ‘EnergyUse’.

Now, I assume that size matters. Yes, I assume so. The size of what I am measuring is given by three variables: GDP (no need to abridge), population (Pop), and the available capital stock (CK). When size matters, some of that size may remain idle, out of reach of the functional factors. It just may, mind you; it doesn’t have to. This is why, at the outset, I assume the existence of a residual value in TFP, independent from functional factors and from scale effects. Events tend to get out of hand, in life as a whole, so just for safety, I take the natural logarithm of everything. Logarithms are much more docile than real things, and the natural logarithm is kind of knighted, as it has Euler’s number as its base. Anyway, the model I am coming up with is the following:

ln(TFP) = a1*ln(GDP) + a2*ln(Pop) + a3*ln(CK) + a4*ln(AmortGDP) + a5*ln(GovInCK) + a6*ln(PatAppPop) + a7*ln(EnergyUse) + Residual
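For whoever would like to reproduce this kind of estimation, here is a minimal sketch in Python. Mind you, it is a sketch under assumptions, not a faithful record of my workflow: the file name ‘tfp_panel.csv’ and its column names are hypothetical, and you would first have to merge Penn Tables 9.0 with the other variables yourself.

# A minimal sketch of the estimation. The file 'tfp_panel.csv' is hypothetical:
# one row per country/year, columns named as in the model above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv('tfp_panel.csv')

# Natural logarithm of everything, as announced; drop rows where the logs blow up.
cols = ['TFP', 'GDP', 'Pop', 'CK', 'AmortGDP', 'GovInCK', 'PatAppPop', 'EnergyUse']
logs = np.log(data[cols]).replace([np.inf, -np.inf], np.nan).dropna()

# Ordinary Least Squares, with a constant standing for the residual value.
X = sm.add_constant(logs.drop(columns='TFP'))
result = sm.OLS(logs['TFP'], X).fit()
print(result.summary())  # coefficients, standard errors, p-values, R-squared

The constant reported in the summary plays the part of my residual value.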

Good, I am checking. Sample size: n = 2,323 country/year observations. Not bad. Accuracy in explaining the variance of TFP: R² = 0.643. Quite respectable. Now, the model:

ln(TFP) = 1.094*ln(GDP) - 0.471*ln(Pop) - 0.644*ln(CK) - 0.033*ln(AmortGDP) - 0.127*ln(GovInCK) - 0.040*ln(PatAppPop) - 0.030*ln(EnergyUse) - 3.967

Variable                Coefficient    Std. error    Significance
ln(GDP)                      1.094        0.035        p < 0.001
ln(Pop)                     -0.471        0.013        p < 0.001
ln(CK)                      -0.644        0.028        p < 0.001
ln(AmortGDP)                -0.033        0.030        p = 0.276
ln(GovInCK)                 -0.127        0.013        p < 0.001
ln(PatAppPop)               -0.040        0.004        p < 0.001
ln(EnergyUse)               -0.030        0.013        p = 0.018
Residual value              -3.967        0.126        p < 0.001

Well, ‘moderate success’ seems to be the best term to describe those results. The negative residual value is just stupid. Strictly speaking, as the model is in logarithms, it does not mean negative productivity: it translates into a baseline factor of e^(-3.967) ≈ 0.019, which is implausibly tiny anyway. Probably too much collinearity between the explanatory variables. Something to straighten up in the future. The scale effect of GDP appears as the only sensible factor in the equation. Lots of work ahead.
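A quick way to test that suspicion of collinearity, by the way, still under the same hypothetical setup as in the sketch above, is to compute variance inflation factors for the regressors:

# Variance inflation factors: values well above 10 flag heavy collinearity.
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))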

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

[2] Cobb, Charles W. and Paul H. Douglas (1928), “A Theory of Production”, The American Economic Review, 18(1), Supplement: Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139–165

What’s up, Joseph?

 

My editorial on YouTube >> https://youtu.be/XnlGWiliAwk

I am starting to mirror my research blog at a new site, namely https://discoversocialsciences.wordpress.com . My goal for the next year or so is to create a fully-blown scientific and educational website devoted to social sciences. I thought that WordPress is a good tool to that end. Anyway, for the months to come, my readers from http://researchsocialsci.blogspot.com can find a copy of each post at https://discoversocialsciences.wordpress.com and vice versa.

This said, I am getting back to scientific writing. That science is not going to develop by itself: it needs me. Yesterday, in my update in French (see http://researchsocialsci.blogspot.com/2017/08/a-bientot-milton.html ), with the help of Milton Friedman (yes, I know he has not been among us since 2006, and still he wants to be of some help), I started to lay the foundations for that book I intend to write this year, to satisfy the terms of my research grant, and the terms of my curiosity. As I was doing it, some facts attracted my attention. This is usually how it starts with me. Some facts attract the attention of the curious ape in me, and then, man, God only knows what can happen next. As I am basically agnostic, I can even face a situation when no one knows what happens next.

Anyway, facts attracted my attention. Calm down, ape, we are going to play with those facts in a minute. Now, please, let me explain to the readers. Nice ape. So I am explaining. My basic field of interest in that research grant is innovation, which you might already know from my last two posts. In economic sciences, scientific invention is treated very much as exogenous to innovation in production. Probably it goes back to Joseph Schumpeter and his theory of business cycles (see Schumpeter 1939[1]). Schumpeter assumed that science is exciting, of course, but only some science makes any stir in the world of business. Sometimes, a scientific invention hits business so hard that the latter gets knocked off balance or, in elaborate scientific terms, it is pushed out of the neighbourhood of general Walrasian equilibrium, where it was dozing calmly just before the shock, and it goes into creative destruction.

Having made that observation, Joseph Schumpeter couldn’t but explain what is so special about those precise scientific inventions which make the world of business rock and sway. His assertion was that science knocks business off balance when said science can significantly improve the efficiency of the production function in business. Economic sciences use the term ‘productivity’ to express this efficiency. It is an old intuition, going back to Adam Smith and David Ricardo, that productivity is the key to successful business practice. Still, for a long time after those first T-rexes of economics, it was assumed that business actions taken by business people simply display different levels of efficiency, full stop. If someone was really keen on moral philosophy, like John Stuart Mill, they could add that it is a good thing to develop efficient practices, and generally a bad habit to indulge in inefficient ones. Still, some kind of diversity in productive efficiency was implicitly assumed to exist in the social fabric around us.

Joseph Schumpeter took a different hand of cards to approach the problem. Born in 1883, he had a scientific mind bred both on the stupefying speed of development in industrial production, and on the great reshufflings of industrial structures, made of spectacular bankruptcies, mergers, and acquisitions. To Joseph Schumpeter, capitalism was by definition something similar to the battle for Gondor. It was supposed to be epic, turbulent, and spectacular, or it didn’t count as real capitalism. Schumpeter perceived technologies as something akin to tsunamis. His question was simple: when two or more tsunamis meet at some point, which one prevails? Answer: the most powerful one. The transformative power of new technologies was supposed to be observable as their capacity to increase the efficiency with which production factors are used, i.e. their productivity.

Look, Joseph, I fully agree with you that new technologies should be more productive than the old ones. Only you see, Joseph, after your death we started to have sort of a problem: they are not. I mean, new technologies do not seem to be definitely more productive than the old ones. I am sorry, Joseph. I know that any respectable scientist has the right to a quiet afterlife, but I just had to tell you. You take that database called Penn Tables 9.0 (Feenstra et al. 2015[2]). I know you liked data and statistics, Joseph. This was the basis for your critical stance towards Karl Marx, who did not really bother about real numbers. So you take that Penn Tables 9.0, Joseph, and you take out of it a variable called ‘total factor productivity’. They even have it, over at Penn Tables, in two different flavours.

I know you are an inquisitive mind, Joseph, so you can read about the exact recipes of those two flavours at http://www.rug.nl/ggdc/productivity/pwt/related-research-papers/capital_labor_and_tfp_in_pwt80.pdf . Anyway, the one labelled ‘ctfp’ measures total factor productivity at current Purchasing Power Parities, with your new home, the USA, serving as the benchmark (USA = 1). The other one, called ‘cwtfp’, measures the welfare-relevant TFP levels at current PPPs (USA = 1). I made a data pivot for you, Joseph. You can find it at my Google Drive, right here: https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing

You can see for yourself, Joseph, that this productivity you used to be so keen about is not really keen to cooperate. Back in the day, until the late 1970s, it had been growing gently and in conformity with the economic theory you, Joseph, helped to create. Only after 1979 something broke in the machinery, and total factor productivity started to fall. It is still falling, Joseph, and we don’t exactly know why. I mean, you have those General Electric, Tesla, Microsoft and L’Oréal guys launching another revolutionary technology every two or three years, but these revolutions kind of get bogged down somewhere down the road to Total Factor Productivity.
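If you want to watch that turning point with your own eyes, Joseph, here is a minimal sketch of how one could trace it, assuming the pwt90.xlsx file has been downloaded from http://www.ggdc.net/pwt (the column names ‘year’ and ‘ctfp’ are documented by Penn Tables; the sheet name ‘Data’ is my recollection, to be verified against your copy of the file):

# Unweighted cross-country average of 'ctfp' by year, from Penn World Table 9.0.
import pandas as pd

pwt = pd.read_excel('pwt90.xlsx', sheet_name='Data')
avg_ctfp = pwt.groupby('year')['ctfp'].mean()
print(avg_ctfp)  # watch what happens after 1979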

Still, Joseph, there is light at the end of the tunnel, and this is not a train coming the opposite way. I like physics, Joseph, and I am kind of thinking that we can go a long way with physics. Them people in physics, they say we all need energy. On the top of that, Joseph, we have biology, and biology says we need to eat energy in order to have energy to spend. So I take two basic measures of our efficiency in the use of energy: the consumption of energy per capita, in kilograms of oil-equivalent, and the cereal yield in kilograms per hectare. You can find both of these metrics, as aggregate averages for the global economy, as published by the World Bank, right at this address here: https://drive.google.com/file/d/0B1QaBZlwGxxAZnJldTZDV0pHMWM/view?usp=sharing
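Both series can also be pulled programmatically from the World Bank’s open data, for instance with the pandas-datareader package. The indicator codes below are, as far as I know, the official World Bank ones (EG.USE.PCAP.KG.OE for energy use per capita, AG.YLD.CREL.KG for cereal yield); the rest is a sketch:

# World aggregate ('WLD') of energy use per capita and cereal yield.
from pandas_datareader import wb

indicators = {
    'EG.USE.PCAP.KG.OE': 'energy_use_kgoe_per_capita',
    'AG.YLD.CREL.KG': 'cereal_yield_kg_per_ha',
}
series = wb.download(indicator=list(indicators), country='WLD',
                     start=1960, end=2016).rename(columns=indicators)
print(series.sort_index())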

So, Joseph, I’ll tell you what I think. We, as a species, are still quite young. We didn’t even have to fight off the dinosaurs: a bloody asteroid did the job. We came to the grand landscape of history with kind of a joker card up our sleeve. It is only now that we are realizing the true challenge of staying alive as a civilisation. The good thing is that we obviously learn to get more and more food from your average hectare. I know, not everybody eats cereals. I don’t, for example. Yet, once we have learnt how to get more cereals from one hectare, we can have some carryover to other types of food. I like bananas, for example. More bananas from one average hectare: it sounds optimistic to me. Could work nicely, Joseph, if nothing kills us in the meantime. We are still struggling to manage primary energy use, although we have managed to press the brake over those last decades. Still, I agree, Joseph: total factor productivity is a mess.

So what do we do, Joseph, with that book I am supposed to write by the end of this year, about innovation? I had that idea, Joseph, that I could kind of go a different way than you did. You represented innovation and technological progress as a way towards more efficient production. I am tempted to try a different approach. When we are around, we tend to gather around something: fire, temple, market place etc. As we gather around, there are more and more of us around, and then there is that funny thing that happens: the more of us there are per square kilometre, the more ideas we have per one thousand people. The more densely we live, the more things we can figure out. We do innovation simply because we can, not necessarily because we have precise gains in view. I mean, gains are important, but the process of figuring things out goes on kind of propelled by its own momentum. We invent things, we try them out, sometimes it works just smoothly (the wheel), sometimes we can even have fun with it (cognac and other distillates of fermented vegetable material), and sometimes it is kind of a failure.
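That hunch about density and ideas can at least be eyeballed with the same World Bank toolkit. The indicator codes should be the official ones (EN.POP.DNST for population density, IP.PAT.RESD for resident patent applications, SP.POP.TOTL for population); everything else is a toy check, not a result:

# Density of population vs density of ideas: a toy correlation check.
import numpy as np
from pandas_datareader import wb

raw = wb.download(indicator=['EN.POP.DNST', 'IP.PAT.RESD', 'SP.POP.TOTL'],
                  country='all', start=2000, end=2014).dropna()
raw = raw[(raw['EN.POP.DNST'] > 0) & (raw['IP.PAT.RESD'] > 0)]
raw['pat_per_million'] = raw['IP.PAT.RESD'] / raw['SP.POP.TOTL'] * 1e6

# A positive correlation of the logs would be consistent with the hunch.
print(np.corrcoef(np.log(raw['EN.POP.DNST']),
                  np.log(raw['pat_per_million']))[0, 1])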

So, Joseph, my view of technological change is that of adaptation going on in a loop. One of the most visible patterns in the historical development of mankind is that we create more and more densely populated social structures. Greater density of population creates new social structures, which impose upon us new challenges about how to sustain more people per square mile. This is how and why we invent and try new things. From this point of view, anything we do is a technology. The pattern of my average working day, combined with the working day of my neighbour, and all that combined with the way we feed ourselves and power our machines: it can all be perceived as a technology. Technologies as you defined them, Joseph, like the process of making a car, could be just small building blocks in a much broader and more intricate process.

[1] Schumpeter, J.A. (1939). Business Cycles. A Theoretical, Historical and Statistical Analysis of the Capitalist Process. McGraw-Hill Book Company

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt