Judges and Consuls

 

Today’s editorial on YouTube

I feel like giving a bit of rest to all that stuff about innovation and the production function. Just for a short while. During that short while, I am returning to one of my favourite interlocutors, Jacques Savary. In 1675, he published his book entitled ‘The Perfect Merchant or General Instructions as Regards Commerce’, with Louis Billaine’s publishing house (Second Pillar of the Grand Salle of the Palace, at Grand Cesar), and with the privilege of the King. In his book, Master Savary discusses at length a then-recent law: the Ordinance of 1673, or the Edict of the King Serving as Regulation For The Business of Negociants And Merchants In Retail as well As In Wholesale. This is my own English translation of the original title in French, namely “ORDONNANCE DE 1673 Édit du roi servant de règlement pour le commerce des négociants et marchands tant en gros qu’en détail”. You can find the full original text of that law at this link: https://drive.google.com/file/d/0B1QaBZlwGxxAanpBSVlPNW9LeFE/view?usp=sharing

I am discussing this ordinance in connection with Jacques Savary’s writings because he was reputed to be its co-author. In his book, he boasts about having been asked by the King (Louis XIV of the House of Bourbon) to participate in a panel of experts in charge of preparing a reform of business law.

I like understanding how things work. By education, I am both a lawyer and an economist, so I like understanding how business works, as well as the law. I have discovered that looking at things in the opposite order, i.e. opposite to the officially presented one, helps my understanding and my ability to find hidden levers and catches in the officially presented logic. When applied to a legal act, this approach of mine sums up, quite simply, to reading the document in the opposite order: I start with the last section and I advance progressively towards the beginning. I have found out that things left to be discussed at the end of a legal act are usually the most pivotal patterns of social action in the whole legal structure under discussion. It looks almost as if most legislators were leaving the best bits for dessert.

In this precise case, the dessert consists in Section XII, or ‘Of The Jurisdiction of Consuls’. In this section, the prerogatives of Judges and Consuls are discussed. The interesting thing here is that the title of the section refers to Consuls, but each particular provision uses exactly this expression: ‘Judges and Consuls’. It looks as if there were two distinct categories of officers, and as if the ordinance in question attempted to bring their actions and jurisdictions to a common denominator. Interestingly, in some provisions of section XII, those Judges and Consuls are opposed to a category called ‘ordinary judges’. A quick glance at the contents of the section informs me that those guys, Judges and Consuls, were already in office at the moment of enacting the ordinance. The law I am discussing attempts to put order in their activity, without creating the institution as such.

Now, I am reviewing the list of prerogatives those Judges and Consuls were supposed to have. As I started with the last section of the legal act, I am starting from the last disposition of the last section. This is article 18, which refers to subpoenas and summonses issued by Judges and Consuls. That means those guys were entitled to force people to come to court. This is not modern business arbitration: we are talking about regular judicial power. That ordinance of the 23rd of March, 1673, puts order in much more than commercial activities: it is part of a larger attempt to put order in adjudication. I can only guess, from that categorization into ‘Judges’, ‘Consuls’, and ‘ordinary judges’, that at the time many parallel structures of adjudication coexisted, maybe even competing against each other over their prerogatives. Judges and Consuls seem to have been victorious in at least some of this general competition for judicial power. Article 15, in the same section XII, says: ‘We declare null all ordinances, commissions, mandates for summoning, and summonses issued in consequence in front of our judges and those of lords, which would revoke those issued in front of Judges and Consuls. We forbid, under the sanction of nullity, to overrule or suspend procedures and prosecutions undertaken in the execution of their verdicts, as well as to bar the way to proceeding in front of them.
We want that, on the grounds of the present ordinance, they be executed, and that the parties who will have presented their requests to overrule, revoke, suspend or defend the execution of their judgments, the prosecutors who will have signed such requests, and the bailiffs or sergeants who will have given notice of such requests, be sentenced each to fifty livres of penalty, half to the benefit of the party, half to the benefit of the poor, and those penalties will not be subject to markdown nor rebate; as regards the payment of which the party, the prosecutors and the sergeants are constrained in solidarity’.

That article 15 is a real treat for institutional analysis. Following my upside-down way of thinking, once again, I can see that at the moment of issuing this ordinance, the legal system in France must have been tons of fun. If anyone was fined, they could argue for marking down the penalty, or at least for having a rebate on it. They could claim they were liable to pay just a part of the fine (‘I did not do it as such; I was just watching them do it!’). If a fine was adjudicated, the adjudicating body had to specify whose benefit this money would contribute to. You could talk and cheat your way through the legal system by playing various categories of officers (bailiffs, sergeants, prosecutors, lords’ judges, royal judges, Judges and Consuls) against each other. At least some of them had the capacity to overrule, revoke, or suspend the decisions of others. This is why we, the King of France, had to put some order in that mess.

Fernand Braudel, in his wonderful book entitled ‘Civilisation and Capitalism’, stated that the end of the 17th century (so the grand theatre where this ordinance happens) was precisely the moment when the judicial branch of government, in the more or less modern sense of the term, started to emerge. A whole class of professional lawyers was already established at the time. An interesting mechanism of inverted entropy put itself in motion. The large class of professional lawyers emerged in a dynamic loop with the creation of various tribunals, arbiters, sheriffs and whatnot. At the time, the concept of ‘jurisdiction’ apparently meant something like ‘as much adjudicating power as you can grab and get away with’. The more fun in the system, the greater the need for professionals to handle it. The more professionals in the game, the greater the market for their services needs to be, etc. Overlapping jurisdictions were far from being as embarrassing as they are seen today: overlapping my judicial power with someone else’s was all the juice and all the fun of doing justice.

That was a general trait of the social order which today we call ‘feudal’: lots of fun as various hierarchies overlapped and competed against each other. Granted, that fun could quite frequently mean paid assassins disguised as regular soldiers, pretending to fend off the King’s musketeers, themselves disguised as paid assassins. This is why that strange chaos, emerging out of a frantic push towards creating rivalling orders, had to be simplified. Absolute monarchy came as such a simplification. It is interesting to study how that absolute monarchy, so vilified in the propaganda of the early revolutionaries, laid the actual foundations of what we know as the modern constitutional state. Constitutional states work because constitutional orders work, and constitutional orders are based, in turn, on a very rigorously observed institutional hierarchy, monopolistic in its prerogatives. If we put democratic institutions, like the parliamentary vote, in the context of the overlapping hierarchies and jurisdictions practised in the feudal world, they would simply not work. Parliamentary votes have power because, and just as long as, there is a monopolistic hierarchy of enforcement, created under absolute monarchies.

Anyway, the Sun King (yes, that was Louis XIV) seems to have had great trust in the institution of Judges and Consuls. He seems to have been willing to give them a lot of power regarding business law, and thus to forward his plan of putting some order in the chaos of the judicial system. Articles 13 and 14, in the same section XII, give an interesting picture of that royal will. Article 13 says that Judges and Consuls, on a request from the office of the King or from his palace, have the power to adjudicate on any request or procedure contesting the jurisdiction of other officers, ordinary judges included, even if the said request regards an earlier privilege from the King. It seems that those Judges and Consuls were being promoted to the position of super-arbiters in the legal system.

Still, article 14 is even more interesting, and it is so intriguing in its phrasing that I am copying its original wording in French here, for you to judge whether I grasped the meaning correctly: ‘Seront tenus néanmoins, si la connaissance ne leur appartient pas de déférer au déclinatoire, à l’appel d’incompétence, à la prise à partie et au renvoi’. I tried to interpret this article with the help of modern legal doctrine in French, and I can tell you, it is bloody hard. It looks like a 17th-century version of Catch-22. As far as I can understand it, the meaning of article 14 is the following: if a Judge or Consul does not have the jurisdiction to overrule a procedure against their jurisdiction, they will be subject to judgment on their competence to adjudicate. More questions than answers, really. Who decides whether a given Judge or Consul has the power to overrule a procedure against their authority? How was this power to be evaluated? What we have here is an interesting piece of nothingness, right where we could expect granite-hard rules of competence. Obviously, the Sun King wanted to put some order in the judicial system, but he left some safety valves in the new structure, just to allow the release of the extra pressure inevitably created by that new order.

Other interesting limitations to the powers of Judges and Consuls come in articles 3 and 6 of the same section XII. Article 3, in connection with article 2, states the jurisdiction of Judges and Consuls over bills of exchange. Before I go further, a bit of commentary. Bills of exchange, at the time, made up a monetary system equivalent to what today we know as account money, together with a big part of the stock market, as well as the market for futures contracts. At the end of the 17th century, bills of exchange were a universal instrument for transferring capital and settling accounts. Circulation of bills of exchange was commonly carried out through recognition and endorsement, which, in practice, amounted to signing your name on the bill that passed through your hands (your business), and, subsequently, admitting (or not) that the said signature was legitimate and valid. The practical problem with endorsement was that with many signatures on the same bill, together with accompanying remarks along the lines of ‘recognised up to the amount of…’, it was bloody complicated to reconstruct the chain of claims. For example, if you wanted to sort of sneak through the system, it came in quite handy to endorse by signature whilst writing the date of your signature a few inches away, so that it looked as if you had signed before someone else. This detail alone provoked disputes about the timeline of endorsement.

Now, in that general context, article 2 of section XII, in the royal ordinance of March 23rd, 1673, states that Judges and Consuls have jurisdiction over bills of exchange between merchants and negociants, or those in which merchants or negociants are the obliged party, as well as letters of exchange and transfers of money between places. Article 3, in this general context, comes with an interesting limitation: ‘We forbid them, nevertheless, to adjudicate on bills of exchange between private individuals, other than merchants or negociants, or where a merchant or negociant is not obliged whatsoever. We want the parties to refer to ordinary judges, just as regarding simple promises’.

We, the King of France, want those Judges and Consuls to be busy just with the type of matters they are entitled to meddle with, and we don’t want their schedules to be filled with other types of cases. This is clear and sensible. Still, one tiny little Catch-22 pokes its head out of that wording. There visibly was a whole class of bills of exchange where merchants or negociants were just the obliged party, the passive one, without having any corresponding claims on other classes of people. Bills of exchange with obliged merchants and negociants involved fell within the jurisdiction of Judges and Consuls, and, in the absence of such involvement, Judges and Consuls were basically off the case. Still, I have seen examples of those bills of exchange, and I can tell you one thing: in all that jungle of endorsements, remarks and clauses to endorsements and whatnot written on those bills, there was a whole investigation to carry out just to establish who was involved as an obligor. Question: who assessed whether a merchant or negociant was involved in the chain of endorsement of a specific bill? How was it assessed?

One final remark. If the term ‘negociant’ sounds strange, you can refer to one of my earlier posts, the one you can find at http://researchsocialsci.blogspot.com/2017/06/comes-time-comes-calm-duke.html .

Significant and logical and nevertheless different from dreams

 

My editorial on YouTube

My mind is navigating between the rocks of the theory of production and productivity. I want to understand, I mean really UNDERSTAND, the phenomenon of decreasing productivity in the global economy. As I see things right now, I have two interesting paths to follow. The first refers to the fundamental distinction made at the very source of the theory we call today ‘the Cobb-Douglas production function’. This function is something that all economics textbooks serve as an hors d’oeuvre before moving further on. In yesterday’s update, in English, I already started to study the content of this theory at its very source, in the original article published in 1928 by professors Charles W. Cobb and Paul H. Douglas[1]. You can follow my initial reflections on that article here https://discoversocialsciences.com/2017/08/06/dont-die-or-dont-invent-anything-interesting/ or through http://researchsocialsci.blogspot.com/2017/08/dont-die-or-dont-invent-anything.html . One of the fundamental distinctions that Charles W. Cobb and Paul H. Douglas introduce at the very beginning of their article is the one between the growth of production due to growth in productivity (thus to technological progress), on the one hand, and that due to an increased input of production factors, on the other. That is my first path. The second refers to my own research on the intensity of depreciation of fixed assets, considered as a measure of the speed of technological change.

Let’s start at the beginning, then, with the initial distinctions made by Charles W. Cobb and Paul H. Douglas. First of all, in order to be able to move quickly between the theory and its empirical tests, I transform the basic equation of the production function. In its theoretical version, the Cobb-Douglas production function is expressed by the following equation:

Q = K^α * L^β * A

where ‘Q’ is, of course, the final output, ‘K’ is the input of physical capital, ‘α’ is its coefficient of return, ‘L’ stands for the input of labour, ‘β’ is the return on labour, and ‘A’ is whatever else you want, elegantly called the ‘coordination factor’.

To make quick empirical verification easier, I transform this function into a linear problem, using natural logarithms:

ln(Q) = c1*ln(K) + c2*ln(L) + Residual component

In this transformed equation, I separate the input of production factors from everything else. I give them the linear coefficients ‘c1’ and ‘c2’, and all the factors of productivity properly speaking are grouped in the ‘residual component’ of the equation. As you can see, in economic sciences it is like in a respectable family: when somebody behaves in a way that is not quite understandable, they get isolated. This is the case of productivity in my problem: I do not really understand its mechanics, so I put it in a straitjacket, well isolated from the rest of the equation by the comfortable sign of addition.
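Just to make this transformation tangible, here is a minimal numerical sketch (synthetic, noise-free data of my own invention, not Penn Tables): fitting the log-linear form by ordinary least squares recovers the exponents of the underlying Cobb-Douglas function.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic Cobb-Douglas economy: Q = A * K^alpha * L^beta
alpha, beta, A = 0.7, 0.3, 2.0
K = rng.uniform(50.0, 500.0, size=200)   # physical capital input
L = rng.uniform(10.0, 100.0, size=200)   # labour input
Q = A * K**alpha * L**beta

# Linearized form: ln(Q) = c1*ln(K) + c2*ln(L) + residual component
X = np.column_stack([np.log(K), np.log(L), np.ones_like(K)])
(c1, c2, resid), *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)

# With no noise, c1 and c2 come back as alpha and beta,
# and exp(resid) comes back as the coordination factor A
```

With real data, of course, the residual component absorbs everything the factor inputs do not explain, which is exactly the quantity I am putting in the straitjacket.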

I use the Penn Tables 9.0 database (Feenstra et al. 2015[2]) as my main source of information. It contains two alternative measures of Gross Domestic Product (GDP): the output-side one, called ‘rgdpo’, and the one on the side of final consumption, served under the acronym ‘rgdpe’. Intuitively, I feel that to talk about production and productivity it is better to use the GDP estimated on the output side. As for the right-hand side of the equation, I have two ready-made measures of physical capital. One, called ‘rkna’, measures the stock of physical capital in US dollars, at constant 2011 prices. The second, designated as ‘ck’, corrects the first with the current purchasing power parity. I had already tested both measures in my research, and the second one, corrected with current purchasing power, seems less exposed to price fluctuations. I can risk the assumption that the ‘ck’ measure is closer to what I could call the objective utility of fixed assets. I therefore decide, for the moment, to focus on ‘ck’ as my measure of capital. Later on, I could come back to this particular distinction and test my equations with ‘rkna’.

As for the contribution of the labour factor, Penn Tables 9.0 provide two measures: the number of persons employed, or ‘emp’, and the average number of hours worked per person, or ‘avh’. Logically, L = emp*avh. In the end, then, I test empirically the following model:

ln(rgdpo) = c1*ln(ck) + c2*ln(emp) + c3*ln(avh) + Residual component

Good, so I test. I have n = 3 319 ‘country-year’ observations, and a coefficient of determination equal to R² = 0.979. Quite respectable, as accuracy of prediction goes. Let’s have a look at the right-hand side of the equation. Here it is:

ln(rgdpo) = 0.687*ln(ck), standard error (0.007), p-value < 0.001

+ 0.306*ln(emp), standard error (0.006), p-value < 0.001

- 0.241*ln(avh), standard error (0.049), p-value < 0.001

+ Residual component = 4.395, standard error (0.426), p-value < 0.001

Now, to be one hundred percent wise and far-sighted as a scientist, I explore the surroundings of this equation. Those surroundings consist, on the one hand, of a null hypothesis regarding productivity. I therefore test an equation without the residual component, where any temporal or spatial difference between individual cases of GDP is explained solely by corresponding differences in the input of production factors, without racking my brain over the residual component, or over the productivity hidden inside it. On the other hand, my surroundings are made of the structural stability (or instability) of my variables. I have a somewhat rudimentary way of grasping structural stability. I am a simple guy, you see. Maybe that is what makes me do science. As I have two dimensions in my database, time and space, I spread my data along the axis of time and I study the variability of their distribution across countries, each year separately, by dividing the square root of the variance across countries by the mean value for all countries. The structure whose stability I test this way is the geographic structure. Geographic structure is not everything, you will say. Quite, although history teaches us that when geography changes, a whole lot of other things can change too.
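That per-year variability measure (square root of the cross-country variance, divided by the cross-country mean) can be sketched in a few lines of Python; the numbers below are toy cross-sections of my own invention, not actual Penn Tables data:

```python
from math import sqrt
from statistics import mean, pvariance

def variability(values):
    """Square root of the cross-country variance, divided by the mean."""
    return sqrt(pvariance(values)) / mean(values)

# Toy GDP-like cross-sections: one list of country values per year
cross_sections = {
    1960: [120.0, 95.0, 210.0, 60.0],
    1980: [300.0, 180.0, 540.0, 130.0],
    2000: [620.0, 410.0, 980.0, 350.0],
}

# One variability coefficient per year, across countries
by_year = {year: variability(values) for year, values in cross_sections.items()}
```

Tracking `by_year` over time, for each variable separately, is the whole test: a flat series of variability coefficients means a stable geographic structure.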

Good, the null hypothesis first. I get rid of the residual component in the equation. With the same number of observations, so n = 3 319, I obtain a coefficient of determination equal to R² = 1.000. That is a bit fishy, because it suggests my equation is in fact an accounting identity, and that should not happen if the variables have any autonomy. Anyway, let’s have a closer look:

ln(rgdpo) = 0.731*ln(ck), standard error (0.004), p-value < 0.001

+ 0.267*ln(emp), standard error (0.004), p-value < 0.001

+ 0.271*ln(avh), standard error (0.007), p-value < 0.001

At first glance, the residual component in my basic equation covered mostly truly residual deviations, apparently caused by anomalies in the influence of individual output per worker, thus of the number of hours worked, on average, per person. In fact, we could do without all that productivity business: differences in the input of production factors can explain 100% of the observed variance in GDP. The measure of productivity thus appears as an explanation of abnormally low returns on capital and labour. I make a pivot out of the database, in which I compute the annual means and variances for each variable, as well as for the residual components of GDP that can be computed on the basis of my first linear equation. I put all that into an Excel file and I place the file on my Google Drive. Voilà! You can find it at this address: https://drive.google.com/file/d/0B1QaBZlwGxxAZHVUbVI4aXNlMUE/view?usp=sharing .

I can make a few observations about that structural stability. In fact, it is one basic observation, which I could then decompose at will: the fundamental thing about this structural stability is that there isn’t any. Perhaps, at a pinch, one could speak of stability in the geography of GDP, but as soon as we move to the right-hand side of the equation, everything changes. As one could expect, it is the residual component of GDP that has the most swing. I have the impression that there are periods, like the early 1960s or the 1980s, when the unexplained geography of GDP became really important. Next, the variable of hours worked per person varies within an interval at least one order of magnitude narrower than the other variables, but it is also the only one that shows a clear tendency to grow over time. From decade to decade, geographic disparities in time spent at work keep widening.

Good, time to sum up. The Cobb-Douglas production function, as I tested it in a rather rudimentary linear form, is far from fulfilling the dreams of Charles W. Cobb and Paul H. Douglas. If you take the trouble to read their article, they wanted a clear separation between the growth of production to be expected as a result of investment, and that due to technological change. Not really obvious, that, judging by the calculations I have just made. These two researchers, in the conclusion of their article, also speak of the need to create a model without the ‘time’ factor in it. Well, that would be hard, given the structural instability of the variables and of the residual component.

I make one last test, that of the correlation between the residual component I obtained with my basic linear equation, and the total factor productivity as given in Penn Tables 9.0. The Pearson correlation coefficient between these two is r = 0.644. Significant and logical and nevertheless different from 1.00, which is interesting to follow.
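For the record, the Pearson coefficient I use here is just the covariance of the two series divided by the product of their standard deviations; a minimal sketch, with made-up numbers:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relation gives r = 1; my residual component and the
# Penn Tables TFP, at r = 0.644, are well short of that
print(round(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # -> 1.0
```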

I remind you that I am in the process of twinning two blogs in two different environments: Blogger and WordPress. You can thus find the same updates on http://researchsocialsci.blogspot.com as well as on https://discoversocialsciences.wordpress.com .

[1] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

Don’t die or don’t invent anything interesting

My editorial on YouTube

 

It’s done. I’m in. I mean, I am in discussing the Cobb-Douglas production function, which seems to be some sort of initiation path for any economist who wants to be considered a real scientist. Yesterday, in my update in French, I started tackling the issue commonly designated in economics as ‘the enigma of decreasing productivity’. Long story in short words: capital and labour work together and generate the final output of the economy, they contribute to said final output with their respective rates of productivity, and those factor contributions can be summed up in order to calculate the so-called Total Factor Productivity, or TFP for close acquaintances. With all those wonders of technology we have, like solar panels, Hyperloop, as well as Budweiser beer supplying rigorously the same taste across millions of its bottles and cans, we should see that TFP rocketing up at an exponential rate. The tiny little problem is that it is actually the opposite. The database I quote and use in my own research so frequently, namely Penn Tables (Feenstra et al. 2015[1]), is very much an attempt at measuring productivity in a complex way. I made a pivot out of that database, focused exclusively on TFP. You can find it here https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing and you can see for yourself: since 1979, the total productivity of production factors has been consistently falling.

There are a few interesting things about that tendency of TFP to fall: it seems to be correlated with a decreasing velocity of money, and with an increasing speed of depreciation of fixed assets, but it also seems to be structurally stable. What? How can I say it is structurally stable? Right, it deserves some explanation. Good, so I explain my point. In that Excel file I have just given the link to, I provide the mean value of TFP for each year, across the whole sample of countries, as well as the variance of TFP among these countries. Now, when you take the square root of the variance (painful, but be brave) and divide it by the mean value, you obtain a coefficient called the ‘variability of distribution’. In a set of data, variability is a proportion or, if you want, a morphology, like arm length to leg length in the body of Michelangelo’s David. In statistics, it is very much the same as in sculpture: as long as the basic proportions remain the same, we have more or less the same body on the pedestal, give or take some extra thingies (clothes on, clothes off, leaf on or off, some head of a mythical monster cut off and held in one hand, etc.). If the coefficient of variability in a statistical distribution remains more or less the same over time, we can venture to hypothesise that the whole thing is structurally stable.
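That ‘morphology’ argument has a simple numerical face: the coefficient of variability is scale-invariant, so it compares the proportions of distributions whose absolute levels differ. A tiny sketch with toy numbers (not actual TFP data):

```python
from math import sqrt
from statistics import mean, pvariance

def variability(values):
    """Coefficient of variability: sqrt(variance) / mean."""
    return sqrt(pvariance(values)) / mean(values)

tfp_then = [0.9, 1.1, 1.4, 0.7]
tfp_now = [x * 0.8 for x in tfp_then]  # same proportions, 20% lower level

# The level fell, but the 'body proportions' of the distribution did not:
print(round(variability(tfp_then), 6) == round(variability(tfp_now), 6))  # -> True
```

This is why a falling mean TFP can coexist with a constant coefficient of variability: the whole statue shrinks without changing its shape.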

If you care to analyse that pivot of TFP that I placed on my Google disc (link provided earlier in this update), you will see that the variability of the cross-sectional distribution of TFP remains more or less constant over time, and quite docile by the way, between 0.3 and 0.6. Nothing that the government should be informed about, really. So we have a diminishing trend in a structurally stable spatial distribution. Structure determines function: it is plausible to hypothesise that it is the geography of production, understood as a structure, which produces this diminishing outcome. This is an extremely interesting path to follow, into which, by the way, I have already made a few shy steps (see http://researchsocialsci.blogspot.com/2017/08/life-idea-research-some-facts-bloody.html ).

Whatever the interest in studying empirical data about TFP as such, I decided to track back the way we approach the whole issue in economic sciences. I decided to track it back in the good old biblical way, back to its ancestry. The whole concept of Total Factor Productivity seems to originate from that of the production function, which, in turn, we owe to Prof. Charles W. Cobb and Prof. Paul H. Douglas, in their common work from 1928[2]. As their paper is presently owned by the JSTOR library, I cannot link to it on my disk. Still, I can make available to you the documents which seem to have been the prime sources of information for Prof. Cobb and Prof. Douglas, and this can be even more fun, as it shows the context of their research, in all its depth and texture. The piece of empirical data that seems to have really inspired their whole work is a report issued in 1922 by the US Department of Commerce, Bureau of the Census, entitled ‘Wealth, Public Debt, and Taxation: 1922. Estimated National Wealth’. You can find it on my Google Drive, here: https://drive.google.com/file/d/0B1QaBZlwGxxAdWFhUE83eEdSbHM/view?usp=sharing . Besides, the two scientists seem to have worked a lot (this is my interpretation of their paper) with annual reports issued by the Federal Trade Commission. I found the report for the fiscal year ended on June the 30th, 1928, so basically published when that paper by Prof. Cobb and Prof. Douglas had already been released. You can find it there: https://drive.google.com/file/d/0B1QaBZlwGxxAbXRlS1JBZk51YUE/view?usp=sharing .

Provisionally, that Census report from 1922 seems to be The Mother of All Production Functions, as it made Prof. Cobb and Prof. Douglas work for six years (well, five; they must have turned the manuscript in at least half a calendar year before publication) on their concept of the production function. So I open that report and try to understand what the poet meant. The foreword starts with the following statement: ‘When the statistician attempts to measure the wealth of a nation, he encounters two distinct difficulties: First, it is hard to define the term “wealth”; second, it is by no means easy to secure the needed data’. Right, so the headache, back in the day, consisted in defining the substance of wealth. Interesting. Let’s stroll further.

The same foreword formulates an interesting chain of ideas, fundamental to our present understanding of economics. The term ‘wealth’ points at two distinct ideas: firstly, private individuals have some private wealth, and, secondly, the society as a whole has some social wealth. Private wealth is, using the phrasing of the report, practically coextensive and nearly equal in value with private property. The value of private property is the market value of the corresponding legal deeds (money, bonds, stocks etc.) minus the debt burdening the holder of those titles. On the other hand, we have social wealth, and here it becomes really interesting. The report states: ‘Social wealth includes all objects having utility, that is, all things which people believe will minister to their wants either immediately or in the not too distant future. In this category are included not only those goods which are scarce or which cost money, but also those which are free, as, for example, water, air, the sun, beautiful scenery, and all those gifts of nature which gratify our desires. This is the kind of wealth to which we generally refer when we say that a nation is wealthy or opulent. It is the criterion that should be used if we wish to ascertain whether a nation is becoming richer or poorer. No other concept of wealth is more definite or more real, yet, from the standpoint of the statistician, this definition of wealth has one very serious drawback – no one has yet devised a satisfactory unit which can be applied practically in measuring the quantity of social wealth’.

Gotcha, Marsupilami! We are cornered with the concept of social wealth, or utility in the objects we have and make. My intuition is that this was the general point of departure for prof Cobb and prof Douglas. Why? Well, let’s quickly read the introductory part of their 1928 paper. The two authors state their research interest in the form of five questions. Firstly, can it be estimated, within limits, whether the observable increase in production was purely fortuitous, whether it was primarily caused by technique, and the degree, if any, to which it responded to changes in the quantity of labour and capital? Secondly, may it be possible to determine, again within limits, the relative influence upon production of labour as compared with capital? May it be possible to deduce the relative amount added to the total physical product by each unit of labour and capital, and, what is more important still, by the final units of labour and capital in these respective years? Is there historical validity in the theories of decreased imputed productivity? Can we measure the probable slopes of the curves of incremental product imputed to labour and to capital, and thus give greater definiteness to what is at present purely a hypothesis with no quantitative values attached? Are the processes of distribution modelled at all closely upon those of the production of values?

In order to have a reliable picture of the context in which prof Cobb and prof Douglas formed their theory, it is useful to shed light on one little detail, namely the timeline of the data they started with. The 1922 report from the Census Bureau, which seems to have caused all the trouble, gives data for: 1912, 1904, 1900, 1890, 1880, 1870, 1860, and 1850. Just to give an idea to those mildly initiated in economic history: the time covered here is the time when American railroads flourished, then virtually collapsed for lack of sufficient financing, and almost miraculously rose from the ashes. What rose with them, kind of in accompaniment, was the US dollar, finally considered a serious currency, together with the Federal Reserve, one of the most convoluted financial structures in the history of mankind. This is the time when the first capitalistic crisis based on overinvestment broke out and brought a wave of mergers and acquisitions, and on the crest of that wave Mr J.P. Morgan came onto the scene. Europe said ‘No!’ to the stability of the Vienna Treaty, and things were getting serious about waging war on each other.

In short, the period statistically referred to in that 1922 Census report had been a hell of a ride. Studying the events of that timeline must have been a bit like inventorying the outcomes of first contact with an alien civilisation. In that historical hurricane, prof Cobb and prof Douglas tried to keep their bearings and to single the statistics of productive assets out of the total capital account of the nation, i.e. after accounting for other types of fixed property. What came out was a really succinct piece of empirics, namely three benchmark years: 1889, 1899, and 1904. This all matters, in my opinion, because it shows a fundamental trait of the Cobb-Douglas production function: it had been designed, originally, to find an underlying, robust logic as to how social wealth is generated, against a background of extremely turbulent economic and political changes, and that logic was being searched for with very sparse data, covering long leaps in time and necessitating a lot of largely arbitrary intellectual shortcuts.

The original theory by Charles W. Cobb and Paul H. Douglas was like a machete that one uses to cut one’s way out of a bloody mess of entangled vegetation. It wasn’t, at least originally, a tool for the fine measurements we are so used to nowadays. Does it matter? My intuition tells me that yes, it does. It is the well-known principle of the influence that the observer has on the observed object. When we study big objects, like big historical jumps and leaps, the methodology of measurement we use is likely to have little influence on the measurement itself, as compared to studying small incremental changes. When you measure a building, your error is likely to be much smaller, in relative terms, than when you measure the cogwheels inside a Swiss watch.

Thus, my point is that the original theory of the production function, as formulated by Charles W. Cobb and Paul H. Douglas, was an attempt to make sense out of a turbulent historical change. I think this was precisely the reason for its subsequent success in economic sciences: it gave a lot of people a handy tool for understanding what the hell had just happened. It is interesting to see how those two authors perceived their own theory. At the end of their paper, they formulate directions for further research. I am repeating the whole content of the two paragraphs I judge the most interesting: ‘Thus we may hope for: (1) An improved index of labour supply which will approximate more closely the relative actual number of hours worked not only by manual workers but also by clerical workers as well; (2) a better index of capital growth; (3) an improved index of production which will be based upon the admirable work of Dr. Thomas; (4) a more accurate index of the relative exchange value of a unit of manufactured goods. In analysing this data, we should (1) be prepared to devise formulas which will not necessarily be based upon constant relative “contributions” of each factor to the total product but which will allow for variations from year to year, and (2) will eliminate so far as possible the time element from the process’.

The last sentence is probably the most intriguing. Charles W. Cobb and Paul H. Douglas clearly signal that they had a problem with time. I mean, most people have, but in this precise case the two scientists clearly suggest that the model they provide is a simplification, and most likely an oversimplification, of a phenomenon whose unfolding in time is not quite clearly elucidated. The funny part is that today we use the Cobb-Douglas production function for assessing exactly the class of phenomena those two distinguished scientific minds had the most doubts about: changes over time. They clearly suggest that the greatest weakness of their approach is robustness over time, and this is exactly what we do with their model today: we use it to assess temporal sequences. Kind of odd, I must say. Mind you, this is what happens when you figure out something interesting and then you die. Take Adam Smith. Nowhere in his writings, literally nowhere, did he use the expression ‘invisible hand of the market’. You can check for yourself. Still, this stupid metaphor (how many hands does a market have?) has become the staple of the Smithian approach. There are two ways out of that dire predicament: you don’t die, or you don’t invent anything interesting. The latter seems relatively easier.

Right, time to go back forward in time. I mean, back to the present, or, rather, to a more recent past. Time is bloody complicated. My point is that I take that Total Factor Productivity from Penn Tables 9.0, and I regress it linearly, by Ordinary Least Squares, on a bunch of things I think are important. As I study any social phenomenon that I can measure, I assume that three kinds of things are important: the functional factors, the scale effects, and the residual value. The functional factors are phenomena that I suspect of being really at work, shaping the value of my outcome variable, the TFP in this precise occurrence. I have four prime suspects. Firstly, the relative speed of technology ageing, measured as the share of aggregate amortization in the GDP; ‘AmortGDP’ for friends. Secondly, the Keynesian intuition that governments have an economic role to play, and so I take the share of government spending in the available stock of fixed capital. I label it ‘GovInCK’. Now, my third suspect are the ideas we have, and I measure ideas as the number of resident patent applications per million people, on average. Spell ‘PatAppPop’. Finally, productivity in making things could have something to do with efficiency in using energy. So I casually drop energy use per capita, in kilograms of oil equivalent, into my model, under the heading of ‘EnergyUse’.

Now, I assume that size matters. Yes, I assume so. The size of what I am measuring is given by three variables: GDP (no need to abridge), population (Pop), and the available capital stock (CK). When size matters, some of that size may remain idle, out of reach of the functional factors. It just may, mind you, it doesn’t have to. This is why, at the beginning, I assume the existence of a residual value in TFP, independent from the functional factors and from the scale effects. Events tend to get out of hand, in life as a whole, so just for safety, I take the natural logarithm of everything. Logarithms are much more docile than real things, and the natural logarithm is kind of knighted, as it has Euler’s number e as its base. Anyway, the model I am coming up with is the following:

ln(TFP) = a1*ln(GDP) + a2*ln(Pop) + a3*ln(CK) + a4*ln(AmortGDP) + a5*ln(GovInCK) + a6*ln(PatAppPop) + a7*ln(EnergyUse) + Residual_value

Good, I am checking. Sample size: n = 2,323 country/year observations. Not bad. Accuracy in explaining the variance of TFP: R2 = 0.643. Quite respectable. Now, the model:

ln(TFP) =  1.094*ln(GDP)      (standard error 0.035, p < 0.001)

          -0.471*ln(Pop)       (standard error 0.013, p < 0.001)

          -0.644*ln(CK)        (standard error 0.028, p < 0.001)

          -0.033*ln(AmortGDP)  (standard error 0.03, p = 0.276)

          -0.127*ln(GovInCK)   (standard error 0.013, p < 0.001)

          -0.04*ln(PatAppPop)  (standard error 0.004, p < 0.001)

          -0.03*ln(EnergyUse)  (standard error 0.013, p = 0.018)

          + Residual_value, with Residual_value = -3.967 (standard error 0.126, p < 0.001)

Well, ‘moderate success’ seems to be the best term to describe those results. The negative residual value is just stupid. It does not make sense to have negative productivity. Probably too much collinearity between the explanatory variables. Something to straighten up in the future. The scale effect of GDP appears as the only sensible factor in the equation. Lots of work ahead.
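The regression above is just an ordinary least squares fit in log space. A minimal sketch, using synthetic data in place of the Penn Tables 9.0 panel (the function name and the data-generating step are my own inventions for illustration):

```python
import numpy as np

def ols_log_model(tfp, gdp, pop, ck, amort_gdp, gov_in_ck, pat_app_pop, energy_use):
    """Regress ln(TFP) on the logs of the scale variables and functional
    factors, plus an intercept standing for the residual value."""
    X = np.column_stack([
        np.log(gdp), np.log(pop), np.log(ck),
        np.log(amort_gdp), np.log(gov_in_ck),
        np.log(pat_app_pop), np.log(energy_use),
        np.ones(len(tfp)),          # intercept = residual value
    ])
    y = np.log(tfp)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs  # a1..a7 followed by the residual value

# Synthetic check: build noiseless data from known coefficients and recover them.
rng = np.random.default_rng(0)
n = 2323
logs = rng.normal(2.0, 0.3, size=(n, 7))
true = np.array([1.094, -0.471, -0.644, -0.033, -0.127, -0.04, -0.03, -3.967])
y_log = logs @ true[:7] + true[7]
series = [np.exp(y_log)] + [np.exp(logs[:, j]) for j in range(7)]
estimated = ols_log_model(*series)
```

With real panel data the fit would of course not be exact, and the standard errors and p-values reported above would come from the usual OLS variance formulas.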

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

[2] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

A reserve just in case

My editorial on You Tube

These last few days, I have realized just how close science, or at least the social sciences, stands to spiritism. I develop a topic, nice and easy, taking it slow, and all of a sudden: bam! there is a classic I just have to quote. Over the last two days, I have already had two serious conversations with two dead men, Milton Friedman and Joseph Schumpeter, and they really were conversations. I have something to say, and I discover that my colleague, most distinguished and most deceased, has a very pertinent remark that cuts right to the quick of the subject I am working on. It is probably because I have finally decided to start fulfilling the obligations contained in my research contract for this year, and so I have begun laying out the logical structure of a book on innovation and technological progress.

There is this strange thing about books: the imperative of a complete review of the subject, which, in turn, requires tracing a sort of intellectual lineage, like a continuous path leading from the classics of my discipline down to my own research. When I write an article, I can choose whether or not to adopt such an approach, depending on how much I want to score extra points with reviewers for having offered ‘a satisfactory review of the literature on the subject’. What is a choice in an article becomes a must in a book. The funny thing about those literature reviews is that they inevitably lead to an impoverishment of intellectual horizons rather than to the preservation of their richness. Every quotation is an abridged version of the original, and every choice of literature is selective with respect to the body of thought actually in place.

Right, enough whining; there is research work that is not going to do itself. When you deal with innovation and technological progress, one question keeps coming back like a boomerang: total factor productivity, or, in a more accessible version, TFP, short for ‘Total Factor Productivity’. To make reading easier, as well as my own writing, I will use TFP from now on to designate total factor productivity. I do hope nobody takes offence. Now, basic economic theory says we can speak of true technological progress only if TFP grows as a result of an older technology being replaced by a new one. Only there is a small problem with that quasi-axiomatic assumption: nobody believes any more, today, that TFP grows. There are facts saying it does not grow, and even in science it is dangerous to ignore facts.

The facts, I can serve them up right away. The database known as Penn Tables 9.0 (Feenstra et al. 2015[1]) provides very complete data on TFP for each country separately, between 1950 and 2014. I have taken the liberty of publishing, on my Google Disc, the calculation of the mean, as well as the variance of TFP, both at the global scale. You can find the corresponding Excel file, in English, at this address: https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing . The problem with TFP is that it stopped growing in 1979 and has been sliding gently towards lower and lower values ever since. The slide is elegant, true, but it seems inexorable. So we have Microsoft, we have Tesla, we have the TGV, we even have the Barbie doll, and all of that seems to contribute to the waste of production factors rather than to their more productive use. What is quite strange, too, although strange at a slightly more initiated level of quantitative analysis, is the structural stability of the worldwide distribution of TFP among the countries covered in Penn Tables 9.0. In my Excel file, I give the mean and the variance of TFP for each year separately. Now, if you take the square root of the variance, and then divide it by the mean, you obtain a coefficient called ‘variability of the distribution’. Variability is a proportion, like the proportion between arm and leg in a human body: if the proportion stays more or less the same, we assume a similar morphological structure. The variability of the distribution of TFP stays nicely confined to an interval between 0.3 and 0.6: that is quite low, as variability goes, and it changes very little over time. Nothing to alert the family about. This thing is structurally stable.
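The variability coefficient described above, square root of the variance divided by the mean, takes a couple of lines to compute. The sample values below are made up for illustration, not taken from Penn Tables:

```python
import math

def variability(values):
    """Variability of a distribution: the square root of the (population)
    variance, divided by the mean; i.e. a coefficient of variation."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(variance) / mean

# Made-up cross-country TFP levels for one year: mean 5, variance 4
cv = variability([2, 4, 4, 4, 5, 5, 7, 9])  # -> 0.4
```

Computed year by year over the Penn Tables panel, this is the quantity that stays confined between 0.3 and 0.6.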

If a structure is stable, and the measurable output of that structure, the average TFP in the global economy, shows a decreasing trend, it most probably means that the structure in question is itself producing that trend. There is something in the geography of TFP across the globe that makes it fall. Here is a riddle. Great! The curious ape in me finally has something interesting to explore. I carry out the exploration in two directions. On the one hand, I try to guess what the poet meant: I consult the methodological notes of the creators of Penn Tables. You can find those notes here: http://www.rug.nl/ggdc/productivity/pwt/related-research-papers/capital_labor_and_tfp_in_pwt80.pdf . On the other hand, I run econometric tests to see how TFP correlates with other economic variables.

As I review the theoretical foundations of the data presented in the methodological notes, one thing attracts my attention in particular: the methodology assumes a direct observation of the productivity of capital, and then derives the productivity of labour on the basis of a Cobb-Douglas-type production function. A production function shows the proportion between the input of production factors and the output itself. The Cobb-Douglas type assumes perfect substitution between capital and labour as production factors. Personally, I do not understand why economic science is so attached to this assumption of perfect substitution. Sure, it is comfortable: once you adopt it, you no longer have to rack your brain over all the possible configurations of imperfect substitution. Only, comfort is not necessarily what we look for in science, and besides, this assumption practically never works in practice: as soon as you put it into a model, you immediately have to add extra parameters to balance your equations. So what is the point? Perfect substitution is an old cow, and old cows are not to be milked any more, on pain of infectious surprises.
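For reference, the Cobb-Douglas-type production function discussed here can be sketched as follows; the parameter values are illustrative assumptions of mine, not those of the Penn Tables methodology:

```python
def cobb_douglas(capital, labour, a=1.0, alpha=0.3):
    """Cobb-Douglas output: Y = A * K^alpha * L^(1 - alpha). With the two
    exponents summing to one, returns to scale are constant; alpha, the
    output share of capital, is an illustrative value here."""
    return a * capital ** alpha * labour ** (1 - alpha)

# Constant returns to scale: doubling both inputs doubles output.
doubled = cobb_douglas(2.0, 2.0) / cobb_douglas(1.0, 1.0)
```

Given observed output and one factor’s productivity, the other factor’s contribution can then be backed out from this functional form, which is the move the methodological notes make.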

In any case, the Penn Tables methodology for computing TFP is based on the productivity of capital, and capital is accumulating in the global economy at a mad pace. You can consult Piketty and Zucman on this subject, for example (Piketty, Zucman 2014[2]). I have also made a quick pivot in Penn Tables 9.0, to give you an idea of the change in the proportions between fixed assets and GDP in the global economy; you can find it in an Excel file at this address: https://drive.google.com/file/d/0B1QaBZlwGxxAS2wxOFhfSzJjcjg/view?usp=sharing .

So, to solve the riddle of decreasing TFP, it seems useful to look at capital accumulation. Actually, on that side, I believe I can contribute something. In the article I wrote in the spring (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2975683 ), I think I proved, in a convincing way, that the accumulation of capital in corporate balance sheets follows a hamster pattern: the faster the ageing of the technologies in place, the more capital is accumulated in the balance sheets, as a reserve just in case those technologies should age even faster.

Right, to finish this update, I have a particular message for my readers. I have the ambition of developing this research blog into a complete resource in the field of social sciences. I am therefore progressively navigating towards a separate website, and I am starting by mirroring the content of the blog you know, accessible at http://researchsocialsci.blogspot.com , on a site built with Word Press, namely https://discoversocialsciences.wordpress.com . In the months to come, I will make twin updates on both sites, until I am sure that all my readers have effectively migrated to https://discoversocialsciences.wordpress.com . I therefore invite you to subscribe to https://discoversocialsciences.wordpress.com .

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

[2] Piketty, T., Zucman, G. (2014). Capital is back: Wealth-income ratios in rich countries 1700-2010. Quarterly Journal of Economics: 1255–1310

What’s up, Joseph?

 

My editorial on You Tube >> https://youtu.be/XnlGWiliAwk

I am starting to mirror my research blog at a new site, namely https://discoversocialsciences.wordpress.com . My goal for the next year or so is to create a full-blown, scientific and educational website devoted to social sciences. I thought that Word Press is a good tool to that end. Anyway, for the months to come, my readers from http://researchsocialsci.blogspot.com can find a copy of each post at https://discoversocialsciences.wordpress.com and vice versa.

This said, I am getting back to scientific writing. That science is not going to develop by itself: it needs me. Yesterday, in my update in French (see http://researchsocialsci.blogspot.com/2017/08/a-bientot-milton.html ), with the help of Milton Friedman (yes, I know he has been gone since 2006, and still he wants to be of some help), I started to lay the foundations for that book I intend to write this year, to satisfy the terms of my research grant, and the terms of my curiosity. As I was doing it, some facts attracted my attention. This is usually how it starts with me. Some facts attract the attention of the curious ape in me, and then, man, God only knows what can happen next. As I am basically agnostic, I can even face a situation when no one knows what happens next.

Anyway, facts attracted my attention. Calm down, ape, we are going to play with those facts in a minute. Now, please, let me explain to the readers. Nice ape. So I am explaining. My basic field of interest in that research grant is innovation, which you might already know from my last two posts. In economic sciences, scientific invention is treated very much as exogenous to innovation in production. It probably goes back to Joseph Schumpeter and his theory of business cycles (see Schumpeter 1939[1]). Schumpeter assumed that science is exciting, of course, but only some science causes any stir in the world of business. Sometimes, a scientific invention hits business so hard that the latter is knocked off balance, or, in elaborate scientific terms, it is pushed out of the neighbourhood of general Walrasian equilibrium, where it was dozing calmly just before the shock, and it goes into creative destruction.

Having made that observation, Joseph Schumpeter couldn’t but explain what is so special about those precise scientific inventions, which make the world of business rock and sway. His assertion was that science knocks business off balance when said science can significantly improve the efficiency of the production function in business. Economic sciences use the term ‘productivity’ to express this efficiency. It is an old intuition, going back to Adam Smith and David Ricardo, that productivity is the key to successful business practice. Still, for a long time since those first T-rexes of economics, it was assumed that business actions taken by business people simply display different levels of efficiency, full stop. If someone was really keen on moral philosophy, like John Stuart Mill, they could add that it is a good thing to develop efficient practices, and generally a bad habit to indulge in inefficient ones. Still, some kind of diversity in productive output was being implicitly assumed to exist in the social fabric around us.

Joseph Schumpeter took a different hand of cards to approach the problem. Born in 1883, he had a scientific mind bred both on the stupefying speed of development in industrial production, and on the great reshufflings in industrial structures, made of spectacular bankruptcies, mergers, and acquisitions. To Joseph Schumpeter, capitalism was by definition something similar to the battle for Gondor. It was supposed to be epic, turbulent, and spectacular, or it didn’t count as real capitalism. Schumpeter perceived technologies as something akin to tsunamis. His question was simple: when two or more tsunamis meet at some point, which one prevails? Answer: the most powerful one. The transformative power of new technologies was supposed to be observable as their capacity to increase efficiency in the use of production factors, that is, their productivity.

Look, Joseph, I fully agree with you that new technologies should be more productive than the old ones. Only you see, Joseph, after your death we started to have sort of a problem: they are not. I mean, new technologies do not seem to be definitely more productive than the old ones. I am sorry, Joseph. I know that any respectable scientist has the right to a quiet afterlife, but I just had to tell you. You take that database called Penn Tables 9.0 (Feenstra et al. 2015[2]). I know you liked data and statistics, Joseph. This was the basis for your critical stance towards Karl Marx, who did not really bother about real numbers. So you take that Penn Tables 9.0, Joseph, and you take out of it a variable called ‘total factor productivity’. They even have it, over at Penn Tables, in two different flavours.

I know you are an inquisitive mind, Joseph, so you can read about the exact recipes of those two flavours at http://www.rug.nl/ggdc/productivity/pwt/related-research-papers/capital_labor_and_tfp_in_pwt80.pdf . Anyway, the one labelled ‘ctfp’ measures total factor productivity at current Purchasing Power Parities, with your new home, the USA, standing for the gauge (USA = 1). The other one, called ‘cwtfp’, measures the welfare-relevant TFP levels at current PPPs (USA = 1). I made a data pivot for you, Joseph. You can find it on my Google Disc, right here: https://drive.google.com/file/d/0B1QaBZlwGxxAZ3MyZ00xcV9zZ1U/view?usp=sharing
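That USA = 1 normalization simply means dividing every country’s level by the American one. A sketch, with made-up numbers and country codes (not actual Penn Tables values):

```python
def normalize_to_gauge(tfp_levels, gauge="USA"):
    """Re-express TFP levels relative to a gauge country, the way Penn
    Tables' 'ctfp' and 'cwtfp' are normalized (USA = 1). The input values
    used below are made up for illustration."""
    base = tfp_levels[gauge]
    return {country: level / base for country, level in tfp_levels.items()}

relative = normalize_to_gauge({"USA": 1.2, "FRA": 0.9, "POL": 0.6})
```

A country with a relative level of 0.75 is thus measured at three quarters of the American productivity level in that year.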

You can see for yourself, Joseph, that this productivity you used to be so keen about is not really keen to cooperate. Back in the day, until the late 1970s, it had been growing gently and in conformity with the economic theory that you, Joseph, helped to create. Only after 1979, something broke in the machinery, and total factor productivity started to fall. It is still falling, Joseph, and we don’t exactly know why. I mean, you have those General Electric, Tesla, Microsoft and l’Oreal guys launching another revolutionary technology every two or three years, but those revolutions kind of get bogged down somewhere down the road to Total Factor Productivity.

Still, Joseph, there is light at the end of the tunnel, and this is not a train coming the opposite way. I like physics, Joseph, and I am kind of thinking that we can go a long way with physics. Them people in physics, they say we all need energy. On top of that, Joseph, we have biology, and biology says we need to eat energy in order to have energy to spend. So I take two basic measures of our efficiency in the use of energy: the consumption of energy per capita, in kilograms of oil equivalent, and the cereal yield in kilograms per hectare. You can find both of these metrics, as aggregate averages for the global economy, as published by the World Bank, right at this address: https://drive.google.com/file/d/0B1QaBZlwGxxAZnJldTZDV0pHMWM/view?usp=sharing

So, Joseph, I’ll tell you what I think. We, as a species, are still quite young. We didn’t even have to fight off the dinosaurs: a bloody asteroid did the job. We came to the grand landscape of history with kind of a joker card up our sleeve. It is only now that we are realizing the true challenge of staying alive as a civilisation. The good thing is that we are obviously learning to get more and more food from your average hectare. I know, not everybody eats cereals. I don’t, for example. Yet, once we have learnt how to get more cereals from one hectare, we can have some carryover to other types of food. I like bananas, for example. More bananas from one average hectare, it sounds optimistic to me. It could work nicely, Joseph, if nothing kills us in the meantime. We are still struggling to manage primary energy use, although we have managed to hit the brakes over the last few decades. Still, I agree, Joseph: total factor productivity is a mess.

So what do we do, Joseph, with that book I am supposed to write by the end of this year, about innovation? I had that idea, Joseph, that I could kind of go a different way than you did. You represented innovation and technological progress as a way towards more efficient production. I am tempted to try a different approach. When we are around, we tend to gather around something: fire, temple, market place etc. As we gather around, there are more and more of us around, and then there is that funny thing that happens: the more of us there are per square kilometre, the more ideas we have per one thousand people. The more densely we live, the more things we can figure out. We do innovation simply because we can, not necessarily because we have precise gains in view. I mean, gains are important, but the process of figuring out things goes on kind of propelled by its own momentum. We invent things, we try them out, sometimes it works just smoothly (the wheel), sometimes we can even have fun with it (cognac and other distillates of fermented vegetable material), and sometimes it is kind of a failure.

So, Joseph, my view of technological change is that of adaptation going on in a loop. One of the most visible patterns in the historical development of mankind is that we create more and more densely populated social structures. Greater density of population creates new social structures, which impose upon us new challenges about how to sustain more people per square mile. This is how and why we invent and try new things. From this point of view, anything we do is a technology. The pattern of my average working day, combined with the working day of my neighbour, and all that combined with the way we feed ourselves and power our machines, it can all be perceived as a technology. Technologies that you defined, Joseph, like the process of making a car, could be just small building blocks in much broader and more intricate a process.

[1] Schumpeter, J.A. (1939). Business Cycles. A Theoretical, Historical and Statistical Analysis of the Capitalist Process. McGraw-Hill Book Company

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt