I am writing a book right now, and it has me rather absorbed, so I blog much less frequently than I planned. Just to keep up with the commitment, which any blogger has sort of imprinted in their mind, to deliver some meaningful content, I am publishing, in this update, the outline of my first chapter. It has become almost a truism that we live in a world of increasingly rapid technological change. When a statement becomes almost a cliché, it is useful to pass it in review, just to be sure that we understand what the statement is about. From the very pragmatic perspective of an entrepreneur, or, as a matter of fact, that of an infrastructure engineer, technological change means that something old needs to be coupled with, or replaced by, something new. When a new technology comes around, it is like a demon: it is essentially an idea, frequently prone to protection through intellectual property rights, and that idea looks for a body to sneak into. Humans are supposed to supply the body, and they can do it in two different ways. They can tell the new idea to coexist with some older ones, i.e. we embody new technologies in equipment and solutions which we couple functionally with older ones. Take any operating system for computers or mobile phones. At the moment of release, the people who are disseminating it claim it is brand new, but scratch the surface just a little bit and you find 10-year-old algorithms underneath. Yes, they are old, and yes, they still work.
Another way to embody a new technological concept is to make it supplant older ones completely. We do it reluctantly, yet sometimes it really looks like the better idea. Electric cars are a good example of this approach. Initially, the basic idea seems to have consisted in putting electric motors into an otherwise unchanged structure of vehicles propelled by combustion engines. Still, electric propulsion is heavier, as we need to drive those batteries around. Significantly greater weight means the necessity to rethink steering, suspension, structural stability etc., whence the need to design a new structure.
Whichever way of embodying new technological concepts we choose, our equipment ages. It ages physically and morally, in various proportions. Aging in technologies is called depreciation. Physical depreciation means the physical wearing and destruction of a piece of equipment. As it happens – and it happens to anything used frequently, e.g. shoes – we choose between repairing and replacing the worn-out parts. Whatever we do, it requires resources. From the economic point of view, it requires capital. As strange as it may sound, physical depreciation occurs in the world of digital technologies, too. When a large digital system, e.g. that of an airport, is being run, something apparently uncanny happens: some component algorithms of that system just stop working properly, under the burden of too much data, and they need to be replaced sort of on the go, without putting the whole system on hold. Of course, the essential cause of that phenomenon is the disproportion between the computational scale of pre-implementation tests, and that of real exploitation. Still, the interesting thing about those on-the-go patches is that they are not fundamentally new, i.e. they do not express any new concept. They are otherwise known, field-tested solutions, and they have to be this way in order to work. Programmers who implement those patches do not invent new digital technologies; they just keep the incumbent ones running. They repair something broken with something that works smoothly. Functionally, it is very much like repairing a fleet of vehicles in an express delivery business.
As we take care of the physical depreciation occurring in our incumbent equipment and software, new solutions come to the market, and let’s be honest: they are usually better than what we have at the moment. The technologies we hold become comparatively less and less modern as new ones appear. That phenomenon of aging by obsolescence is called moral depreciation. The proportions of physical and moral depreciation in that cocktail depend on the pace of the technological race in the given industry. When a lot of alternative, mutually competing solutions emerge, moral obsolescence accelerates and tends to become the dominant factor of aging in our technological assets. Moral depreciation creates a tension: as we watch the state of the art in our industry progressively move away from our current technological position, determined by the assets we hold, we find ourselves under growing pressure to do something about it. Finally, we come to the point of deciding to invest in something definitely more up to date than what we currently have.
Both layers of depreciation – physical and moral – absorb capital. It seems pertinent to explain how exactly they do so. We need money to pay for the goods and services necessary for repairing and replacing the physically worn parts of our technological basket. We obviously need money to pay for completely new equipment, too. Where does that money come from? Are there any patterns as to its sourcing? The first and most obvious source of money to finance depreciation in our assets is the financial scheme of amortization. In many legal regimes, i.e. in all the developed countries and in a large number of emerging and developing economies, an entity in possession of assets subject to depreciation is allowed to subtract, from its income tax base, a legally determined amount, in order to provide for depreciation.
The legally allowable amount of amortization is calculated as a percentage of the book value ascribed to the corresponding assets, and this percentage is based on their assumed useful life. If a machine is supposed to have a useful life of five years, after all is said and done as for its physical and moral depreciation, I can subtract from my tax base 1/5th = 20% of its book value. Question: which exact book value, the initial one or the current one? It depends on the kind of deal an entrepreneur makes with the tax authorities. Three alternative paths are possible: linear, decreasing, and increasing. When I do linear amortization, I take the initial value of the machine, e.g. $200 000, I divide it into 5 equal parts right after the purchase, thus into 5 instalments of $40 000 each, and I subtract those instalments annually from my tax base, starting from the current year. After linear amortization is over, the book value of the machine is exactly zero.
Should I choose decreasing amortization, I take the current value of my machine as the basis for the 20% reduction of my tax base. The first year, the machine is brand new, worth $200 000, and so I amortize 20% * $200 000 = $40 000. The next year, i.e. in the second year of exploitation, I start with my machine being worth $200 000 – $40 000 = (1 – 20%) * $200 000 = $160 000. I repeat the same operation of amortizing 20% of the current book value: $160 000 – 20% * $160 000 = $160 000 – $32 000 = $128 000. I subtracted $32 000 from my tax base in this second year of exploitation, and, at the end of the fiscal year, I landed with my machine being worth $128 000 net of amortization. A careful reader will notice that decreasing amortization is, by definition, a non-linear function tending asymptotically towards zero. It is a never-ending story, and a paradox. I assume a useful life of 5 years for my machine; hence I subtract 1/5th = 20% of its current value from my tax base, and yet the process of amortization takes de facto longer than 5 years and has no clear end. After 5 years of amortization, my machine is worth $65 536 net of amortization, and I can keep going. The machine is technically dead as useful technology, but I still have it in my assets.
Increasing amortization is based on more elaborate assumptions than the two preceding methods. I assume that my machine will depreciate over time at an accelerating pace, e.g. 10% of the initial value in the first year, 20% annually over the years 2 – 4, and 30% in the 5th year. The underlying logic is that of progressively diving into the stream of the technological race: the longer I hold my technology, the greater the likelihood that someone comes up with something definitely more modern. With the same assumption of $200 000 as initial investment, that makes me write off against my tax base the following amounts: 1st year – $20 000, 2nd ÷ 4th year – $40 000 each, 5th year – $60 000. After 5 years, the net value of my equipment is zero.
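The three schedules can be compared side by side in a short script. This is a minimal sketch, assuming the same $200 000 machine and the rates quoted above; the function names are mine, not any standard accounting library:

```python
def linear(initial, years):
    # Equal instalments of initial/years, down to a book value of zero
    return [initial / years] * years

def decreasing(initial, rate, years):
    # Each year, write off rate * current book value; never quite reaches zero
    write_offs, book = [], initial
    for _ in range(years):
        w = book * rate
        write_offs.append(w)
        book -= w
    return write_offs

def increasing(initial, rates):
    # Rates are fractions of the INITIAL value, e.g. 10%, then 20% x 3, then 30%
    return [initial * r for r in rates]

initial = 200_000
lin = linear(initial, 5)
dec = decreasing(initial, 0.20, 5)
inc = increasing(initial, [0.10, 0.20, 0.20, 0.20, 0.30])

print(lin)                   # five instalments of $40 000
print(dec)                   # $40 000, $32 000, $25 600, ...
print(initial - sum(dec))    # $65 536 still on the books after 5 years
print(sum(inc))              # $200 000: fully amortized
```

The decreasing schedule makes the paradox visible: after five years of 20% write-offs, nearly a third of the initial book value is still sitting in the assets.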
The exact way I can amortize my assets depends largely on the legal regime in force – national governments have their little ways in that respect, using the rates of amortization as incentives for certain types of investment whilst discouraging others – and yet there is quite a lot of financial strategy in amortization, especially in large business structures where ownership is separate from management. We can notice that linear amortization gives comparatively greater savings in terms of tax due. Still, as amortization consists in writing an amount off the tax base, we need to have a tax base in the first place. When I run a well-established, profitable business well past its break-even point, tax savings are a sensible idea, and so is linear amortization of my fixed assets. However, when I run a start-up, still deep in the red zone below the break-even point, there is not really any tax base to subtract amortization from. Recording comparatively greater amortization against operations already running at a loss just deepens the loss, which, at the end of the day, has to be subtracted from the equity of my business, and that doesn’t look good in the eyes of my prospective investors and lenders. Relatively quick, linear amortization is a good strategy for highly profitable operations with access to lots of cash. Increasing amortization could be good for that start-up business, where the relatively greatest margin of operational income turns up some time after day zero of operations.
Interestingly, the least obvious logic comes with decreasing amortization. What is the practical point of amortizing my assets asymptotically down to zero, without ever reaching zero? Good question, especially in the light of a practical fact of life, which the author challenges any reader to test by themselves: most managers and accountants, especially in small and medium-sized enterprises, will intuitively amortize the company’s assets precisely this way, i.e. along the decreasing path. Question: why do people do something apparently illogical? Answer: because there is a logic to it; it is just hard to phrase. What about the logic of accumulating capital? Both linear amortization and the increasing kind lead to the book value of the corresponding assets dropping to zero at some point in time. Writing a lot of value off my assets means that either I subtract the corresponding amount from the passive side of my balance sheet (i.e. I repay some loans or I give away some equity), or I compensate the write-off with new investment. Either I lose cash, or I am in need of more cash. When I am in a tight technological race, and my assets are subject to quick moral depreciation, those sudden drops to zero can put a lot of financial stress on my balance sheet. When I do something apparently detached from my technological strategy, i.e. when I amortize decreasingly, sudden capital quakes are replaced by a gentle, much more predictable descent. Predictable means e.g. negotiable with the banks who lend me money, or with the investors buying shares in my equity.
This is an important pattern to notice in commonly encountered behaviour regarding capital goods: most people will intuitively tend to protect the capital base of their organisation, be it a regular business or a public agency. When choosing between amortizing their assets faster, so as to reflect the real pace of their ageing, or amortizing them slower, thus a bit against the real occurrence of depreciation, most people will choose the latter, as it smoothens the resulting changes in the capital base. We can notice it even in the ways most of us manage our strictly private assets. Let’s take the example of an ageing car. When a car reaches the age at which an average household could consider changing it, like 3 – 4 years, only a relatively tiny fraction of the population, probably not more than 16%, will actually switch to a new car. The majority (the author of this book included, by the way) will rather patch and repair, and claim that ‘new cars are not as solid as those older ones’. There is a logic to that. A new car is bound to lose around 25% of its market value annually over the first 2 – 3 years of its useful life. An old car, aged 7 years or more, loses around 10% or less per year. In other words, when choosing between shiny new things that age quickly and less shiny old things that age slowly, only a minority of people will choose the former. The most common behavioural pattern consists in choosing the latter.
When recurrent behavioural patterns deal with important economic phenomena, such as technological change, an economic equilibrium could be poking its head from around the corner. Here comes an alternative way of denominating depreciation and amortization: instead of denominating it as a fraction of the value attributed to assets, we can denominate it over the revenue of our business. Amortization can be seen as the cost of staying in the game. The technological race takes a toll on our current business. The faster our technologies depreciate, the costlier it is to stay in the race. At the end of the day, I have to pay someone or something that helps me keep up with the technological change happening around me, i.e. I have to share, with that someone or something, a fraction of what my customers pay me for the goods and services I offer. When I hold a differentiated basket of technological assets, each ageing at a different pace and starting from a different moment in time, the aggregate capital write-off that corresponds to their amortization is the aggregate cost of keeping up with science.
Denoting by K the book value of assets, by a the rate of amortization corresponding to one of the strategies sketched above, by P the average price of the goods we sell, and by Q their quantity, we can express the considerations developed above in a more analytical way, as a coefficient labelled A, in equation (1) below.
A = (K*a)/(P*Q) (1)
The coefficient A represents the relative burden of the aggregate amortization of all the fixed assets in hand upon the revenues recorded in a set of economic agents. Equation (1) can be further transformed so as to extract quantities at both levels of the fraction. The factors in the denominator of equation (1), i.e. the prices and quantities of goods sold in order to generate revenues, will be further represented as, respectively, PG and QG, whilst the book value of assets subject to amortization will be symbolized as the arithmetical product PK*QK of the market prices PK of assets and their quantity QK. Additionally, we reduce the rate of amortization ‘a’ to what it really is, i.e. the inverse of an expected lifecycle F, measured in years and ascribed to our assets. Equation (2) below shows an analytical development in this spirit.
A = (1/F)*[(PK*QK)/(PG*QG)] (2)
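As a quick numerical check of the two equations, consider a purely hypothetical firm with $500 000 of fixed assets amortized at 20% a year against $1 000 000 of annual revenue (these figures are illustrative, not drawn from any dataset):

```python
K, a = 500_000, 0.20     # book value of assets, rate of amortization
PQ = 1_000_000           # revenue: average price times quantity sold

A1 = (K * a) / PQ        # equation (1)

F = 1 / a                # expected lifecycle of the assets: 5 years
A2 = (1 / F) * (K / PQ)  # equation (2), reading K as PK*QK and PQ as PG*QG

print(A1, A2)            # both 0.1: amortization absorbs a tenth of revenue
```

The two formulas are, of course, the same thing written two ways; equation (2) merely makes the lifecycle F and the price-quantity structure explicit.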
Before the meaning of equation (2) is explored in more depth, it is worth explaining a little mathematical trick that economists use all the time, and which usually raises doubts in the minds of bystanders. How can anyone talk about an aggregate quantity QG of goods sold, or that of fixed assets, QK? How can we distil those aggregate quantities out of the facts of life? If anyone in their right mind thinks about the enormous diversity of the goods we trade, and of the assets we use, how can we even set a common scale of measurement? Can we add up kilograms of BMW cars with kilograms of food consumed, and use the result as a denominator for kilograms of robots summed up with kilograms of their operating software?
This is a mathematical trick, yet a useful one. When we think about any set of transactions we make, whether we buy milk or machines for a factory, we can calculate some kind of weighted average price in those transactions. When I spend $1 000 000 on a team of robots, bought at unitary price P(robot), and $500 000 on their software, bought at price P(software), the arithmetical operation P(robot)*[$1 000 000 / ($1 000 000 + $500 000)] + P(software)*[$500 000 / ($1 000 000 + $500 000)] will yield a weighted average price P(robot; software) made up of one third of the price of software and two thirds of the price of robots. Mathematically, this operation is called factorisation, and we use it when we suppose the existence of a common, countable factor in a set of otherwise distinct phenomena. Once we suppose the existence of recurrent transactional prices in anything humans do, we can factorise that anything as Price Multiplied By Quantity, or P*Q. Thus, although we cannot really add up kilograms of factories with kilograms of patents, we can factorise their respective prices out of the phenomenon observed and write PK*QK. In this approach, quantity Q is a semi-metaphysical category, something like a metaphor for the overall, real amount of the things we have, make and do.
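The robots-and-software example can be computed directly. Only the two spending figures come from the text; the unit prices below are hypothetical, inserted just to make the weighting visible:

```python
spend_robot, spend_software = 1_000_000, 500_000
total = spend_robot + spend_software

# Hypothetical unit prices, for illustration only
P_robot, P_software = 50_000, 2_000

# Weighted average price: two thirds robot price, one third software price
P_avg = (P_robot * (spend_robot / total)
         + P_software * (spend_software / total))

print(P_avg)
```

Whatever unit prices we plug in, the weights stay at two thirds and one third, because they come from the spending structure, not from the prices themselves.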
Keeping those explanations in mind, let’s have a look at the empirical behaviour of coefficient A, as computed according to equation (2), on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015), and represented graphically in Figure I_1 below. The database known as Penn Tables provides direct information about three big components of equation (2): the basic rate of amortization, the nominal value of fixed assets, and the nominal value of Gross Domestic Product (GDP) for each of the 182 national economies covered. One of the possible ways of thinking about the wealth of a nation is to compute the value of all the final goods and services made by said nation. According to the logic presented in the preceding paragraph, whilst the whole basket of final goods is really diversified, it is possible to nail down a weighted average transactional price P for the whole lot, and, consequently, to factorise the real quantity Q out of it. Hence, the GDP of a country can be seen as a very rough approximation of the value added created by all the businesses in that territory, and changes over time in the GDP as such can be seen as representative of changes in the aggregate revenue of all those businesses.
Figure I_1 introduces two metrics, pertinent to the empirical unfolding of equation (2) over time and across countries. The continuous line shows the arithmetical average of local, national coefficients A across the whole sample of countries. The line with square markers represents the standard deviation of those national coefficients from the average represented by the continuous line. Both metrics are based on the nominal computation of the coefficient A for each year in each given national economy, thus in current prices for each year from 1950 through 2017. Equation (2) gives many possibilities of change in the coefficient A – including changes in the proportion between the price PG of final goods, and the market price PK of fixed assets – and the nominal computation used in Figure I_1 captures that factor as well.
[Figure I_1 Coefficient of amortization in GDP, nominal, world, trend]
In 1950, the average national coefficient A, calculated as specified above, was equal to 6.7%. In 2017, it climbed to A = 20.8%. In other words, the average entrepreneur in 1950 would pay less than one tenth of their revenues to amortize the depreciation of technological assets, whilst in 2017 it was more than one fifth. This change in proportion can encompass many phenomena. It can reflect the pace of scientific change as such, or just a change in entrepreneurial behaviour as regards the strategies of amortization, explained above. Show business is a good example. Content is an asset for television stations, movie makers or streaming services. Content assets age, and some of them age very quickly. Take the tonight news show on any TV channel. Today’s news is much less newsworthy tomorrow, and definitely not news at all the next month. If you have a look at the annual financial reports of TV broadcasters, such as the American classic of the industry, CBS Corporation, you will see insane nominal amounts of amortization in their cash flow statements. Thus, the ascending trend of the average coefficient A, in Figure I_1, could be, at least partly, the result of growth in the amount of content assets held by various entities in show business. It is a good thing to deconstruct that compound phenomenon into its component factors, which is undertaken further below. Still, before the deconstruction takes place, it is good to have an inquisitive look at the second curve in Figure I_1, the square-marked one, representing the standard deviation of coefficient A across countries.
In the common interpretation of empirical numbers, we almost intuitively lean towards average values, as the expected ones in a large set, and yet the standard deviation has a peculiar charm of its own. If we compare the paths followed by the two curves in Figure I_1, we can see them diverge: the average A goes resolutely up whilst the standard deviation in A stays almost stationary in its trend. In the 1950s or 1960s, the relative burden of amortization upon the GDP of individual countries was almost twice as disparate as it is today. In other words, back in the day it mattered much more where exactly our technological assets were located. Today, it matters less. National economies seem to be converging in their ways of sourcing current, operational cash flow to provide for the depreciation of incumbent technologies.
Getting back to science, and thus back to empirical facts, let’s have a look at two component phenomena of the trends sketched in Figure I_1: the pace of scientific invention, and the average lifecycle of assets. As for the former, the coefficient of patent applications per 1 mln people, sourced from the World Bank, is used as a representative metric. When we invent an original solution to an existing technological problem, and we think we could make some money on it, we have the option of applying for legal protection of our invention, in the form of a patent. Acquiring a patent is essentially a three-step process. Firstly, we file the so-called patent application with the patent office competent for the given geographical jurisdiction. Then, the patent office publishes our application, calling out for anyone who has grounds for objecting to the issuance of the patent, e.g. someone we used to do research with, hand in hand, until hands parted at some point in time. As a matter of fact, many such disputes arise, which makes patent applications much more numerous than actually granted patents. If you check patent data, granted patents define the currently appropriated territories of intellectual property, whilst patent applications are pretty much informative about the current state of applied science, i.e. about the path this science takes, and about the pressure it puts on business people towards refreshing their technological assets.
Figure I_2 below shows the coefficient of patent applications per 1 mln people in the global economy. The shape of the curve is interestingly similar to that of the average coefficient A, shown in Figure I_1, although it covers a shorter span of time, from 1985 through 2017. At first sight, it seems to make sense: more and more patentable inventions per 1 million humans, on average, put more pressure on replacing old assets with new ones. Yet, the first sight may be misleading. Figure I_3, further below, shows the average lifecycle of fixed assets in the global economy. This particular metric is once again calculated on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015, op. cit.). Strictly speaking, the database contains a variable called ‘delta’, which is the basic rate of amortization in fixed assets, i.e. the percentage of their book value commonly written off the income tax base as a provision for depreciation. This is the factor ‘a’ in equation (1), presented earlier, and it reflects the expected lifecycle of assets. The inverted value ‘1/a’ gives the exact value of that lifecycle in years, i.e. the variable ‘F’ in equation (2). Here comes the big surprise: although the lifecycle ‘F’, computed as an average for all the 182 countries in the database, does display a descending trend, the descent is much gentler, and much more cyclical, than what we could expect after having seen the trend in the nominal burden A of amortization, and in the occurrence of patent applications. Clearly, there is a push from science upon businesses towards shortening the lifecycle of their assets, but businesses do not necessarily yield to that pressure.
[Figure I_2 Patent applications per 1 mln people]
Here comes a riddle. The intuitive assumption that growing scientific input provokes a shorter lifespan in technological assets proves too general. It obviously does not encompass the whole phenomenon of increasingly cash-consuming depreciation in fixed assets. There is something else. After having cast a look at the ‘1/F’ component factor of equation (2), let’s move to the (PK*QK)/(PG*QG) one. Penn Tables 9.1 provide two variables that allow calculating it: the aggregate value of fixed assets in national economies, and the GDP of those economies. Interestingly, those two variables are provided in two versions each: one at constant prices of 2011, the other at current prices. Before the consequences of that dual observation are discussed, let’s recall some basic arithmetic: we can rewrite (PK*QK)/(PG*QG) as (PK/PG)*(QK/QG). The (PK/PG) component fraction corresponds to the proportion between the weighted average prices of, respectively, fixed assets (PK) and final goods (PG). The other part, i.e. (QK/QG), stands for the proportion between the aggregate quantities of assets and goods. Whilst we refer here to that abstract concept of aggregate quantities, observable only as something mathematically factorized out of something really empirical, there is method to that madness. How big a factory do we need to make 20 000 cars a month? How big a server do we need in order to stream 20 000 hours of films and shows a month? Presented from this angle, the proportion (QK/QG) is much more real. When both the aggregate stock of fixed assets in national economies, and the GDP of those economies, are expressed in current prices, both the (PK/PG) factor and the (QK/QG) factor really change over time. What is observed (analytically) is the full (PK*QK)/(PG*QG) coefficient.
Yet, when prices are held constant, the (PK/PG) component factor does not actually change over time; what really changes is just the proportion between the aggregate quantities of assets and goods.
The factorisation presented above allows another trick at the frontier of arithmetic and economics. The trick consists in creatively using two types of economic aggregates, commonly published in publicly available databases: nominal values as opposed to real values. The former category represents something like P*Q, or price multiplied by quantity. The latter is supposed to have kicked prices out of the equation, i.e. to represent just quantities. With those two types of data we can do something opposite to the procedure presented earlier, which serves to distil real quantities out of nominal values. This time, we have externally provided products ‘price times quantity’, and just quantities. Logically, we can extract prices out of the nominal values. With the two coefficients given in the Penn Tables 9.1 database – the full (PK*QK)/(PG*QG) (current prices) and the partial QK/QG (constant prices) – we can develop the following equation: [(PK*QK)/(PG*QG)] / (QK/QG) = PK/PG. We can take the really observable proportion between the nominal value of fixed assets and that of Gross Domestic Product, and divide it by the proportion between the real quantities of, respectively, assets and final goods, in order to calculate the proportion between the weighted average prices of assets and goods.
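The extraction itself is a single division. A minimal sketch, using as stand-ins the 2017 values quoted later in this chapter (any pair of nominal and real ratios from the database would do):

```python
# Nominal capital-to-GDP ratio (current prices) and real ratio (constant prices)
nominal_ratio = 4.617   # (PK*QK)/(PG*QG), 2017
real_ratio = 4.027      # QK/QG, 2017

price_ratio = nominal_ratio / real_ratio   # PK/PG
print(round(price_ratio, 2))               # roughly 1.15
```

The result lands close to the PK/PG level reported for 2017, with some drift because the published figures are themselves rounded averages.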
Figure I_4, below, attempts to represent all those three phenomena – the change in nominal values, the change in real quantities, and the change in prices – in one graph. As different magnitudes of empirical values are involved, Figure I_4 introduces another analytical method, namely indexation over a constant denominator. When we want to study temporal trends in values which are either measured with different units or display very different magnitudes, we can choose one point in time as the peg value for each of the variables involved. In the case of Figure I_4, the peg year is 2011, as Penn Tables 9.1 use 2011 as the reference year for constant prices. Aggregate values of capital stock and national GDP, when measured in constant prices, are measured in the prices of the year 2011. For each of the three variables involved – the nominal proportion of capital stock to GDP (PK*QK)/(PG*QG), the real proportion thereof QK/QG, and the proportion between the prices of assets and the prices of goods PK/PG – we take their values in 2011 as denominators for the whole time series. Thus, for example, the indexed nominal proportion of capital stock to GDP in 1990 is the quotient of the actual value in 1990 divided by the value in 2011, etc. As a result, we can study each of the three variables as if the value in 2011 were equal to 1.00.
[Figure I_4 Comparative indexed trends in the proportion between the national capital stock and the GDP]
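The indexation procedure is mechanical: divide each observation of a series by that series’ own value in the peg year. A toy sketch (the 2011 and 2017 nominal ratios are those quoted in the text; the 1990 value is hypothetical):

```python
# Nominal capital-to-GDP ratio by year; the 1990 value is made up for illustration
nominal = {1990: 2.5, 2011: 3.868, 2017: 4.617}

peg = 2011
indexed = {year: value / nominal[peg] for year, value in nominal.items()}

print(indexed[peg])     # 1.0 by construction
print(indexed[2017])    # above 1: the ratio grew after the peg year
```

The same division is applied, series by series, to the real proportion QK/QG and the price proportion PK/PG, which is what puts all three curves on the same axis in Figure I_4.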
The indexed trends thus computed are global averages across the database, i.e. averages of national values computed for individual countries. The continuous blue line marked with red triangles represents the nominal proportion between the national stocks of fixed assets and the respective GDP of each country, or the full (PK*QK)/(PG*QG) coefficient. It has been consistently climbing since 1950, and since the mid-1980s the slope of that climb seems to have increased. Just to give a glimpse of actual, non-indexed values: in 1950 the average (PK*QK)/(PG*QG) coefficient was 1.905, in 1985 it reached 2.197, in the reference year 2011 it went up to 3.868, and it ended up at 4.617 in 2017. The overall shape of the curve strongly resembles that observed earlier in the coefficient of patent applications per 1 mln people in the global economy, and in another indexed trend to be found in Figure I_4, that of the price coefficient PK/PG. Starting from 1985, that latter proportion seems to be following almost perfectly the trend in patentable invention, and its actual, non-indexed values seem to be informative about a deep change in business in connection with technological change. In 1950, the proportion between the weighted average prices of fixed assets and those of final goods was PK/PG = 0.465, and even in the middle of the 1980s it kept roughly the same level, PK/PG = 0.45. To put it simply, fixed assets were half as expensive as final goods, per unit of quantity. Yet, since 1990, something has changed: that proportion started to grow; productive assets started to be more and more valuable in comparison to the market prices of the goods they served to make. In 2017, PK/PG reached 1.146. From a world where technological assets were just tools to make final goods, we moved into a world where technologies are goods in themselves. If we look carefully at digital technologies, nanotechnologies or biotech, this general observation strongly holds.
A new molecule is both a tool to make something, and a good in itself. It can make a new drug, and it can be a new drug. An algorithm can create value added as such, or it can serve to make another value-creating algorithm.
Against that background of unequivocal change in the prices of technological assets, and in their proportion to the Gross Domestic Product of national economies, we can observe a different trend in the proportion of quantities: QK/QG. Here we return to questions such as ‘How big a factory do we need in order to make the amount of final goods we want?’. The answer to that type of question takes the form of something like a long business cycle, with a peak in 1994, at QK/QG = 5.436. The presently observed QK/QG (2017) = 4.027 looks relatively modest and is very similar to the values observed in the 1950s. Seventy years ago, we used to be a civilization which needed around 4 units of quantity in technological assets to make one unit of quantity in final goods. Then, starting from the mid-1970s, we started turning into a more and more technology-intensive culture, with more and more units of quantity in assets required to make one unit of quantity in final goods. In the mid-1990s, that asset-intensity reached its peak, and now it is back at the old level.
Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table”, American Economic Review, 105(10), 3150–3182, available for download at www.ggdc.net/pwt