Clink!
The coin dropped… I have been turning that conceptual coin between my synapses
for the last 48 hours, and here it is. I know what I have been thinking about,
and what I want to write about today. I want to study the possible ways to
restart business and the economy in the midst of the COVID-19 pandemic.
There
is a blunt, brutal truth: the virus will stay with us until we mass-distribute
an effective vaccine against it, and that is going to take many months, most
probably more than a year. Until then, we need to live our lives, and we cannot
live them in permanent lockdown. We need to restart, somehow, our socio-economic
structures. We need to overcome our fears, and start living in the presence of,
and in spite of danger.
Here
come three experiences of mine, which add up to the financial concept I am
going to lay out a few paragraphs further down. The first experience is that of observing
a social project going on in my wife’s hometown, Starachowice, Poland, population
50 000. The project is Facebook-named ‘The Visible Hand’ (the original
Polish is: Widzialna Ręka), and it emerged spontaneously with the COVID-19
crisis. I hope to be able to present the full story of those people, which I
find truly fascinating, and now, I just give a short glimpse. That local community
has created, within less than two weeks, something like a parallel state, with its
supply system for the local hospital, and for people at risk. They have even gone
into developing their own 3D-printing technologies, to make critical medical equipment,
such as face masks. Yesterday, I had a phone conversation with a friend, strongly
involved in that project, and my head still resonates with what he said: ‘Look,
the government is pretty much lost in all that situation. They pretend a lot,
and improvise a lot, and it is all sort of more pretending than actually doing
things. Our local politicians either suddenly evaporated, or make clumsy,
bitchy attempts to boost their popularity in the midst of all that s**t. But people…
Man, people are awesome. We are doing together things that our government
thinks it is impossible to do, and we are even sort of having fun with it. The
sense of community is nothing short of breath-taking’.
My
second experience is about the stock market. If you have been following my
updates since the one entitled ‘Back in the game’, you know that I decided to return to investing in the stock market,
something I had undertaken just before the s**t hit the fan, a few weeks ago.
Still, what I am observing right now, in the stock market, is something like a
latent, barely contained energy, which just seeks an opportunity to engage
with. Investors are really playing the game. Fear, which I could observe two weeks
ago, has almost vanished from the market. Once again, there is human energy to
exploit positively.
There is energy in people, but it is being locked down, with the
pandemic around. The big challenge is to restart it. Right now, many folks are losing
their jobs, and their small businesses. It is important to create substantial
hope, i.e. hope which can be turned into action. Here comes my third experience,
which is that of preparing a business plan for an environmental project, which
I provisionally call Energy Ponds (see Bloody
hard to make a strategy and The
collective archetype of striking good deals in exports for latest developments).
As I prepare that business plan, I keep returning to the conclusion that I need
some sort of financial scheme for situations when a local community, willing to
implement the technology I propose, is short of capital and needs to sort of
squeeze money out of the surrounding landscape.
Those
three experiences of mine, taken together, lead me back to something I studied
3 years ago, when I was taking my first, toddler’s steps in scientific
blogging: the early days of the Bitcoin. Today, the Bitcoin is the big, sleek
predator of financial markets, yet most people have forgotten how that thing
was born. It was an idea
for safe financial transactions, based on an otherwise old concept of
financial law called ‘endorsement of debt’, implemented in the second year of
the big financial crisis, i.e. in 2009, to give some liquidity to small
networks of just as small local businesses. Initially, for more than its first 18 months
of existence, the Bitcoin was a closed system of exchange, without any
interface with any established currency. As far as I know, it very much saved
the day for many small businesses, and I want to study the pattern of success,
so as to see how it can be reproduced today for restarting business in the context
of pandemic.
Before
I go analytical, two general remarks. Firstly, there are plenty of folks who
pretend to have the magical recipe for the present s**t we are waist-deep in. I
start from the assumption that we have no fresh, general experience of
pandemics, and pretending to have figured the best way out is sheer bullshit.
Still, we need to explore and to experiment, and this is very much the spirit I
pursue.
Secondly,
the Bitcoin is a cryptocurrency, based on the technology known as
Blockchain. What I want to take away is the concept of a virtual financial
instrument focused on liquidity, rather than the technology strictly speaking. Of
course, platforms such as Ethereum can be
used for the purpose I intend to get across, here below, still they are just an
instrumental option.
Three
years ago, I used data from https://www.quandl.com/collections/markets/bitcoin-data,
which contains the mathematical early story
of what has grown, since, into the father of all cryptocurrencies, the Bitcoin.
I am reproducing this story now, so as to grasp a pattern. Let’s waltz. I am focusing
on the period during which the Bitcoin started, progressively acquired exchangeable
value against the US dollar, and finished more or less at 1:1 par therewith.
That period stretches from January 3rd, 2009 until February 10th,
2011. You can download the exact dataset I work with, in the Excel format, from
this link:
Before
I present my take on that early Bitcoin story, a few methodological remarks.
The data I took originally contains the following variables: i) total number of
Bitcoins mined, ii) days destroyed non-cumulative,
iii) Bitcoin number of unique addresses used per day, and iv) market
capitalization of the Bitcoin in USD. On the basis of these variables, I calculated
a few others. Still, I want to explain the meaning of those original ones. As
you might know, Bitcoins were initially mined (and they still are, in fact),
i.e. you could generate 1 BTC if you solved a mathematical riddle. In
other words, the value you had to bring to the table in order to have 1 BTC was
your programming wit plus the computational power of your hardware. Over time, computational
power came to prevail more and more. The first original variable, i.e. the total
number of Bitcoins mined, is informative about the total real economic value
(computational power) brought to the network by successive agents joining it.
Here
comes the first moment of bridging between the early Bitcoin and the present
situation. If I want to create some kind of virtual financial system to restart,
or just give some spin to local economies, I need a real economic value as gauge
and benchmark. In the case of Bitcoin, it was computational power. Question: what
kind of real economic value is significant enough, right now, to become the tool
for mining the new, hypothetical virtual currency? Good question, which I
don’t even pretend to have a ready-made answer to, and which I want to ponder
carefully.
The
variable ‘days destroyed non-cumulative’ refers to the fact that Bitcoins
are crypto-coins, i.e. each Bitcoin has a unique signature, and it includes the
date of the last transaction made. If I hold 1 BTC for 2 days, and put it in circulation
on the 3rd day, on the very same 3rd day I destroy 2 days
of Bitcoins. If I hold 5 Bitcoins for 7 days, and kick them back into market on
the 8th day, I destroy, on that 8th day, 5*7 = 35 days.
The more days of Bitcoin I destroy on the given day of transactions, the more I
had been accumulating. John Maynard Keynes argued that a true currency is used
both for paying and for saving. The emergence of accumulation is important in the
shaping of new financial instruments. It shows that market participants
start perceiving the financial instrument in question as trustworthy enough to transport
economic value over time. Note: this variable can take values, like days =
1500, which seem absurd at first sight. How can you destroy 1500 days in a
currency born some 200 days ago? You can, if you destroy more than one Bitcoin,
held for at least 1 day, per day.
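The bookkeeping behind that metric is simple enough to sketch in a few lines of code. The figures below are just the illustrative cases from the paragraph above, not real blockchain data:

```python
# A minimal sketch of the 'days destroyed' metric described above.
# Each spend is a pair: (amount of BTC, number of days it was held).

def days_destroyed(spends):
    """Total Bitcoin-days destroyed by one day's transactions."""
    return sum(amount * days_held for amount, days_held in spends)

# Holding 1 BTC for 2 days and spending it on day 3 destroys 2 days:
print(days_destroyed([(1, 2)]))     # 2
# Holding 5 BTC for 7 days and spending them on day 8 destroys 35 days:
print(days_destroyed([(5, 7)]))     # 35
# A seemingly absurd value, e.g. 1500 days in a 200-day-old currency,
# just means several coins, each held for a while, spent the same day:
print(days_destroyed([(10, 150)]))  # 1500
```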
The
third original variable, namely ‘Bitcoin number of unique addresses used per
day’, can be interpreted as the number of players in the game. When
you trade Bitcoins, you connect to a network, you have a unique address in that
network, and your address appears in the cumulative signature that each of the
Bitcoins you mine or use drags with it.
With
those three original variables, I calculate a few coefficients of mine. Firstly,
I divide the total number of Bitcoins mined by the number of unique addresses,
on each day separately, and thus I obtain the average number of Bitcoins
held, on that specific day, by one average participant in the network. Secondly,
I divide the non-cumulative number of days destroyed, on the given day, by the total
number of Bitcoins mined and present in the market. The resulting quotient is
the average number of days 1 Bitcoin has been held for.
The
‘market capitalization of the Bitcoin in USD’, provided in the original
dataset from https://www.quandl.com/collections/markets/bitcoin-data,
is, from my point of view, an instrumental variable. When it becomes non-null,
it shows that the Bitcoin acquired an exchangeable value against the US dollar.
I divide that market capitalization by the total number of Bitcoins mined, and
thus I get the average exchange rate of Bitcoin against USD.
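Put together, the three derived coefficients are simple quotients of the original variables. A sketch, with illustrative numbers taken from the early-2009 figures quoted further below:

```python
# Daily coefficients derived from the original variables, as described above.

def derived_metrics(total_btc_mined, unique_addresses,
                    days_destroyed, market_cap_usd):
    btc_per_player = total_btc_mined / unique_addresses
    avg_days_held = days_destroyed / total_btc_mined
    # An exchange rate exists only once market capitalization is non-null:
    usd_per_btc = (market_cap_usd / total_btc_mined) if market_cap_usd else None
    return btc_per_player, avg_days_held, usd_per_btc

# January 11th, 2009: 7600 BTC mined, 106 players, no days destroyed yet,
# and no market capitalization in USD.
per_player, days_held, rate = derived_metrics(7600, 106, 0, 0)
print(round(per_player, 1), days_held, rate)  # 71.7 0.0 None
```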
I
can distinguish four phases in that early history of the Bitcoin. The first one
is the launch, which seems to have taken 6 days, from January 3rd,
2009 to January 8th, 2009. There were practically no players, i.e.
no exchange transactions, and the number of Bitcoins mined was constant, equal
to 50. The early growth starts on January 9th, 2009, and lasts
just 3 days, until January 11th, 2009. The number of Bitcoins
mined grows, from 50 to 7600. The number of players in the game grows as well, from
14 to 106. No player destroys any days, in this phase. Each Bitcoin mined is
instantaneously put in circulation. The average amount of Bitcoins per player
evolves from 50/14 = 3,57 to 7600/106 = 71,7.
On
January 12th, 2009, something changes: participants in the network
start (timidly) to hold their Bitcoins for at least one day. This is how the phase
of accelerating growth starts, and will last for 581 days, until August
16th, 2010. On the next day, August 17th, the first Bitcoins
will get exchanged against US dollars. On that path of accelerating growth, the
total number of Bitcoins mined passes from 7600 to 3 737 700, and the daily number
of players in the network passes from an average around 106 to about 500 a day.
By the end of this phase, the average amount of Bitcoins per player reaches 7475,4.
Speculative positions (i.e. propensity to save Bitcoins for later) grow, up to an
average of about 1500 days destroyed per address.
Finally,
the fourth stage of evolution is reached: entry into the financial market, when
we pass from 1 BTC = $0,08 to 1 BTC = $1. This transition from any exchange
rate at all to being at par with the dollar takes 189 days, from August 17th,
2010 until February 10th, 2011. The total number of Bitcoins grows at
a surprisingly steady rate, from 3 737 700 to about 5 300 000, whilst
the number of players triples, from about 500 to about 1 500. Interestingly,
in this phase, the average amount of Bitcoins per player decreases, from 7475,4
to 3 533,33. Speculative positions grow steadily, from about 1500 days destroyed
per address to some 2 400 days per address.
Below,
you will find graphs with a bird’s-eye view of the whole infancy of the Bitcoin.
Further below, after the graphs, I try to give some closure, i.e. to guess what
we can learn from that story, so as to replicate it, possibly, amid the
COVID-19 crisis.
My
first general conclusion is that the total number of Bitcoins mined is the only
variable, among those studied, which shows a steady, quasi-linear trend of
growth. It is not really exponential, more sort of a power function. The total
number of Bitcoins mined corresponds, in the early spirit of this
cryptocurrency, to the total computational power brought to the game by its
participants. The real economic value pumped into the new concept was
growing steadily, linearly, and to an economist, such as I am, it suggests the
presence of exogenous forces at play. In other words, the early Bitcoin was
not growing by itself, through sheer enthusiasm of its early partisans. It was
growing because some people saw real value in that thing and kept bringing
assets to the line. It is important in the present context. If we want to use
something similar to power the flywheels of local markets under the COVID-19
restrictions, we need some people to bring real, productive assets to the game,
and thus we need to know what those key assets should be. Maybe the capacity to
supply medical materials, combined with R&D potential in biotech and 3D
printing? These are just loose thoughts, as I observe the way that events are unfolding.
My
second conclusion is that everything else I have just studied is very swingy
and very experimental. The first behavioural transition I can see is
that of a relatively small number of initial players experimenting with using
whatever assets they bring to the table in order to generate a growing number
of new tokens of virtual currency. The
first 7–8 months of the Bitcoin show the marks of such experimentation. There
comes a moment, when instead of playing big games in a small, select network,
the thing spills over into a larger population of participants. What attracts
those new ones? As I see it, the attractive force consists in relatively predictable
rules of the game: ‘if I bring X $mln of assets to the game, I will have
Y tokens of the new virtual currency’, something like that.
Hence,
what creates propitious conditions for acquiring exchangeable value in the new
virtual currency against the established ones, is a combination of steady
inflow of assets, and crystallization of predictable rules to use them in that
specific scheme.
I
can also see that people started saving Bitcoins before these had any value in
dollars. It suggests that even in a closed system, without openings to other
financial markets, a virtual currency can start giving to its holders a sense
of economic value. Interesting.
I keep philosophizing
about the current situation, and I try to coin up a story in my mind, a story meaningful
enough to carry me through the weeks and months to come. I try to figure out a strategy
for future investment, and, in order to do that, I am doing that thing called ‘strategic
assessment of the market’.
Now, seriously, I am profiting
from that moment of forced reclusion (in Poland we have just had compulsory sheltering
at home introduced, as law) to work a bit on my science, more specifically on
the application of artificial neural networks to simulate collective intelligence
in human societies. As I have been sending around draft papers on the topic, to
various scientific journals (here
you have a sample of what I wrote on the topic), I have encountered something like a pretty uniform
logic of constructive criticism. One of the main lines of reasoning in that
logic goes like: ‘Man, it is interesting what you write. Yet, it would be
equally interesting to explain what you mean exactly by collective
intelligence. How does it or doesn’t it rhyme with individual intelligence? How
does it connect with culture?’.
Good question, truly a
good one. It is the question that I have been asking myself for months, since I
discovered my fascination with the way that simple neural networks work. At the
time, I observed intelligent behaviour in a set of four equations, put back to
back in a looping sequence, and it was a ground-breaking experience for me. As
I am trying to answer this question, my intuitive path is that of distinction
between collective intelligence and the individual one. Once again (see The
games we play with what has no brains at all ), I go back to William James’s
‘Essays in Radical Empiricism’, and to his take on
the relation between reality and our mind. In Essay I, entitled ‘Does Consciousness
Exist?’, he goes: “My thesis is that if we start with the supposition
that there is only one primal stuff or material in the world, a stuff of which
everything is composed, and if we call that stuff ‘pure experience,’ then
knowing can easily be explained as a particular sort of relation towards one
another into which portions of pure experience may enter. The relation itself
is a part of pure experience; one of its ‘terms’ becomes the subject or bearer
of the knowledge, the knower, the other becomes the object known. […] Just
so, I maintain, does a given undivided portion of experience, taken in one
context of associates, play the part of a knower, of a state of mind, of
‘consciousness’; while in a different context the same undivided bit of
experience plays the part of a thing known, of an objective ‘content.’ In a
word, in one group it figures as a thought, in another group as a thing. And,
since it can figure in both groups simultaneously, we have every right to speak
of it as subjective and objective both at once.”
Here it is, my
distinction. Right, it is partly William James’s distinction. Anyway, individual
intelligence is almost entirely mediated by conscious experience of reality,
which is representation thereof, not reality as such. Individual intelligence
is based on individual representation of reality. By opposition, my take on collective
intelligence is based on the theory of adaptive walk in rugged landscape, a
theory used both in evolutionary biology and in the programming of artificial
intelligence. I define collective intelligence as the capacity to run constant
experimentation across many social entities (persons, groups, cultures,
technologies etc.), as regards the capacity of those entities to achieve a vector
of desired social outcomes.
The expression ‘vector
of desired social outcomes’ sounds like something invented by a philosopher
and mathematician, together, after a strong intake of strong spirits. I am
supposed to be simple in getting my ideas across, and thus I am translating
that expression into something simpler. As individuals, we are after something.
We have values that we pursue, and that pursuit helps us make it through each
consecutive day. Now, there is a question: do we have collective values that we
pursue as a society? Interesting question. Bernard Bosanquet, the British
philosopher who wrote ‘The
Philosophical Theory of The State’[1],
claimed very sharply that individual desires and values hardly translate into
collective, state-wide values and goals to pursue. He claimed that entire societies
are fundamentally unable to want anything, they can just be objectively after
something. The collective being after something is essentially non-emotional
and non-intentional. It is something like a collective archetype, occurring at
the individual level somewhere below the level of consciousness, in the
collective unconscious, which mediates between conscious individual intelligence
and the external stuff of reality, to use William James’ expression.
How to figure out what
outcomes are we after, as a society? This is precisely, for the time being, the
central axis of my research involving neural networks. I take a set of
empirical observations about a society, e.g. a set of country-year observation
of 30 countries across 40 quantitative variables. Those empirical observations
are the closest I can get to the stuff of reality. I make a simple neural
network supposed to simulate the way a society works. The simpler this network
is, the better. Each additional component of complexity requires making ever
stronger assumptions about the way societies work. I use that network as
a simple robot. I tell the robot: ‘Take one variable from among those 40 in
the source dataset. Make it your output variable, i.e. the desired outcome of
collective existence. Treat the remaining 39 variables as input, instrumental
to achieving that outcome’. I make 40
such robots, and each of them produces a set of numbers, which is like a mutation
of the original empirical dataset, and I can assess the similarity between each
such mutation and the source empirical stuff. I do it by calculating the Euclidean distance
between vectors of mean values, respectively in each such clone and the
original data. Other methods can be used, e.g. kernel functions.
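A bare-bones version of that procedure can be sketched as follows. The dataset here is random stand-in data (not Penn Tables), and the single tanh neuron is the simplest possible stand-in for one ‘robot’; my actual experiments use richer architectures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the empirical dataset: 30 standardized
# country-year observations across 5 variables (the text uses 40).
data = rng.normal(size=(30, 5))

def clone_dataset(data, out_col, epochs=200, lr=0.01):
    """One 'robot': treat column out_col as the desired collective outcome,
    the remaining columns as input, and let a single tanh neuron regenerate
    the outcome column, producing a mutated clone of the dataset."""
    X = np.delete(data, out_col, axis=1)
    y = data[:, out_col]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = np.tanh(X @ w)
        grad = X.T @ ((pred - y) * (1.0 - pred ** 2))  # gradient of the MSE
        w -= lr * grad
    clone = data.copy()
    clone[:, out_col] = np.tanh(X @ w)
    return clone

# Euclidean distance between the mean vectors of each clone and the source:
distances = {col: np.linalg.norm(clone_dataset(data, col).mean(axis=0)
                                 - data.mean(axis=0))
             for col in range(data.shape[1])}

# The variable whose clone sits closest to the empirical data is the best
# candidate for a collectively 'desired outcome'.
best = min(distances, key=distances.get)
print(best, round(distances[best], 4))
```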
I worked that method
through with various empirical datasets, and my preferred one, for now, is Penn World Table 9.1 (Feenstra
et al. 2015[2]),
which is a pretty comprehensive overview of macroeconomic variables across the
planetary board. The detailed results of my research vary, depending on the
exact set of variables I take into account, and on the set of observations I
select, still there is a tentative conclusion that emerges: as a set of national
societies, living in separate countries on that crazy piece of rock, speeding through
cosmic space with no roof whatsoever, just with the air conditioning on, we are mostly
after terms of trade, and after the way we work, the way we prepare for work, and the
way we remunerate work. Numerical robots which I program to optimize variables such
as average price in exports, the share of labour compensation in Gross National
Income, the average number of hours worked per year per person, or the number
of years spent in education before starting professional activity: all these
tend to win the race for similarity to the source empirical data. These seem to
be the desired outcomes that our human collective intelligence seems to be
after.
Is it of any help
regarding the present tough s**t we are waist deep in? If my intuitions are
true, whatever we will do regarding the COVID-19 pandemic, will be based on an
evolutionary, adaptive choice. Path #1 consists in collectively optimizing those
outcomes, whilst trying to deal with the pandemic, and dealing with the pandemic
will be instrumental to, for example, the deals we strike in international
trade, and to the average number of hours worked per person per year. An
alternative Path #2 means to reshuffle our priorities completely and reorganize
so as to pursue completely different goals. Which one are we going to take?
Good question, very much about guessing rather than forecasting. Historical
facts indicate that so far, as a civilization, we have been rather slow out of
the gate. Change in collectively pursued values has occurred slowly,
progressively, at the pace of generations rather than press conferences.
In parallel to doing
research on collective intelligence, I am working on a business plan for the
project I named ‘Energy Ponds’ (see, for example: Bloody
hard to make a strategy). I have done some market research down this specific
avenue of my intellectual walk, and here below I am giving a raw account of
progress therein.
The study of market environment for
the Energy Ponds project is pegged on one central characteristic of the
technology, which will be eventually developed: the amount of electricity
possible to produce in the structure based on ram pumps and relatively small
hydroelectric turbines. Will this amount be sufficient just to supply energy to
a small neighbouring community, or will it be enough to be sold wholesale,
via auctions and deals with grid operators? In other words, is Energy
Ponds a viable concept just for the off-grid installations or is it scalable up
to facility size?
There are examples of small
hydropower installations, which connect to big power grids in order to exploit
incidental price opportunities (Kusakana
2019[3]).
That basic question kept in mind, it
is worth studying both the off-grid market for hydroelectricity, as well as the
wholesale, on-grid market. Market research for Energy Ponds starts, in the
first subsection below, with a general, global take on the geographical
distribution of the main factors, both environmental and socio-economic. The
next sections study characteristic types of markets.
Overview of environmental and socio-economic
factors
Quantitative investigation starts
with the identification of countries, where hydrological conditions are
favourable to implementation of Energy Ponds, namely where significant water
stress is accompanied by relatively abundant precipitations. More specifically,
this stage of analysis comprises two steps. In the first place, countries with
significant water stress are identified[4],
and then each of them is checked as for the amount of precipitations[5],
hence the amount of rainwater possible to collect.
Two remarks are worth formulating at
this point. Firstly, in the case of big countries, such as China or United
States, covering both swamps and deserts, the target locations for Energy Ponds
would be rather regions than countries as a whole. Secondly, and maybe a bit
counterintuitively, water stress is not a strict function of precipitations.
When studied in 2014, with the above-referenced data from the World Bank, water
stress is Pearson-correlated with precipitations just at r
= -0,257817141.
Water stress and precipitations have
very different distributions across the set of countries reported in the World
Bank’s database. Water stress strongly varies across space, and displays a
variability (i.e. its standard deviation divided by its mean value)
of v = 3,36. Precipitations are distributed much more evenly,
with a variability of v = 0,68. With that in mind, further
categorization of countries as potential markets for the implementation of
Energy Ponds has been conducted with the assumption that significant water
stress is above the median value observed, thus above 14,306296%. As
for precipitations, a cautious assumption, prone to subsequent revision, is
that sufficient rainfall for sustaining a structure such as Energy Ponds
is above the residual difference between mean rainfall observed and its
standard deviation, thus above 366,38 mm per year.
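As a sketch, that two-threshold selection can be written down in a few lines. The country figures below are hypothetical placeholders, not the actual World Bank records:

```python
import statistics

# Hypothetical excerpt of the two World Bank indicators used above:
# country -> (water stress in %, precipitation in mm per year)
indicators = {
    "Kenya":  (33.2, 630.0),
    "Norway": (0.8, 1414.0),
    "Egypt":  (117.0, 18.0),
    "Poland": (29.7, 600.0),
}

stress = [s for s, _ in indicators.values()]
rain = [r for _, r in indicators.values()]

# Thresholds as defined in the text: water stress above the median,
# precipitation above (mean - standard deviation).
stress_cut = statistics.median(stress)
rain_cut = statistics.mean(rain) - statistics.stdev(rain)

selected = [country for country, (s, r) in indicators.items()
            if s > stress_cut and r > rain_cut]
print(selected)  # ['Kenya']
```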
That first selection led to focusing
further analysis on 40 countries, namely: Kenya, Haiti, Maldives,
Mauritania, Portugal, Thailand, Greece, Denmark, Netherlands, Puerto Rico,
Estonia, United States, France, Czech Republic, Mexico, Zimbabwe, Philippines,
Mauritius, Turkey, Japan, China, Singapore, Lebanon, Sri Lanka, Cyprus, Poland,
Bulgaria, Germany, South Africa, Dominican Republic, Kyrgyz Republic, Malta,
India, Italy, Spain, Azerbaijan, Belgium, Korea, Rep., Armenia, Tajikistan.
Further investigation focused on
describing those 40 countries from the standpoint of the essential benefits
inherent to the concept of Energy Ponds: prevention of droughts and floods on
the one hand, with the production of electricity being the other positive
outcome. The variable published by the World Bank under the heading of ‘Droughts,
floods, extreme temperatures (% of population, average 1990-2009)’[6]
has been taken both as such, and multiplied by the headcount of population.
In the first case, the relative importance of extreme weather phenomena for
local populations is measured. When recalculated into the national headcount of
people touched by extreme weather, this metric highlights the geographical
distribution of the aggregate benefits possibly derived from adaptive
resilience vis-à-vis such events.
Below, both metrics, i.e. the percentage and the headcount of population, are shown as maps. The percentage of population touched by extreme weather conditions is much more evenly distributed than its absolute headcount. In general, Asian countries seem to absorb most of the adverse outcomes resulting from climate change. Outside Asia, and, of course, within the initially selected set of 40 countries, Kenya seems to be the most exposed.
Another possible take on the
socio-economic environment for developing Energy Ponds is the strictly business
one. Prices of electricity, together with the sheer quantity of electricity
consumed are the chief coordinates in this approach. Prices of electricity have
been reported as retail prices for households, as Energy Ponds are very likely
to be an off-grid local supplier. Sources of information used in this case are
varied: EUROSTAT data has been used as regards prices in European countries[1]
and they are generally relevant for 2019. For other countries sites such as
STATISTA or www.globalpetrolprices.com have been used, and most of them
are relevant for 2018. These prices are national averages across different
types of contracts.
The size of electricity markets has been measured in two steps, starting with consumption of electricity per capita, as published by the World Bank[2], which has been multiplied by the headcount of population. Figures below give a graphical idea of the results. In general, there seems to be a trade-off between price and quantity, almost as in the classical demand function. The biggest markets of electricity, such as China or the United States, display relatively low prices. Markets with high prices are comparatively much smaller in terms of quantity. An interesting insight has been found when prices of electricity have been compared with the percentage of population with access to electricity, as published by the World Bank[3]. In such a comparison, we can see interesting outliers: Haiti, Kenya, India, and Zimbabwe. These are countries burdened with significant limitations as regards access to electricity. In these locations, projects such as Energy Ponds can possibly produce entirely new energy sources for local populations.
The possible implementation of
Energy Ponds can take place in very different socio-economic environments. It
is worth studying those environments as idiosyncratic types. Further below, the
following types and cases are studied more in detail:
- Type ‘Large, cheap market with a lot of environmental outcomes’: China, India >> low price of electricity, locally limited access to electricity, prevention of droughts and floods;
- Type ‘Small or medium-sized, developed European economy with high prices of electricity and a relatively small market’;
- Special case, the United States: ‘Large, moderately priced market, with moderate environmental outcomes’ >> moderate price of electricity, possibility to go off-grid with Energy Ponds, prevention of droughts and floods;
- Special case, Kenya >> quite low access to electricity (63%) and moderately high retail price of electricity ($0,22/kWh), big population affected by droughts and floods; Energy Ponds can increase access to electricity.
Table 1, further below, exemplifies
the basic metrics of a hypothetical installation of Energy Ponds, in specific
locations representative for the above-mentioned types and special cases. These
metrics are:
Discharge (of water) in m3 per second, in selected riverain locations. Each
type among those above is illustrated with a few specific, actual geographical
spots. The central assumption at this stage is that a local installation of
Energy Ponds abstracts 20% of the flow per second in the river. Of course,
should a given location be selected for more in-depth a study, specific
hydrological conditions have to be taken into account, and the 20%-assumption
might be revised upwards or downwards.
Electric power to expect with the given abstraction of water. That power has been calculated under the assumption that an average ram pump can create an elevation, thus a hydraulic head, of about 20 metres. There are more powerful ram pumps (see for example: https://www.allspeeds.co.uk/hydraulic-ram-pump/ ), yet 20 metres is a safely achievable head to assume without precise knowledge of environmental conditions in a given location. Given that 20-metre head, the basic equation to calculate electric power in watts is:

[Flow per second, in m³, abstracted as 20% of the local river's flow] x 20 [head in metres, obtained by ram pumping] x 9,81 [gravitational acceleration, in m/s²] x 75% [average efficiency of hydroelectric turbines]
Financial results to expect from the sales of electricity. Those results are calculated on the basis of two empirical variables: the retail price of electricity, referenced as mentioned earlier in this chapter, and the LCOE (Levelized Cost Of Energy). The latter is sourced from a report by the International Renewable Energy Agency (IRENA 2019[1]), and provisionally pegged at $0,05 per kWh. This is a global average, and in this context it plays the role of a simplifying assumption, which, in turn, allows direct comparison of various socio-economic contexts. Of course, each specific location for Energy Ponds bears a specific LCOE in the phase of implementation. With those two source variables, two financial metrics are calculated:

Revenues from the sales of electricity, as: [Electric power in kilowatts] x [8760 hours in a year] x [Local retail price for households per 1 kWh];

Margin generated over the LCOE, equal to: [Electric power in kilowatts] x [8760 hours in a year] x {[Local retail price for households per 1 kWh] – $0,05}.
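The whole calculation above can be chained into a short computational sketch. The retail price of $0,08 per kWh used in the example is an illustrative assumption of mine (it happens to reproduce the Changde, Yangtze River row of Table 1), not an official tariff:

```python
# Sketch of the Energy Ponds metrics pipeline, with the assumptions stated in the text:
# 20-metre head from ram pumping, g = 9.81 m/s^2, 75% turbine efficiency,
# LCOE pegged at $0.05 per kWh. The $0.08/kWh retail price is an illustrative
# assumption consistent with the Changde (Yangtze River) row of Table 1.

HEAD_M = 20.0          # hydraulic head created by ram pumping, in metres
G = 9.81               # gravitational acceleration, m/s^2
EFFICIENCY = 0.75      # average efficiency of hydroelectric turbines
HOURS_PER_YEAR = 8760
LCOE_USD = 0.05        # levelized cost of energy, $ per kWh (IRENA 2019 global average)

def energy_ponds_metrics(flow_m3_s: float, retail_price_kwh: float):
    """Return (power in kW, annual energy in kWh, annual revenue, margin over LCOE)."""
    power_kw = flow_m3_s * HEAD_M * G * EFFICIENCY / 1000.0
    annual_kwh = power_kw * HOURS_PER_YEAR
    revenue = annual_kwh * retail_price_kwh
    margin = annual_kwh * (retail_price_kwh - LCOE_USD)
    return power_kw, annual_kwh, revenue, margin

# Changde, Yangtze River: 2400 m3/s abstracted, assumed $0.08 per kWh retail price
power_kw, annual_kwh, revenue, margin = energy_ponds_metrics(2400, 0.08)
print(round(power_kw, 2), round(annual_kwh, 1), round(revenue, 2), round(margin, 2))
# prints: 353.16 3093681.6 247494.53 92810.45
```

The same function, fed with another flow and another local retail price, yields any other row of the table.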
Table 1

Country | Location (flow per second, with 20% abstraction from the river) | Electric power generated with 20% of abstraction from the river (energy for sale) | Annual revenue (annual margin over LCOE)
China | Near Xiamen, Jiulong River (26 636,23 m³/s) | 783,9 kW (6 867 006,38 kWh a year) | $549 360,51 ($206 010,19)
China | Near Changde, Yangtze River (2 400 m³/s) | 353,16 kW (3 093 681,60 kWh a year) | $247 494,53 ($92 810,45)
India | North of Rajahmundry, Godavari River (701 m³/s) | 103,15 kW (903 612,83 kWh a year) | $54 216,77 ($9 036,13)
India | Ganges River near Patna (2 400 m³/s) | 353,16 kW (3 093 681,60 kWh a year) | $185 620,90 ($30 936,82)
Portugal | Near Lisbon, Tagus River (100 m³/s) | 14,72 kW (128 903,40 kWh a year) | €27 765,79 (€22 029,59)
Germany | Elbe River between Magdeburg and Dresden (174 m³/s) | 25,6 kW (224 291,92 kWh a year) | €68 252,03 (€58 271,04)
Poland | Vistula between Krakow and Sandomierz (89,8 m³/s) | 13,21 kW (115 755,25 kWh a year) | €18 234,93 (€13 083,82)
France | Rhone River, south of Lyon (3 400 m³/s) | 500,31 kW (4 382 715,60 kWh a year) | €773 549,30 (€582 901,17)
United States, California | San Joaquin River (28,8 m³/s) | 4,238 kW (37 124,18 kWh a year) | $7 387,71 ($5 531,50)
United States, Texas | Colorado River, near Barton Creek (100 m³/s) | 14,72 kW (128 903,40 kWh a year) | $14 643,43 ($8 198,26)
United States, South Carolina | Tennessee River, near Florence (399 m³/s) | 58,8 kW (515 097,99 kWh a year) | $66 499,15 ($40 744,25)
Kenya | Nile River, by Lake Victoria (400 m³/s) | 58,86 kW (515 613,6 kWh a year) | $113 435 ($87 654,31)
Kenya | Tana River, near Kibusu (81 m³/s) | 11,92 kW (104 411,75 kWh a year) | $22 970,59 ($17 750)
China and India are grouped in the same category for two reasons. Firstly, because of the proportion between the size of their markets for electricity and the pricing thereof: these are huge markets in terms of quantity, yet very frugal in terms of price per 1 kWh. Secondly, these two countries seem to represent the bulk of the populations globally affected by damage from droughts and floods. Should the implementation of Energy Ponds be successful in these countries, i.e. should water management significantly improve as a result, environmental benefits would play a significant socio-economic role.
With those similarities in mind, China and India display significant differences in both environmental conditions and economic context. China hosts powerful rivers, with very high flow per second. This creates an opportunity, and a challenge. The amount of water possible to abstract from those rivers through ram pumping, and the corresponding electric power possible to generate, are the opportunity. Yet ram pumps, as they are manufactured now, are mostly small-scale equipment. Creating ram-pumping systems able to abstract significant amounts of water from Chinese rivers, in the Energy Ponds scheme, is a technological challenge in itself, which would require specific R&D work.
That said, China is already
implementing a nation-wide programme of water management, called ‘Sponge
Cities’, which shows some affinity to the Energy Ponds concept. Water
management in relatively small, network-like structures, seems to have a
favourable economic and political climate in China, and that climate translates
into billions of dollars in investment capital.
India is different in these
respects. Indian rivers, at least in floodplains, where Energy Ponds can be
located, are relatively slow, in terms of flow per second, as compared to China.
Whilst Energy Ponds are easier to implement technologically in such conditions,
the corresponding amount of electricity is modest. India seems to be driven
towards financing projects of water management as big dams, or as local
preservation of wetlands. Nothing like the Chinese ‘Sponge Cities’ programme
seems to be emerging, to the author’s best knowledge.
European countries form quite a homogenous class of possible locations for Energy Ponds. Retail prices of electricity for households are generally high, whilst the river system is dense and quite disparate in terms of flow per second. In the case of most European rivers, flow per second is low or moderate; still, the biggest rivers, such as the Rhine or the Rhone, offer technological challenges similar to those in China, in terms of the volume required in ram pumping.
As regards the Energy Ponds business concept, the United States seem to be a market in their own right. Local populations are exposed to a moderate (although growing) impact of droughts and floods, whilst they consume big amounts of electricity, both in aggregate and per capita. Retail prices of electricity for households are noticeably disparate from state to state, although generally lower than those practiced in Europe[2]. Prices range from less than $0,1 per 1 kWh in Louisiana, Arkansas or Washington, up to $0,21 in Connecticut. It is worth noting that, with respect to prices of electricity, the state of Hawaii stands out, with more than $0,3 per 1 kWh.
The United States offer quite a
favourable environment for private investment in renewable sources of energy,
still largely devoid of systematic public incentives. It is a market of multiple,
different ecosystems, and all ranges of flow in local rivers.
[1] IRENA (2019), Renewable Power Generation Costs in 2018, International Renewable Energy Agency, Abu Dhabi, ISBN 978-92-9260-126-3

[1] Bosanquet, B. (1920), The Philosophical Theory of the State (Vol. 5), Macmillan and Company, Limited

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), ‘The Next Generation of the Penn World Table’, American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

[3] Kusakana, K. (2019), ‘Optimal electricity cost minimization of a grid-interactive Pumped Hydro Storage using ground water in a dynamic electricity pricing environment’, Energy Reports, 5, 159-169
Life can be full of surprises, like
really. When I was writing my last update, on March 7th and 8th,
the one entitled ‘Lettres
de la zone rouge’, I was already writing about the effects of coronavirus
in the stock market. Yet, it was just sort of an external panic, back then.
External to me, I mean. Now, I am in, as we all are in Europe. Now, more than
ever before, I use blogging, i.e. writing and publishing content, as a device
for putting order in my own thoughts.
At the university, I had to switch
to online teaching, and I am putting a lot of myself into preparing good stuff
for students. By the way, you can assess the quality of my material by
yourself. I have two lectures on Vimeo, in a course entitled
‘Fundamentals of Finance’. Both are password-locked and the password is ‘akademia’.
Pay attention to the ‘k’. Not ‘academia’, but ‘akademia’. Lecture 1 is
available at https://vimeo.com/398464552 and Lecture 2 fires
on
I can’t help
philosophizing. I should be focusing, in my blogging, on melting, hammering,
and hardening my investment strategy in the stock market. Yet, financial
markets are like an endocrine system, and given the way those hormones just
fountain, right now, I am truly interested in studying the way the whole
organism works. According to the personal strategy of writing and publishing,
which I laid out in the update entitled ‘Back in the game’, as well as those
which followed, since February 10th, 2020, I should be using my blog
mostly for writing about strategies to apply for investment in the stock
market. Still, life can be surprising, and it is being bloody surprising right
now. There is a thin line between consistency and obstinacy, and I want to keep
walking on its consistency side. In order to coin a sensible strategy for investment, I need to understand the socio-economic environment: this is
elementary stuff which I teach my students in the first year. Besides, as I
observe myself right now, I think I have some affinities with some squids and
octopuses: when I sense serious cognitive dissonance coming my way, I release a
cloud of ink. Just in case.
When I go deep into
thinking, I like starting from what I perceive as my most immediate experience.
Now, my most immediate experience consists in observing my own behaviour and
the behaviour of other people. On Tuesday the 17th, I recorded those
two video lectures, and I had to go to the campus of my university, where we
have a state-of-the-art recording facility. I was cycling through the nearly empty city, and memories popped up. I remember the late 1970s, when I was a little kid living in communist Poland. When I walked the streets back then, they were similarly empty. It is only now, when human traffic in the streets has gone down to like 5% of what it used to be until recently, that I realize how much more mobile and interactive a society we have become, in Poland, since that communist past.
I am thinking about the way we, humans, adapt to new circumstances. How is social mobility, even that most immediately observable daily traffic, connected to the structure of our social life? How is my GDP per capita – I mean, it is per capita, and thus I can say it is my per capita – related to the number of pedestrians per hour per square kilometre out there, in the streets? My most immediate experience of
street traffic is that of human energy, and the intensity of its occurrence. It
looks as if the number of human steps on the pavement, together with the stream
of vehicles, manifested an underlying flow of some raw, hardly individuated at
all, social force. What is the link between this raw social energy, and social
change, such as what we have experienced, all over Central Europe, since the
collapse of the Berlin wall? Well, this is precisely what I am trying to figure
out.
Now, I go deeper, as deep as William James used to go in his ‘Essays in Radical Empiricism’, first published in 1912. Human energy, out there, manifests itself both in the streets as such, and in me, in my perception. Phenomenologically, the flow of human traffic is both outside of me, and inside my mind. The collective experience is that of roaming the city and, at the same time, that of seeing other people doing it (or even knowing they keep doing it). Same for the stock market, real business, teaching etc. All those streams of human activity are both out there, as intersubjectively observable stuff, and inside my mind, as part of my consciousness.
What we do is both in
us, and out there. Social life is a collection of observable events, and a
collection of corresponding, individual experiences. My experience right now is
that of reorganizing my activity, starting with my priorities as for what to
work on. It is fully official, the Minister of Science and Higher Education has
just signed the emergency ordinance that all classes in universities are
suspended until April 10th and that we are all encouraged to take on
any form of distance learning we can use, even if it isn’t officially included
in syllabuses. Given that right after April 10th comes the Easter break, and that, realistically, classes are highly unlikely to restart afterwards, I have a lot of free time, and a lot of things to fit smoothly into that sudden freedom.
I start with making a
list. I structure my activity into 3 realms: pure science, applied science, and
professional occupation.
As for strictly scientific work, i.e. the action of discovering something, I am working on using artificial intelligence as a tool for simulating collective intelligence in human societies. I have come up with some interesting stuff, but the first exchange I had about it with publishers of scientific journals was like: ‘Look, man, it sounds interesting, but it is really foggy, and you are really breaking away from the established theory of social sciences. You need to break it down, so as to attach the theory you have in mind to the existing one. In other words: your theory is not marketable yet’. I humbly accept those criticisms; I know that good science is to be forged in such fire, and I know that science is generally about figuring out something intelligible and workable.
The concept of collective intelligence is even more interesting right now. Honestly, COVID-19 looks to me like something collectively intelligent. I know, I know: viruses don’t even have anything to be intelligent with, having no nervous system whatsoever. Still, just look. COVID-19 differs from its cousins by its very progressive way of invading its host’s body. COVID-19’s granddad, the SARS virus from 2003, was like the Dalton brothers. It would jump on its prey, all guns out, and there was no way to be asymptomatic with this f**ker. Once contaminated, you were lucky if you stayed alive. SARS 2003 was sort of self-limiting its range. COVID-19 is more like a jihadist movement: it hangs around, masking its pathogenic identity, and starts reproducing very slowly, sort of testing the immune defences of the organism, and each consecutive step of that testing can lead to ramping up the pace of reproduction.
All this virus has, as
a species, is a chain of RNA (ribonucleic acid), which is essentially
information about reproducing itself, without any information about any vital
function whatsoever. This chain is apparently quite long, as compared to other
viruses, so it takes some time to multiply itself. That time, unusually long,
allows the host’s body to develop an immune response. The mutual pacing of
reproduction in the virus, and of immune kickback in the host creates that
strange phase, when the majority of hosts act like postmen for the virus. Their
bodies allow the COVID-19 to proliferate just a little, but just enough to
become transmissible. By allowing some colonies of itself to be killed, the virus gains a new trait: it is more pervasive than deadly, and it is both at the same time. At the end of the day, COVID-19 achieves an impressive reach across
the human species. I think it will turn out, by the end of this year, that
COVID-19 is a record holder, among viruses, as for the total human biomass
infected per unit of time.
Functionally, COVID-19
looks almost like a civilisation: it is able to expand by adaptation. As I read
scientific articles on the topic of epidemics, many biologists anthropomorphise
pathogens: they write about those little monsters ‘wanting’ something, or
‘aiming’ for some purpose. Still, there is nothing in a virus that could be
wanting anything. There is no personality or culture. There is just a chain of
RNA, long enough to produce additional time in proliferation.
Let’s compare it to
human civilisation. Any human social structure started, long ago, as a small
group of hominids trying to survive in an ecosystem which allows no mistakes.
One of the first mistakes that our distant ancestors would make consisted in
killing and gathering the shit out of their immediate surroundings, and then
starving to death. Hence, they invented the nomadic pattern, i.e. moving from
one spot to another before exhausting completely the local resources. Our apish
great-grandparents were not nomadic by nature: they probably picked it from
other species they observed. Much later, more evolved hominids discovered that
nomadism could be replaced by proper management of local resources. If you
domesticate a cow, and that cow shits in the fields, it contributes to
regenerating the productive capacity of that soil, and so we can stay in one
place for longer.
Many generations later, we figured out yet another pattern. Instead of having a dozen children per woman and letting most of them die before the age of 10, we came to having fewer offspring, but taking care to bring that smaller number up,
nicely and gently, all the way to adulthood. That allows more learning within
one individual lifetime, and thus we can create a much more complex culture,
and more complex technologies. In our human evolution, we have been doing very
much what the COVID-19 virus does: we increase our own complexity, and, by the
same means, we slow down our pace of reproduction. At the end of the day,
slowing down pays off through increased range, flexibility and biomass.
My theoretical point
is that collective intelligence is something very different from the individual
one. The latter requires a brain, the former not at all. All a species needs at
the level of collective intelligence is to make an important sequence of
actions (such as the action of reproducing a long chain of nucleotides) complex
and slow enough for allowing adaptation to environmental response, in that very
sequence.
I assume I am a virus. I slow down my action so as to allow some response from outside, and to adapt to that response. It has a name: it is a game. An action involving two or more distinct agents, where each agent makes their action contingent on the action of the other(s), is a game. Let’s take a game of chess. Two players: the collective intelligence of humans vs. the collective intelligence of COVID-19. Someone could say this is a wrong representation, as human civilisation has a much more complex set of pieces than the virus has, and we can make more different moves. Really? Let’s look. How much complexity and finesse have we demonstrated so far in response to the COVID-19 pandemic? It turns out we are quite cornered: if we don’t temporarily shut down our economy, we will expose ourselves to seeing that same economy implode when the reasonably predictable 7% of the population develops acute symptoms, i.e. respiratory impairment. What we do is essentially what the virus does: we play for time, and delay the upcoming events, so as to gain some breathing space.
We can change the rules of the game. We can introduce new technologies (e.g. vaccines), which will give us more possible moves. Still, the virus can respond by mutating. The most general rules of the game we play with the virus are given by the epidemic model. I tap into the science published in 2019 by Olusegun Michael Otunuga, in the article entitled ‘Closed-form probability distribution of number of infections at a given time in a stochastic SIS epidemic model’ (Heliyon, 5(9), e02499, https://doi.org/10.1016/j.heliyon.2019.e02499 ).
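Otunuga derives a closed-form probability distribution for the stochastic version of the SIS (Susceptible-Infected-Susceptible) model; that derivation is beyond a blog post, but the deterministic skeleton of the SIS model can be sketched in a few lines. The transmission rate β and recovery rate γ below are arbitrary illustrative values, not estimates for COVID-19:

```python
# Deterministic SIS model, the skeleton underneath the stochastic version studied
# by Otunuga (2019). Individuals recover without immunity, so they return to the
# susceptible pool:  dS/dt = -beta*S*I/N + gamma*I ;  dI/dt = beta*S*I/N - gamma*I.
# Parameters are illustrative only, NOT fitted to COVID-19.

def simulate_sis(n=1000.0, i0=1.0, beta=0.3, gamma=0.1, dt=0.1, t_max=500.0):
    """Euler integration of the SIS model; returns the final number of infected."""
    s, i = n - i0, i0
    for _ in range(int(t_max / dt)):
        new_infections = beta * s * i / n * dt
        recoveries = gamma * i * dt
        s += recoveries - new_infections
        i += new_infections - recoveries
    return i

# With beta > gamma, the epidemic settles at the endemic equilibrium
# I* = N * (1 - gamma/beta); here 1000 * (1 - 1/3) = 666.7 infected.
print(round(simulate_sis(), 1))  # prints: 666.7
```

The endemic equilibrium is the formal counterpart of the ‘virus staying with us’ point made earlier: in an SIS world, the infection never burns itself out, it just stabilises.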
A crazy scientific idea comes to my mind: as we are facing a pandemic, and that pandemic deeply affects social life, I can study all of social life as concurrent pandemics: a pandemic of going to restaurants, a pandemic of making vehicles and driving them around, a pandemic of making and consuming electricity etc. COVID-19 is just one among those pandemics, and it proves competitive against them, i.e. COVID-19 prevents those other pandemics from carrying on at their normal pace.
What is the cognitive value of such an approach, besides pure intellectual entertainment? Firstly, I can use the same family of theoretical models, i.e. epidemic models, to study all those phenomena at the same time. Epidemic models have been in use in social sciences for quite some time, particularly in marketing. The diffusion of a new product, or that of a new technology, can be studied as the spreading of a new lifeform in an ecosystem. That new lifeform can be considered a candidate for being a pathogen, or a symbiont, depending on the adaptive reaction of other lifeforms involved. A new technology can both destroy older technologies and enter with them into all sorts of alliances.
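A classic instance of that epidemic-style modelling in marketing is the Bass diffusion model, where adoption of a new product spreads through innovation and imitation, much like an infection. The market size and the p, q coefficients below are illustrative textbook-scale values, not estimates for any real product:

```python
# Bass diffusion model: a new product spreads through a market of size M like an
# infection, driven by innovation (coefficient p, "spontaneous" adopters) and
# imitation (coefficient q, adopters "infected" by previous adopters):
#   dN/dt = (p + q * N/M) * (M - N)
# Parameter values are illustrative, of textbook magnitude, not fitted to data.

def bass_adopters(m=100_000, p=0.03, q=0.38, dt=0.01, t_max=20.0):
    """Euler integration; returns cumulative adopters at t_max (in years)."""
    n = 0.0
    for _ in range(int(t_max / dt)):
        n += (p + q * n / m) * (m - n) * dt
    return n

# After 20 simulated years, adoption has nearly saturated the market of 100 000.
print(round(bass_adopters()))
```

The imitation term q·N/M is exactly the contagion term of an epidemic model, which is why the same mathematical family serves both a marketer and an epidemiologist.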
A pathogen able to kill circa 3% of the population, and temporarily disable around 10%, can take down entire economic systems. At the same time, it stimulates the development of entire industries: 3D printing, biotech, pharmacy, and even basic medical supplies. One year ago, would anyone have believed that manufacturing latex gloves could be more strategic than manufacturing guns?
On February 24th, after I posted my last update ( Bloody hard to make a strategy ), I did what I had declared I would do: I bought 1 share of Invesco QQQ Trust (QQQ) for $224,4, and 5 shares of Square Inc at $76,85, thus investing $384,25 in the latter. Besides, I have just placed an order to buy one share ($33) of Virgin Galactic Holdings, mostly because it is high tech.
These are the small steps I have taken, but now, as financial markets are freaking out about the coronavirus, it is the right moment to figure out my endgame, my strategy. Yes, that’s surely one thing I have already nailed down as regards investment: the more the market is driven by strong emotions, the more I need to stay calm. Another thing I have learnt by experience is that it really pays off, at least for me, to philosophise about the things I do, be it science or a business strategy. It really pays to take a step back from current events, to sit and meditate on said events. Besides, in terms of scientific research, I am currently working on ways to derive economic value added from environmental projects. I guess that fundamental questioning regarding economic value, and the decisions we make about it, will be interesting.
When I philosophise about anything
connected to social sciences, I like talking to dead people. I mean, no
candles, no hand-touching, just reading and thinking. I discovered that I can find
exceptionally deep insights in the writings of people labelled as ‘classics’.
More specifically, books and articles written at an epoch when the given type
of economic institution was forming, or changing fundamentally, are
particularly insightful. It is a little bit as if I were an astrophysicist and
I had a book, written by an alien who watched the formation of a planet. The
classic that I want to have a word with right now is Louis Bachelier. I am
talking about Bachelier’s ‘Theory of Speculation’ , the PhD thesis from 1900,
originally published in French as ‘Théorie de la spéculation’ (Bachelier 1900[1]).
Here’s how Louis Bachelier introduces his thesis: ‘INTRODUCTION. The
influences which determine the movements of the Stock Exchange are innumerable.
Events past, present or even anticipated, often showing no apparent connection
with its fluctuations, yet have repercussions on its course. Beside
fluctuations from, as it were, natural causes, artificial causes are also
involved. The Stock Exchange acts upon itself and its current movement is a
function not only of earlier fluctuations, but also of the present market
position. The determination of these fluctuations is subject to an infinite
number of factors: it is therefore impossible to expect a mathematically exact
forecast. Contradictory opinions in regard to these fluctuations are so divided
that at the same instant buyers believe the market is rising and sellers that
it is falling. Undoubtedly, the Theory of Probability will never be applicable
to the movements of quoted prices and the dynamics of the Stock Exchange will
never be an exact science. However, it is possible to study mathematically the
static state of the market at a given instant, that is to say, to establish the
probability law for the price fluctuations that the market admits at this
instant. Indeed, while the market does not foresee fluctuations, it considers
which of them are more or less probable, and this probability can be evaluated
mathematically.’
As I can see, Louis Bachelier had very much the same feelings about the stock market as I have today. I am talking about the impression of rowing in a really tiny boat across a bloody big ocean, with huge waves, currents and whatnot, and a general hope that none of these big dangerous things hits me. And yes, there are sharks. A natural question arises: why step into that tiny boat in the first place, and why leave the shore? If it is that risky, why bother at all? I think there is only one sensible answer to that: because I can, because it is interesting, and because I expect a reward, all three at the same time.
This is the general, duly snappy reply, which I need to translate, over and over again, into goals and values. Back in the day, almost 30 years ago, before I went into science, I used to do business. For the last 4 years or so, I have been thinking about getting back into business. The difference between doing real business and investing in the stock market is mostly the diversification of risk; I dare say that aggregate risk is the same. When I do real business, as a small businessman, I have the same feeling of rowing in that tiny boat across a big ocean. I have so little control over the way my small business goes that when I manage to get in control of something, I get so excited that I immediately label that controlled thing a ‘successful business model’. Yet, running my own small business is so time- and energy-absorbing that I have hardly any juice left for anything else. I have all my eggs in the same basket, i.e. I do not hedge my risks. With any luck, I just insure them, i.e. I share them with someone else.
When I invest in the stock market, I almost intuitively spread my equity over many financial assets. I hedge the business-specific risks. Here comes an interesting question: why did I choose to invest in biotechnology, renewable energies and IT? (see Back in the game ). At the time, one month ago, I made that choice very intuitively, following my intellectual interests. Now, I want to understand myself more deeply. Logically, I turn to another dead man: Joseph Alois Schumpeter, and to his ‘Business Cycles’. Why am I knocking at this specific door? Because Joseph Alois Schumpeter studied the phenomenon of technological change with the kind of empirical assiduity that even today inspires respect. From Schumpeter I took the idea that once true technological change starts, it is unstoppable, and it inevitably drives resources from the old technologies towards the new ones. There are technologies in today’s world, precisely such as biotechnology, information technologies, and new sources of energy, which no one can get around. Those technologies are already reshaping our everyday existence deeply, and they will keep doing so. If I wanted to start a business of my own in any of these industries, it would take me years to get the thing running, and even more years to see any economic gains. If I invest in those industries via the stock market, I can tap directly into the economic gains of already existing businesses. Egoistic but honest. I come back to that metaphor of the boat and the ocean: instead of rowing in a small boat across a big ocean, I hook onto a passing cruiser and just follow it.
There is more to the difference
between entrepreneurship, and investment in the stock market. In the latter
case, I can clearly pace myself, and that’s what I do: every month, I invest
another parcel of capital, on the grounds of learning acquired in past months.
In entrepreneurship, such a pacing is possible, yet much harder to achieve.
Capital investment required to start the business usually comes in a lump: if I
need $500 000 to buy machines, I just need it. Of course, there is the
fine art of engaging my own equity into a business step by step, leveraging the
whole thing with credit. It is possible in an incorporated business. Still, the
very incorporation of a business requires engaging a minimum equity, which is
way greater than the baby steps of investment.
Incidentally, the present developments in the stock market make an excellent opportunity to discuss this in more depth. If you care to have a look at the NASDAQ Composite Index , especially in its relatively longer time windows, i.e. from 1 month up, you will see that what we have now is a typical deep trough. In my own portfolio of investment positions, virtually every security is just nosediving. Why does it happen? Because a lot of investors sell out their stock, at whatever price anybody is ready to pay for it.
Have you ever wondered how stock prices plummet, as is the case now? I mean, HOW EXACTLY? Market prices are actual transactional prices, i.e. prices that actual deals are closed at. When stock prices dive, the people who sell are those who panic, I get it. But who buys? I mean, for a very low stock price to become a market price, someone must be buying at this specific price. Who is it that buys at low prices when other people freak out and sell out? Interesting question. When the market price of a security falls, the common interpretation is: ‘it is ‘cause that security has become less attractive to investors’. Wait a minute: if the price falls, someone buys at this falling price. Clearly, there are investors who consider that security attractive enough to pay for it even though the price is likely to fall further.
When a whole market index, such as the NASDAQ Composite, is skiing downhill, I can see the same phenomenon. Some people – right, lots of people – sell out whatever financial assets they have, because they are afraid to see market prices fall further down. Some other people buy. They see an opportunity in the widespread depreciation. What kind of opportunity? The one that comes out of other people losing control over their behaviour. Now, from there, it is easy to go into the forest of conspiracy theories, and start talking about some ‘them’, who ‘want to..’ etc. Yes, there is a rational core to those theories. The stock market is like an ocean, and there are sharks in it. They just wait patiently for the prey to come close to them. Yes, officially, the spiritus movens of the present trough in stock markets is the coronavirus. Wait a minute: is it the coronavirus as such, or what we think about it? Wait another minute: is it about what we think as regards the coronavirus, or about what we expect other people (in the stock market) to think about it?
When I look at the hard numbers
about coronavirus, they look refreshing. When I divide the number of fatalities
by the number of officially diagnosed cases of infection, that f**ker is, under
some angle, less deadly than common flu. Take a look at Worldometer: 85 217 official carriers of it, and
2 924 of those carriers dead. The incidence of fatality is 2924/85217 =
3,43%. A study by the Center for Infectious
Disease Research and Policy at the University of Minnesota finds an even lower rate: 2,3%. As
reported by CNBC, just in the United States of America, during
the on-going flu season, the flu virus has infected 19 million people, and
caused 10 000 deaths. The incidence of death among people infected with
flu is much lower, 10000/19000000 = 0,05%, yet the absolute numbers are much
higher.
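The arithmetic above is simple enough to check in a few lines of Python. This is just a sanity check on the figures quoted in this paragraph (a Worldometer snapshot and CNBC's flu-season numbers), not epidemiology:

```python
# Case-fatality arithmetic, using the figures quoted above.

def fatality_rate(deaths: int, cases: int) -> float:
    """Deaths divided by officially diagnosed cases, as a percentage."""
    return 100.0 * deaths / cases

covid_rate = fatality_rate(2_924, 85_217)      # ≈ 3.43%
flu_rate = fatality_rate(10_000, 19_000_000)   # ≈ 0.05%

print(f"COVID-19 case fatality: {covid_rate:.2f}%")
print(f"Seasonal flu case fatality: {flu_rate:.2f}%")
```

Per diagnosed case, the coronavirus comes out deadlier; it is only in absolute death counts that the flu figures are higher.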
Please, notice that when a real
panic overwhelms financial markets, there is no visible fall in prices, because
there are no prices at all: nobody is buying. This is when pricing is
suspended, and the stock market is effectively shut down. As long as there are
any prices in the market, some market agents are buying. Here comes my big
claim, sourced in my recent conversation with the late Joseph Schumpeter: the
present panic in the stock market is just superficially connected to
coronavirus, and what is manifesting itself deep underneath that surface is
widespread preparation for a bloody deep technological change, coming our way
right now. What technological change? Digital technologies, AI,
robotization, shift towards new sources of energy, and, lurking from the bottom
of the abyss, the necessity to face climate change.
I am deeply convinced that my own
investment strategy, if it is to demonstrate any foresight and long-range planning, should above all espouse the process of that technological
change. Thus, it is useful to understand the process and to plan my strategy
accordingly. I have already done some research in
this field and my
general observation is that technological change as it is going on right now is
most of all marked by increasing diversity in technologies. What we are
witnessing is not just a quick replacement of old technologies by new ones:
that would be too easy. Owners of technological assets need to think in terms
of stockpiling many generations of technologies at the same time, and in the same place.
From the point of view of an
entrepreneur, it is what the French call “l’embarras du choix”: an embarrassingly wide range of alternative technological decisions to take. I
described it partially in ‘4 units of quantity in technological
assets to make one unit of quantity in final goods’. Long
story short, there is a threshold speed of technological change, up to which older technological assets can be simply replaced by newer ones. Past that threshold,
managing technological change at the business level becomes progressively more
and more a guessing game. Which specific cocktail of old technologies, and
those cutting-edge ones, all that peppered with a pinch of those in between,
will work optimally? The more technologies we can choose between, the more
aleatory, and the less informed is the guess we make.
I have noticed and studied one
specific consequence of that ever-widening choice of technological cocktails:
the need for cash. Mathematically, it is observable as correlation between two
coefficients: {Amortization / Revenue} on the one hand, and {Cash
& cash equivalent / Assets} on the other hand. The greater a
percentage of revenues is consumed by amortization of fixed assets, the more
cash (in proportion to total assets) businesses hold in their balance sheets. I
nailed it down statistically, and it is quite logical. The greater a palette of
choices I might have to navigate my way through, the more choices I have to
make in a unit of time, and when you need to make a lot of business choices,
cash is king, like really. Open credit lines with banks are nice, just as
crowdfunding platforms, but in the presence of significant uncertainty there is
nothing like a nice, fat bank account with plenty of liquidity at arm’s reach.
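To illustrate what I mean by that correlation, here is a minimal sketch in Python. The data points are entirely made up for the illustration; the real observation comes from statistical work on actual balance sheets:

```python
# Toy illustration: correlation between {Amortization / Revenue} and
# {Cash & cash equivalent / Assets}. The data points are hypothetical.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# (amortization / revenue, cash / assets) for five imaginary firms
firms = [(0.05, 0.08), (0.10, 0.12), (0.15, 0.18), (0.20, 0.22), (0.30, 0.35)]
r = pearson([a for a, _ in firms], [c for _, c in firms])
print(f"correlation: {r:.3f}")  # strongly positive on this toy sample
```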
When a business holds a lot of cash
for a long time, they end up holding a lot of the so-called ‘cash equivalents’, i.e. a low-risk portfolio of financial securities with a liquidity close to that of cash strictly speaking. Those securities are listed in some
stock market, whence an inflow of capital into the stock market from companies
holding a lot of cash just in case a breakthrough technology pokes its head
from around the corner. Quick technological change, quick enough to go past the
threshold of simple replacement (understood as straightforward shift from older
technology towards and into a newer one), generates a mounting wave of capital
placed on short-term positions in the stock market.
Those positions are short-term. In
this specific financial strategy, entrepreneurs perceive the stock market as
one of those garage-size warehouses. In such a place, you can store, for a
relatively short time, things which you don’t know exactly what to do with, yet
you feel you could need them in the future. Logically, a growing occurrence of short-term positions in the stock market induces additional volatility. Each
marginal million of dollars pumped in the stock market via this tube is more
restless than the previous one, whence increasing propensity of the market as a
whole to panic and run in the presence of any external stressor. Joseph
Schumpeter described that: when the economy is really up to a technological
leap, it becomes overly liquid financially. The financial hormone gets piled up in view of going all out.
I come back to thinking about my own
strategy. Whatever kind of run we have in the stock market right now, the
coronavirus is just a trigger, and the underlying tension it triggered is
linked to technological change of apparently unseen speed and complexity.
In my portfolio, just two positions
remain positive as for their return rate: Incyte Corporation and Square Inc.
All the others have yielded to the overwhelming panic in the market. Why? I can
follow the tracks of two hypotheses. One, those companies have particularly
good fundamentals, whilst being promisingly innovative: they sort of surf
elegantly on the wave of technological change. Two, it is more aleatory. In the
times of panic such as we experience now, in any given set of listed
securities, investors flock, in a largely random way, towards some businesses,
and away from others. Mind you, those two hypotheses are mutually complementary
(or rather they are not mutually exclusive): aleatory, panicky behaviour on the
part of investors conjoins with a good mix of characteristics in specific
businesses.
Right, so I have the following
situation. In my portfolio, I have two champions of survival, as regards the
rate of return – the above-mentioned Incyte Corporation and Square Inc.
– in the presence of all them other zombies that succumbed to the surrounding panic:
First Solar Inc., Macrogenics, Norsk Hydro, SMA Solar Technology AG, Virgin
Galactic Holdings, Vivint Solar, 11Bit, Asseco Business Solutions, AMUNDI EPRA
DR (ETF Tracker), and Invesco PowerShares EQQQ Nasdaq-100 UCITS (ETF Tracker).
I keep in mind the ‘how?’ of the situation. In the case of Incyte Corporation
and Square Inc., investors are willing to pay for them more than they were
ready to pay in the past, i.e. deals on those securities tend to be closed at a
ramping up average price. As for all the others, displaying negative rates of
return, presently investors pay for them less than they used to a few days or
weeks ago. I stress once again the fact that investors pay. This is how prices
are fixed. Whichever of those securities we take, some investors keep buying.
What I can observe are two different
strategies of opening new investment positions. The first one, largely
dominating, consists in buying into cheap stuff, and forcing that stuff to go
even cheaper. The second one, clearly less frequently occurring, displays
investors opening new positions in the market at a higher price than before. I
am observing two distinct behavioural patterns, and I presume, though I am not
sure, that these two patterns of investment are correlated with the intrinsic
properties of two supposedly different sets of securities. I know that at this
point I am drifting away from the classical ‘supply – demand’ pattern of
pricing in the stock market, yet I am not drifting really far. I acknowledge
the working of Marshallian equilibrium in that price setting, I just enrich my
investigation with the assumption of diverging behavioural patterns.
In my portfolio, I hold securities
which somehow are attached to both of those behavioural patterns. I have taken
a position on other people’s possible behaviour. This is an important finding
about my own investment strategy and the ways I can possibly get better at it.
I can be successful in my investment if I make the right guess as regards the
businesses or securities that incite the pattern of behaviour manifesting in
growing price that deals get closed at. What
I can observe now, in the times of panic in the market, is a selective panic.
As a matter of fact, even before that coronavirus story went western,
securities in my portfolio were disparate in their rate of return: some of them
positive, some others negative. What has changed now is just the proportion
between the positive returns and the negative ones.
Another question comes to my mind:
when I open positions on the stock of businesses in some selected technological
fields, like solar energy, do I participate in technological change, or do I
just surf over the top of financial foam made by that change? There is that
theory, called ‘Q theory of investment’ (see for example Yoshikawa 1980[2]), developed by James Tobin and William
Brainard, and that theory claims that when I invest in the stock of listed companies,
I actually buy claims on their productive assets. In other words, yes, listed
shares are just financial instruments, but when I buy them or sell them, I, as
an investor, I develop strategies of participation in assets, not just in
equity.
When I think about my own behaviour,
as investor, I certainly can distinguish between two frames of mind: the
gambling one, and the farming one. There are moments, when I fall,
unfortunately, into a sort of frantic buying and selling, and I use just small
bits of information, and the information I use is exclusively technical, i.e.
exclusively the price curves of particular securities. This is the gambling pattern.
I do my best to weed out this pattern of behaviour in myself, as it: a) usually makes me lose money, like really, and b) is contrary to my philosophy of developing
long term strategies for my investment. On the other hand, when I am free of
that gambling frenzy, I tend to look at my investment positions in the way I
look at roses in my garden, sort of ‘What can I do to make them grow bigger and
flourish more abundantly?’. This is my farming frame of mind, it is much less
emotional than the gambling one, and I intuitively perceive it as more
functional for my long-term goals.
Good, it looks like I should give
some provisional closure and put this update online. I think that in the presence
of a hurricane, it is good to stay calm, and to meditate over the place to go
when the hurricane calms down. I guess that for the weeks to come, until I
collect my next rent and invest it in the stock market, no sudden decisions are
recommended, given the surrounding panic. I think the best I can do during
those weeks is to study the fundamentals of my present portfolio of investment
positions and draw some conclusions from it.
It is weekend, and it is time to sum
up my investment decisions. It is time to set a strategy for investing the next
rent collected. Besides being a wannabe financial investor, I am a teacher and
a scientist, and thus I want to learn by schooling myself. As with any type of
behavioural analysis, I start by asking “What the hell am I doing?”. Here comes
a bit of financial theory. When I use money to buy corporate stock, I exchange
one type of financial instrument (currency) against another type of financial
instrument, i.e. equity-based securities. Why? What for? If I trade one thing
against another one, there must be a difference that justifies the trade-off. The
difference is certainly in the market pricing. Securities are much more
volatile in their prices than money. Thus, when I invest money in securities, I
go for a higher risk, and higher possible gains. I want to play a game.
Here comes another thing. When I say
I want to play a game, the ‘want’ part is complex. I am determined to learn
investment in the most practical terms, i.e. as my own investment. Still,
something has changed in my emotions over the last month. I feel apprehensive
after having taken my first losses into account. Whilst in the beginning, one
month ago, I approached investment as a kid would approach picking a toy in a
store, now I am much more cautious. Instead of being in a rush to invest in
anything, I am even pushing off a bit the moment of investment decision. It is
like sport training. Sooner or later, after the first outburst of enthusiasm,
there comes the moment when it hurts. Not much, just a bit, but enough to make
me feel uncomfortable. That’s the moment when I need to reassess my goals, and
just push myself through that window of doubt. As I follow that ‘sport
training’ logic, what works for me when I am down on optimism is consistency. I
do measured pieces of work, which I can reliably link to predictable outcomes.
Interesting. Two sessions of
investment decisions, some 4 weeks apart from each other, and I experience
completely different emotions. This is a sure sign that I am really learning
something new. I invest 2500 PLN, and in my investments, I mostly swing between
positions denominated in PLN, those in EUR, and those in USD. At current
exchange rates 2500 PLN = €582,75 = $629,72. Please, notice that when I
consider investing Polish zlotys, the PLNs, into securities denominated in PLN,
EUR or USD, I consider two, overlapping financial decisions: that of exchanging
money (pretty fixed nominal value) against securities, and that of exchanging
zlotys against other currencies.
Let’s focus for a moment, on the
strictly speaking money game. If I swing between three currencies, it is a good
move to choose one as reference. Here comes a practical rule, which I teach to
my students: your reference currency is the one you earn the major part of your
income in. My income comes from my salary, and from the rent, both in Polish
zlotys, and thus the PLN is my denominator. A quick glance at the play between
PLN, USD, and EUR brings the following results:
>> PLN to EUR: February 1st
2020, €1 = 4,3034 PLN ; February 23rd,
2020, €1 = 4,2831 PLN; net change: (4,2831 – 4,3034) / 4,3034 = -0,47%
>> PLN to USD: February 1st
2020, $1 = 3,8864 PLN ; February 23rd, 2020 $1 = 3,9623 PLN; net
change: (3,9623 – 3,8864) / 3,8864 = 1,95%
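The same net-change formula, wrapped as a tiny helper (the rates are the ones quoted just above, in PLN per unit of foreign currency):

```python
# Net change of a currency against the PLN, as in the two lines above.

def net_change(old_rate: float, new_rate: float) -> float:
    """Percentage change between two exchange-rate quotes."""
    return 100.0 * (new_rate - old_rate) / old_rate

eur = net_change(4.3034, 4.2831)   # the euro got cheaper in PLN terms
usd = net_change(3.8864, 3.9623)   # the dollar got more expensive

print(f"EUR vs PLN: {eur:+.2f}%")
print(f"USD vs PLN: {usd:+.2f}%")
```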
For the moment, it seems that the
euro is depreciating as compared to the US dollar, and I think it would be
better to invest in dollars. Since my last update on this blog, I did something
just opposite: I sold in USD, and bought in euro. That would be it as for
consistency. February 21st – decided to sell Frequency Therapeutics,
as I was losing money on it. I consistently apply the principle of cutting
losses short. I had a look at short-term trend in the price of Frequency
Therapeutics, and there is no indication of bouncing back up. Question: what to
invest that money in? Canadian Solar? No, they are falling. SMA Solar
Technology AG? Good fundamentals, rising price trend, equity €411,4 mln, market
cap €1 251 mln, clearly overvalued, but maybe for a reason. Bought SMA
Solar Technology, and it seems to have been a bad move. I have a slight loss on
them, just as I have one on First Solar. I consider selling them both, still
they both have interestingly strong fundamentals, yet both are experiencing a
downwards trend in stock price. Hard to say why. Hence, what I have just done
is to place continuous ‘sell’ orders with a price limit that covers my loss and
gives me a profit. We will see how it works. For First Solar, I placed a ‘sell’
order at minimum 54$, and regarding SMA Solar Technology I did the same with
the bottom limit at €37.
I found another interesting
investment in the industry of renewable energies: SolarWinds
Corporation. Good fundamentals, temporarily quite low in price, there is
risk, but there is gain in view, too. I would like to explain the logic of investing in particular sectors of
the economy. My take on the thing is that when I just spend my money, I spend
it sort of evenly on the whole economy because my money is going to circulate.
When I decide to invest my money in the equity of particular industries it is a
focused decision.
Thus, I come to the issue of
strategy. I am completely honest now: I have a hard time sketching any real strategy, i.e. a strategy which I am sure I will stick to. I see three basic directions.
Firstly, I can keep the present portfolio, just invest more in each position so
as to keep a constant structure. Secondly, I can keep the present portfolio as
it is and invest that new portion of money in additional positions. Thirdly,
and finally, I can sell the present portfolio in its entirety and open
a completely new set of positions. My long-term purpose is, of course, to earn
money. Still, my short-term purpose is to learn how to earn money by financial
investment. Thus, the first option, i.e. constant structure of my portfolio,
seems dumb. Firstly, it is not like I have nailed down something really
workable. That last month has been a time of experimentation, summing up with a
net loss. The third option sounds so crazy that it is tempting.
I think about investing the immediately
upcoming chunk of money into ETF funds, or so-called trackers. I have just
realized they give a nice turbo boost to my investments. The one I already have
– Amundi Epra DRG – performs nicely. The only problem is that it is denominated
in euros, and I want to move towards dollars, at least for now.
I am looking for trackers sectorally adapted to my priorities. Trackers (ETFs) are a bit more expensive – they collect a
transactional fee on the top of the fee collected by the broker – yet my
experience with Amundi Epra, a tracker focused on European real estate, is
quite positive in terms of net returns. I think about Invesco QQQ Trust
(QQQ), a tracker oriented on quick-growth stock. Another one is Microsoft.
OK, I think about Tesla,
too, but it is more than $900 one share. I would have to sell a lot of what I already
have in order to buy one. Maybe if I sell some of the well-performing biotechs
in my portfolio? Square Inc., which shares its founder and CEO, Jack Dorsey, with Twitter, is another interesting one. This is IT, thus one of my preferred sectors. I am having a look at their
fundamentals, and yes! They look as if they had finally learnt to make money.
My blog is supposed to be very much
about investment, and my personal training therein, still I keep in mind the
scientific edge. I am reworking, from the base, my concept of Energy Ponds,
which I have been developing for the last year or so (see, for example, ‘The mind-blowing hydro’). The general background of
‘Energy Ponds’ consists in natural phenomena observable in Europe as the
climate change progresses, namely: a) long-term shift in the structure of
precipitations, from snow to rain b) increasing occurrence of floods and
droughts c) spontaneous reemergence of wetlands. All these phenomena have one
common denominator: increasingly volatile flow per second in rivers. The
essential idea of Energy Ponds is to ‘financialize’ that volatile flow, so to
say, i.e. to capture its local surpluses, store them for later, and use the
very mechanism of storage itself as a source of economic value.
When water flows downstream, in a
river, its retention can be approached as the opportunity for the same water to
loop many times over the same specific portion of the collecting basin (of the
river). Once such a loop is created, we can extend the average time that a
liter of water spends in the whereabouts. Ram pumps, connected to storage structures
akin to swamps, can give such an opportunity. A ram pump uses the kinetic
energy of flowing water in order to pump some of that flow up and away from its
mainstream. Ram pumps allow forcing a process which we know as otherwise natural. Rivers, especially in geological plains, where they flow relatively slowly, tend to build, with time, multiple ramifications. Those branchings can be directly observable at the surface, as meanders, floodplains or seasonal lakes, but much of that branching is underground, as pockets of groundwater. In this
respect, it is useful to keep in mind that mechanically, rivers are the
drainpipes of rainwater from their respective basins. Another basic
hydrological fact, useful to remember in the context of the Energy Ponds
concept, is that strictly speaking retention of rainwater – i.e. a complete
halt in its circulation through the collecting basin of the river – is rarely
possible, and just as rarely it is a sensible idea to implement. Retention
means rather a slowdown to the flow of rainwater through the collecting basin
into the river.
One of the ways that water can be slowed down consists in making it loop many times over the same section of the river. Let’s imagine a simple looping sequence: water from the river is ram-pumped up and away into retentive structures akin to swamps, i.e. moderately deep, spongy structures underground, with high capacity for retention, covered with a superficial layer of shallow-rooted vegetation. With time, as the swamp fills with water, the surplus is evacuated back into the river by a system of canals. Water stored in the swamp will ultimately be evacuated, too, minus evaporation; it will just happen much more slowly, by the intermediary of groundwaters.
In order to illustrate the concept mathematically, let’s suppose that we have water in the river flowing at the pace of, e.g., 45 m3 per second. We make it loop once via ram pumps and retentive swamps, and, as a result of that looping, the speed of the flow is sliced by 3. In the long run, we slow down the way that the river works as the local drainpipe: from 45 m3 per second down to [45/3 = 15] m3 per second. As water from the river flows slower overall, it can yield more environmental services: each cubic meter of water has more time to ‘work’ in the ecosystem.
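The looping example reduces to very little code; here it is as a small Python sketch, using the example's figures (a flow of 45 m3 per second, a slowdown factor of 3):

```python
# Back-of-the-envelope version of the looping example: slowing the drainage
# flow stretches the average time each cubic meter spends in the basin.

def slowed_flow(flow_m3_s: float, slowdown_factor: float) -> float:
    """Effective drainage flow after looping water through retention swamps."""
    return flow_m3_s / slowdown_factor

original = 45.0                        # m³ per second, as in the example
effective = slowed_flow(original, 3)   # 15.0 m³ per second
print(f"drainage slowed from {original:.0f} to {effective:.0f} m³/s")
print(f"each cubic meter lingers roughly {original / effective:.0f}x longer")
```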
When I think of it, any human social
structure, such as settlements, industries, infrastructures etc., needs to stay
in balance with natural environment. That balance is to be understood broadly,
as the capacity to stay, for a satisfactorily long time, within a ‘safety
zone’, where the ecosystem simply doesn’t kill us. That view has little to do
with the moral concepts of environment-friendliness or sustainability. As a
matter of fact, most known human social structures sooner or later fall out of
balance with the ecosystem, and this is how civilizations collapse. Thus, here
comes the first important assumption: any human social structure is, at some
level, an environmental project. The incumbent social structures, which we can consider relatively stable, are environmental projects which have simply held in place long enough to grow social institutions, and those institutions allow further seeking of environmental balance.
Some human structures can be deemed
‘sustainable’, but this looks rather like an exception than the rule. As a
civilization, we are anything but frugal and energy-saving. Still, the
practical question remains, how can we possibly enhance the creation of
sustainable social structures (markets, cities, industries etc.), without
relying on a hypothetical moral conversion from the alleged ‘greed’ and
‘wastefulness’, to a more or less utopian state of conscious sustainability. The
model presented below argues that such enhancement can occur by creating
economic ownership in local communities, as regards the assets invested in
environmental projects. Economic ownership is to distinguish from the strictly
speaking legal ownership. It can cover, of course, property rights as such, but
it can stretch to many different types of enforceable claims on the proceeds
from exploiting economic utility derived from the environmental projects in
question.
Any human social structure generates
an aggregate amount of environmental outcomes EV, understood as reduction of
environmental risks. Environmental risk means the probable, uncertain
occurrence of adverse environmental effects. Part of those outcomes is captured as economic utility U(EV), and part comes as free-ride benefits F(EV). For any human social structure there is a
threshold value U*(EV), above which the economic utility U(EV) is sufficient to
generate social change supportive of the structure in question. Social change
means the creation of institutions and markets, which, in turn, have the
capacity to last. On the other hand, should U(EV) be lower than U*(EV), the
structure in question cannot self-justify its interaction with natural
environment, and falls apart.
The derivation of U(EV) is a
developmental process rather than an instantaneous phenomenon. It is long-term
social change, which can be theoretically approached as evolutionary adaptive
walk in rugged landscape. In that adaptive walk, the crucial moment is the
formation of markets and/or institutions, where exchange of utility occurs as stochastic change over time in an Ornstein–Uhlenbeck process with a jump component, akin to that observable in electricity prices (Borovkova & Schmeck 2017[1]).
It means that human social structures become able to optimize their
environmental impact when they form prices stable enough to be mean-reverted
over time, whilst staying flexible enough to drift with jumps. Most
technologies we invent serve to transform environmental outcomes into exchangeable
goods endowed with economic utility. The set of technologies we use impacts our
capacity to sustain social structures. Adaptive walk requires many similar
instances of a social structure, similar enough to have common structural
traits. Each such instance is a 1-mutation neighbour of at least one other
instance.
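For readers who prefer to see such a process rather than imagine it, here is a minimal simulation of mean reversion with jumps. All the parameters are hypothetical, picked only to show the shape of the dynamics, not calibrated to any real market:

```python
# Minimal Ornstein–Uhlenbeck-style process with a jump component
# (discretized; all parameters are illustrative assumptions).
import random

def simulate_ou_with_jumps(x0=100.0, mean=100.0, theta=0.3, sigma=2.0,
                           jump_prob=0.02, jump_scale=15.0,
                           steps=1000, seed=42):
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        drift = theta * (mean - x)         # pull back towards the long-run mean
        shock = random.gauss(0.0, sigma)   # ordinary diffusion noise
        jump = random.gauss(0.0, jump_scale) if random.random() < jump_prob else 0.0
        x = x + drift + shock + jump
        path.append(x)
    return path

path = simulate_ou_with_jumps()
print(f"average level over the run: {sum(path) / len(path):.1f}")  # hovers near the mean
```

Mean reversion keeps the simulated price gravitating back to its long-run level, whilst the occasional jump gives it the drift-with-jumps flexibility described above.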
By the way, if you want to contact me directly,
you can mail at: goodscience@discoversocialsciences.com
Here comes the next update in my
process of self-learning about investment in financial markets. In the last
update ( Back in the game ), I briefly sketched my starting
point, i.e. my first handful of financial positions, and my long-term goals.
According to a pace I set for myself, once a month I make investment decisions.
Why once a month? Because once a month I collect the rent from an apartment I
have in town. My basic concept is to invest the rent I collect on one form of
capital – real estate – into another form of capital.
What should be my next steps in
investment? What should be my strategy? I start by studying my expectations.
What do I expect? Shortly and honestly: I expect to beat the index. In the
jargon of investment, it means that I expect to achieve abnormally high returns
on investment, higher than the returns offered by composite indexes for the
stock markets where I invest. Lots of people expect to beat the index, most of
them fail, so what makes me expect that I can do it? Well, I can quite clearly
pin down situations, in my own past experience as investor, when I managed to
beat the index by many lengths, and other situations when I failed lamentably.
I had one success, which gives me
some confidence. My success was named Bioton, a Polish biotech company. I had an eye on them for many years, I worked on their case with my students, and I knew they had good foundations.
Innovative enough to launch an original substance of their own – synthetic
insulin – and conservative enough to diversify their business into good old
generics like basic vitamins, antibiotics or basic vaccines. My investment story with them began in January 2014. At the time,
the founder of the company, Mr Krauze, suddenly sold out all of his
participations and essentially disappeared from the business landscape of Poland
in quite mysterious circumstances. There was a healthy business in a
reputationally shitty spot. Their stock price hit an all-time low: 0,03 PLN
(less than $0,01) per share. WTH, I thought. At this price, there is not even
much to lose. I bought. Two and a half years later, by the end of September
2016, I sold those shares at 10,04 PLN (something like 3 dollars). Yes, I made 33367%
of profit on that one. Had I been less timid in opening my initial investment
position, I could have bought some real estate with the proceeds. By the way,
as I have a fresh look at Bioton, they seem to be back almost to the point
where I invested in them. Ever since Autumn 2016, their stock price has been
falling. Now, it is at 3,5 PLN per share, which is pretty low, and as I study
the graph of their price, they are likely to hit, quite soon, that bottom
plateau between the horns of the bull. I need to study their fundamentals, but
it looks like the next good opportunity to open an interesting position. I
check, and it looks tricky.
I mean, it usually looks tricky. At
Bioton, they are losing money: at the end of the 3rd quarter of 2019
they had an operational loss of more than $30 mln. Those fundamentals look bad
in the context of their business history. Their stock price tends to nosedive,
and there are fundamental reasons to that. Still, they remain undervalued as
regards the ‘market to book’ ratio. Calculated as ‘market capitalization
divided by equity’, it makes PLN 300,52 mln / PLN 621 mln ≈ 0,484. In other words: there is room for making money on this position, at least up to a ‘market to book’ coefficient of 1,00, i.e. up to a stock price of about PLN 7,2 ÷ 7,3, which would give a gain of more than 100%.
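The whole ‘market to book’ reasoning fits into a few lines of Python (the inputs are the Bioton figures quoted above: market cap and equity in PLN millions, price in PLN per share):

```python
# 'Market to book' screen, with the Bioton figures from this paragraph.

def market_to_book(market_cap: float, equity: float) -> float:
    return market_cap / equity

def implied_price_at_ratio(price: float, current_ratio: float,
                           target_ratio: float = 1.0) -> float:
    """Price the stock would trade at if the ratio moved to target_ratio."""
    return price * target_ratio / current_ratio

ratio = market_to_book(300.52, 621.0)          # ≈ 0.48: undervalued on paper
target = implied_price_at_ratio(3.5, ratio)    # ≈ 7.2 PLN per share
gain = 100.0 * (target / 3.5 - 1.0)            # ≈ +106%

print(f"market to book: {ratio:.3f}")
print(f"implied price at ratio 1.0: {target:.2f} PLN")
print(f"implied gain: {gain:.0f}%")
```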
Miracles happen quite infrequently, whence their reputation. Still, I can say ‘Gotcha!’. That’s the strategy for
my investment, which I have been turning and sniffing around for during the
last 2 weeks. I had to recapitulate those past events in order to bring those
thoughts into daylight. What I am looking for are companies with healthy
fundamentals, i.e. with reasonably good financial results and prospects for a good near future, which, for some reason, are deeply undervalued.
The next step is to run all my
present investment positions with the same test. One, check their fundamentals.
Two, check their relative market
valuation, denominated over their equity. Three, check the price curve and try
to locate the present price of my investment position, so as to check
opportunities for growth. I start with OAT – OncoArendi Therapeutics. I am having a look at their
quarterly financials for Q3 2019. They lose money, as typical R&D-focused
biotechs frequently do. They have very little revenue and much bigger
operational costs. They lose cash, too, which is more worrying. When I sum up
their operational cash-flow (-2,04 PLN mln) with the investment-related one
(-7,3 PLN mln), and with the financial one (6,69 PLN mln), the bottom line is
negative, by almost 3 millions of Polish zlotys. I have a look at OAT’s stock price, and I see that I behaved dumbly: I bought
their stock right before a local peak, which is now being followed by a
descent. I bought on the back of the bear, in the stock-market jargon. Their
‘Market to Book’ coefficient is 164,04/80,08 = 2,05. I see I really need that slow, grinding learning of investment by writing about my own investment. That’s
what I can call a perfect mistake: bad financials, supposedly overappreciated
stock.
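The cash-flow bottom line I just computed for OAT, spelled out (amounts in PLN millions, as quoted above):

```python
# Summing OAT's three cash-flow streams into the net change in cash.
flows = {
    "operational": -2.04,
    "investment": -7.30,
    "financial": 6.69,
}
net_cash_flow = sum(flows.values())   # negative: the company burns cash
print(f"net change in cash: {net_cash_flow:.2f} mln PLN")
```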
I move forward with APREA
THERAPEUTICS. The
fundamentals are weak and look a bit foggy. They certainly lose money, with a
net loss of $6,25 mln in Q3 2019, twice the net loss one year earlier.
Technically, the company has no equity in the strict sense of the term. What I
bought are some convertible securities. Still, those convertibles, apparently
not burdened with any liability, amount to $47 393 333,00. The market
capitalization, according to Yahoo Finance, is $664,35 mln, which gives one of
those crazy ‘market to book’ ratios: 14,02. Well, if I were looking for an
undervalued biotech company, this is quite the opposite of what I should have
bought. Next, FREQUENCY THERAPEUTICS INC. An interesting case. They lose money, but just
a bit: in Q3 2019 they had a net loss of $575 000, whilst in Q3 2018 it
was $5,14 mln. In 2019 they started to have operational revenues, some $24
mln as of the end of September 2019. They have an equity of $92 mln, and a market
capitalization of $743,45 mln, which once again gives a huge ‘market to book’
ratio: 8,08.
The next position in my portfolio is
INCYTE CORPORATION. Those guys are dutiful:
they have already published their annual 10-K report for the fiscal year 2019.
The fundamentals look nice. They have a steady profit, second year on end: $402
mln of operational profit, out of revenues amounting to $1 775 mln. The
market to book ratio is hilariously high: $16,9 bln of market cap, denominated
over $2,6 bln of equity makes 6,5. Still, the trend in market price is
interesting: it looks like halfway up the ascending horn of the bull. This investment
position is maybe not the most illustrative of my past successful strategy,
yet it seems to offer some promise. I pass to MACROGENICS INC. The latest financial report available is Q3
2019, which shows a clear financial deterioration as compared to Q3 2018: lower
revenues, a deeper loss. The market appreciates this company moderately, with a
market cap of $497,42 mln, denominated 1,95 times over the company’s equity,
amounting to $255,2 mln. With all that, the price trend looks moderately
promising: even over 1 month there seems to be room for a nice jump up.
Finally, my last investment position
in biotech: VIR BIOTECHNOLOGY INC. Their fundamental operations don’t
look good: a shrinking revenue, and a deepening operational loss. Still, their
cash flow is positive: investors seem to trust them, and the whole show
came out more than $46 mln in surplus in terms of cash. Over the last weeks, i.e. since I
bought their shares, the price has been decreasing, and yet, on Friday 14th,
there seems to have been some bounce-back. In terms of market valuation, this
company stands $1,75 bln, which, denominated over their nominal equity of
$355,8 mln, spells 4,92. This business is to watch with caution. It looks like
a big balloon: a lot of confidence from investors with little apparent
substance in the business, but what do you want, that’s biotech.
That would be about all as regards
my investment positions in biotech, and so I pass to my two Polish crown jewels
at the frontier of digital and show business: CYFROWY Polsat and ATM Grupa. CYFROWY Polsat has excellent fundamentals. In Q3 2019 they
made PLN 459 mln ($117 mln) of quarterly operational profit. By the way, in
businesses involved with media and cinematic production, the most important
operational metric is EBITDA, or operational profit plus amortization, and this
one stands at more than PLN 1 bln in Q3 2019. Their equity was PLN 14 155
mln, which, when serving as denominator for their market cap of PLN 5 311
mln gives a market to book ratio of 0,38. Interesting: finally, an investment
position with clear undervaluation in the stock market. The long-term trend in
their stock price is generally steady growth with temporary bumps. As regards ATM Grupa,
they have good operational fundamentals, i.e. a comfortable operational profit,
yet they seem to be losing cash at the level of financing activities. Anyway,
they accumulate equity at a steady pace, and their market capitalization values
that equity at 1,59 of its nominal value.
I pass to my investment positions in
energy, and I start with FIRST SOLAR INC. Their fundamentals are sort of hesitantly
good: they had a positive operational margin in Q3 2019, which still had to offset a
deeply negative one in Q1 2019. I found a piece of news at Yahoo Finance which allows some optimism as for
the immediate prospects of their business.
Their market capitalization is $5,83 bln, and denominated over book
value of equity ($5,2 bln), stands at 1,12. As for the Polish PGE, they seem to be attaching a lot of
importance to that EBITDA metric, as if they were running some show business,
not power plants. That EBITDA looks substantial, more than PLN 6,6 bln in 2019 (provisional,
unaudited results), yet it is a bit less than in 2018. Their equity – PLN 48,7
bln – is deeply undervalued in the stock market: in terms of market capitalization,
it stands at 0,23. Looks interesting for a long-term position.
Thus I come to the two investment
positions, which I can’t help calling but my follies: ASTON MARTIN, and BLACK DIAMOND GROUP LTD . The fundamentals of ASTON MARTIN
look a bite like the beginning of trouble. They spike with their sales in the
Americas, and in China, but everywhere else, their sales fall. In 2018, they
had a positive operational margin, but in 2019 not anymore. Although their
assets keep growing, including their fixed assets, they accumulate debt even
faster, whence a decrease in nominal equity: £361,9 mln. With a market
capitalization at £961,7 mln, they are overappreciated by the market at 2,66. BLACK
DIAMOND makes me understand why I bought it, and it is stupid. What I wanted to
open a position on was Black Diamond Therapeutics. Black Diamond Group, which I
actually bought, was just next on the list. I clicked on the wrong one, and I
didn’t notice my mistake. Still, just as in a neural network, errors can lead
to something interesting. Here comes the first interesting fact: that company
is undervalued in the stock market. Their market cap, at CAD 102,87 mln, is
just 0,47 of their nominal equity at the end of Q3 2019. Fundamentally, they
display a pre-tax loss, yet it is a loss computed after amortization. When we
kick amortization out of the formula, Black Diamond made a solid CAD 10 mln of
operational profit in Q3 2019.
Just as an additional explanation: I
do not perform the same kind of check for my last financial position, the
tracker fund Amundi Epra DR ISIN LU1437018838. This is a fund calibrated so as to
reflect the performance of listed real-estate companies across Europe, in Borsa
Italiana,
Nyse Euronext Paris, Nyse Euronext
Amsterdam, and London Stock Exchange. It cannot really be undervalued or overvalued.
Now, the drums. I mean, the drums
start drumming, so as to build up some tension, and I compute the rate of
return I had on those investments of mine over the first 2 weeks since opening.
In the table below, I sum up the descriptive remarks developed earlier, and I
give it a bottom line with the rates of return.
| Company | Market to book | Fundamentals | Remarks about price trend – opportunity for growth | Rate of return as of February 14th, 2020 (after 2 weeks since opening) |
|---|---|---|---|---|
| OAT – OncoArendi Therapeutics | 2,05 | weak | no | -13,78% |
| Aprea Therapeutics | 14,02 | weak | possible | -21,1% |
| Frequency Therapeutics | 8,08 | weak, but improving | problematic | -1,24% |
| Incyte Corporation | 6,5 | strong | possible | 6,36% |
| Macrogenics Inc | 1,95 | weak | problematic | 8,19% |
| Vir Biotechnology | 4,92 | weak | possible | -29,01% |
| Cyfrowy Polsat | 0,38 | strong | possible but moderate | -1,86% |
| ATM Grupa | 1,59 | strong | not much, looks more like stable | 0,08% |
| First Solar | 1,12 | mixed, uncertainty | possible | 3,09% |
| PGE | 0,23 | good, slightly deteriorating | possible | -13,54% |
| Aston Martin | 2,66 | deteriorating | no | -13,79% |
| Black Diamond Group | 0,47 | strong | possible | -7,82% |
| Amundi Epra – tracker fund | n.a. | strong | possible | 5,21% |
A few observations emerge out of
that table, as regards my learning by writing about doing things I am supposed
to learn how to do. I fail more often than I succeed. Out of the 13 positions I
opened, 5 are successful (i.e. bring positive return), and the remaining 8 are
failures, at least for the moment. Of course, it is just 2 weeks, and I want to
invest over the long range. A longer time perspective might change things. Still,
the science of financial markets says that at any given moment I face a certain
aggregate volatility of those markets, and my personal art of survival in them
consists in keeping myself on the positive fringe of that volatility.
The useful assumption is that I never have full knowledge of the financial
market, and my strategy is always one of trial and error. I need to have
many takes at the thing, and my overall success depends on the percentage of those
attempts from which I derive positive outcomes. Right now, my rate of success,
measured across investment positions (i.e. not yet weighted by the
respective amounts of money I invested in each of them), is 5/13 = 0,3846.
In order to even fathom the outcomes I want to reach, my first practical
improvement should be to drive that coefficient above 0,5. When I say ‘drive’
it means that my incidence of success should depend on my choices, not on the
volatility of the market. Hence, I need to hedge. I can already see that
trackers, such as the Amundi Epra one I already have, are a good way to hedge.
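That 5/13 bookkeeping can be made explicit in a few lines of code. The returns are copied from the table above; the short labels are just my shorthand for the full company names.

```python
# Rates of return after 2 weeks, in %, as of February 14th, 2020
returns = {
    "OAT": -13.78, "Aprea": -21.1, "Frequency": -1.24, "Incyte": 6.36,
    "Macrogenics": 8.19, "Vir": -29.01, "Cyfrowy Polsat": -1.86,
    "ATM Grupa": 0.08, "First Solar": 3.09, "PGE": -13.54,
    "Aston Martin": -13.79, "Black Diamond": -7.82, "Amundi Epra": 5.21,
}

# A position counts as a success when its return is positive
successes = [name for name, r in returns.items() if r > 0]
success_rate = len(successes) / len(returns)
print(len(successes), "out of", len(returns))   # 5 out of 13
print(round(success_rate, 4))                   # 0.3846
```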
I
am writing a book right now, and I am sort of taken up with it, so I blog much less
frequently than I planned. Just to keep up with the commitment, which any
blogger has sort of imprinted in their mind, to deliver some meaningful
content, I am publishing, in this update, the outline of my first chapter. It
has become almost a truism that we live in a world of increasingly rapid
technological change. When a statement becomes almost a cliché, it is useful to
pass it in review, just to be sure that we understand what the statement is
about. In a very pragmatic perspective of an entrepreneur, or, as a matter of
fact, that of an infrastructural engineer, technological change means that something
old needs to be coupled with or replaced by something new. When a new
technology comes around, it is like a demon: it is essentially an idea,
frequently prone to protection through intellectual property rights, and that
idea looks for a body to sneak into. Humans are supposed to supply the body,
and they can do it in two different ways. They can tell the new idea to coexist
with some older ones, i.e. we embody new technologies in equipment and
solutions which we couple functionally with older ones. Take any operating
system for computers or mobile phones. At the moment of release, the people
disseminating it claim it is brand new, but scratch the surface just a little
and you find 10-year-old algorithms underneath. Yes, they are old, and yes,
they still work.
Another
way to embody a new technological concept is to make it supplant older ones
completely. We do it reluctantly, yet sometimes it really looks like a better
idea. Electric cars are a good example of this approach. Initially, the basic
idea seems to have consisted in putting electric engines into an otherwise
unchanged structure of vehicles propelled by combustion engines. Still,
electric propulsion is heavier, as we need to drive those batteries around.
Significantly greater weight means the necessity to rethink steering,
suspension, structural stability etc., whence the need to design a new
structure.
Whichever
way of embodying new technological concepts we choose, our equipment ages. It
ages physically and morally, in various proportions. Aging in technologies is
called depreciation. Physical depreciation means physical wearing
and destruction in a piece of equipment. As it happens – and it happens to anything
used frequently, e.g. shoes – we choose between repairing and replacing the destroyed
parts. Whatever we do, it requires resources. From the economic point of view,
it requires capital. As strange as it could sound, physical depreciation occurs
in the world of digital technologies, too. When a large digital system, e.g.
that of an airport, is being run, something apparently uncanny happens: some
component algorithms of that system just stop working properly, under the
burden of too much data, and they need to be replaced sort of on the go,
without putting the whole system on hold. Of course, the essential cause of
that phenomenon is the disproportion between the computational scale of
pre-implementation tests, and that of real exploitation. Still, the interesting
thing about those on-the-go patches of the system is that they are not fundamentally
new, i.e. they do not express any new concept. They are otherwise known,
field-tested solutions, and they have to be this way in order to work.
Programmers who implement those patches do not invent new digital technologies;
they just keep the incumbent ones running. They repair something broken with
something working smoothly. Functionally, it is very much like repairing a fleet
of vehicles in an express delivery business.
As
we take care of the physical depreciation occurring in our incumbent equipment
and software, new solutions come to the market, and let’s be honest: they are
usually better than what we have at the moment. The technologies we hold become
comparatively less and less modern, as new ones appear. That phenomenon of
aging by obsolescence is called moral depreciation. The proportions of
that cocktail of physical and moral depreciation depend on the
pace of the technological race in the given industry. When a lot of alternative,
mutually competing solutions emerge, moral obsolescence accelerates and tends
to become the dominant factor of aging in our technological assets. Moral
depreciation creates a tension: as we watch the state of the art in our industry
progressively moving away from our current technological position, determined
by the assets we have, we find ourselves under a growing pressure to do
something about it. Finally, we come to the point of deciding to invest in
something definitely more up to date than what we currently have.
Both
layers of depreciation – physical and moral – absorb capital. It seems
pertinent to explain how exactly they do so. We need money to pay for goods and
services necessary for repairing and replacing the physically used parts of our
technological basket. We obviously need money to pay for the completely new
equipment, too. Where does that money come from? Are there any patterns as for
its sourcing? The first and the most obvious source of money to finance
depreciation in our assets is the financial scheme of amortization. In
many legal regimes, i.e. in all the developed countries and in a large number
of emerging and developing economies, an entity in possession of assets
subject to depreciation is allowed to subtract from its income tax base a
legally determined financial amount, in order to provide for depreciation.
The
legally possible amount of amortization is calculated as a percentage of the book
value ascribed to the corresponding assets, and this percentage is based on
their assumed useful life. If a machine is supposed to have a useful life of five years,
after all is said and done as for its physical and moral depreciation, I can
subtract from my tax base 1/5th = 20% of its book value. Question:
which exact book value, the initial one or the current one? It depends on the
kind of deal an entrepreneur makes with tax authorities. Three alternative ways
are possible: linear, decreasing, and increasing. When I do linear
amortization, I take the initial value of the machine, e.g. $200 000,
I divide it into 5 equal parts right after the purchase, thus in 5 instalments
of $40 000 each, and I subtract those instalments annually from my tax
base, starting from the current year. After linear amortization is over, the
book value of the machine is exactly zero.
Should
I choose decreasing amortization, I take the current value of my machine
as the basis for the 20% reduction of my tax base. The first year, the machine
is brand new, worth $200 000, and so I amortize 20% * $200 000 =
$40 000. The next year, i.e. in the second year of exploitation, I start
with my machine being worth $200 000 – $40 000 = (1 – 20%) *
$200 000 = $160 000. I repeat
the same operation of amortizing 20% of the current book value, and I do:
$160 000 – 20% * $160 000 = $160 000 – $32 000 =
$128 000. I subtracted $32 000 from my tax base in this second year
of exploitation (of the machine), and, at the end of the fiscal year, I landed
with my machine being worth $128 000 net of amortization. A careful reader
will notice that decreasing amortization is, by definition, a non-linear
function tending asymptotically towards zero. It is a never-ending story, and a
paradox. I assume a useful life of 5 years in my machine; hence I subtract 1/5th
= 20% of its current value from my tax base, and yet the process of
amortization takes de facto longer than 5 years and has no clear end. After 5
years of amortization, my machine is worth $65 536 net of amortization, and
I can keep going. The machine is technically dead as useful technology, but I
still have it in my assets.
Increasing
amortization is based on more elaborate assumptions than the two
preceding methods. I assume that my machine will be depreciating over time at
an accelerating pace, e.g. 10% of the current value in the first year, 20%
annually over the years 2 – 4, and 30% in the 5th year. The
underlying logic is that of progressively diving into the stream of
technological race: the longer I have my technology, the greater is the
likelihood that someone comes up with something definitely more modern. With
the same assumption of $200 000 as initial investment, that makes me write
off my tax base the following amounts: 1st year – $20 000, 2nd
÷ 4th year – $40 000, 5th year – $60 000. After
5 years, the net value of my equipment is zero.
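The three amortization paths just described can be compared side by side in a short sketch. The $200 000 machine and the rates are the ones from the examples above; the function names are mine.

```python
def linear(value, years):
    """Equal instalments of the initial value; book value hits zero at the end."""
    return [value / years] * years

def decreasing(value, rate, years):
    """Each year, write off `rate` of the *current* book value."""
    schedule = []
    for _ in range(years):
        write_off = value * rate
        schedule.append(write_off)
        value -= write_off
    return schedule

def increasing(value, rates):
    """Rates fixed in advance and applied to the *initial* value."""
    return [value * r for r in rates]

v = 200_000
print(linear(v, 5))                            # five instalments of 40 000
print(decreasing(v, 0.20, 5))                  # 40 000, 32 000, 25 600, ...
print(round(v - sum(decreasing(v, 0.20, 5))))  # 65 536 still on the books after 5 years
print(increasing(v, [0.10, 0.20, 0.20, 0.20, 0.30]))  # 20 000, three times 40 000, 60 000
```

The decreasing schedule never brings the book value to zero, which is exactly the paradox discussed above.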
The
exact way I can amortize my assets depends largely on the legal regime in force
– national governments have their little ways in that respect, using the rates
of amortization as incentives for certain types of investment whilst
discouraging other types – and yet there is quite a lot of financial
strategy in amortization, especially in large business structures with
ownership separate from management. We can notice that linear amortization
gives comparatively greater savings in terms of tax due. Still, as amortization
consists in writing an amount off the tax base, we need any tax base at all
beforehand. When I run a well-established, profitable business way past its
break-even point, tax savings are a sensible idea, and so is linear
amortization in my fixed assets. However, when I run a start-up, still deep in
the red zone below the break-even point, there is not really any tax base to
subtract amortization from. Recording a comparatively greater amortization from
operations already running at a loss just deepens the loss, which, at the end
of the day, has to be subtracted from the equity of my business, and it doesn’t
look good in the eyes of my prospective investors and lenders. Relatively
quick, linear amortization is a good strategy for highly profitable operations
with access to lots of cash. Increasing amortization could be good for that
start-up business, where the greatest margin of operational income
turns up some time after day zero of operations.
Interestingly,
the least obvious logic comes with decreasing amortization. What is the
practical point of amortizing my assets asymptotically down to zero, without
ever reaching zero? Good question, especially in the light of a practical fact
of life, which the author challenges any reader to test by themselves: most managers
and accountants, especially in small and medium sized enterprises, will
intuitively amortize the company’s assets precisely this way, i.e. along the
decreasing path. Question: why do people do something apparently illogical?
Answer: because there is a logic to it; it is just hard to phrase. What
about the logic of accumulating capital? Both linear and increasing
amortization lead to the book value of the corresponding assets dropping
to zero at some point in time. Writing a lot of value off my assets means
that either I subtract the corresponding amount from the passive side of my
balance sheet (i.e. I repay some loans or I give away some equity), or I
compensate the write-off with new investment. Either I lose cash, or I am in
need of more cash. When I am in a tight technological race, and my assets are
subject to quick moral depreciation, those sudden drops to zero can put a
lot of financial stress on my balance sheet. When I do something apparently
detached from my technological strategy, i.e. when I amortize decreasingly,
sudden capital quakes are replaced by a gentle descent, much more predictable.
Predictable means e.g. negotiable with banks who lend me money, or with
investors buying shares in my equity.
This
is an important pattern to notice in commonly encountered behaviour regarding
capital goods: most people will intuitively tend to protect the capital base of
their organisation, be it a regular business or a public agency. When
choosing between amortizing their assets faster, so as to reflect the real pace
of their ageing, or amortizing them slower, thus a bit against the real occurrence
of depreciation, most people will choose the latter, as it smoothens the
resulting changes in the capital base. We can notice it even in ways that most
of us manage our strictly private assets. Let’s take the example of an ageing
car. When a car reaches the age at which an average household could consider
changing it, like 3 – 4 years, only a relatively tiny fraction of the population,
probably not more than 16%, will actually change it for a new one. The majority (the
author of this book included, by the way) will rather patch and repair, and
claim that ‘new cars are not as solid as those older ones’. There is a logic to
that. A new car is bound to lose around 25% of its market value annually over
the first 2 – 3 years of its useful life. An old car, aged 7 years or more,
loses around 10% or less per year. In other words, when choosing between
shining new things that age quickly and the less shining old things that age
slowly, only a minority of people will choose the former. The most common
behavioural pattern consists in choosing the latter.
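The car example can be put in numbers. The 25% and 10% annual loss rates are the ones quoted above; the $30 000 purchase price is a hypothetical figure of mine, just to anchor the comparison.

```python
def value_after(initial, annual_loss_rate, years):
    """Market value after compounding an annual percentage loss."""
    return initial * (1 - annual_loss_rate) ** years

price = 30_000  # hypothetical purchase price
print(round(value_after(price, 0.25, 3)))  # 12656: a new car losing ~25% a year
print(round(value_after(price, 0.10, 3)))  # 21870: an older car losing ~10% a year
```

After three years the fast-ageing asset has shed more than half of its value, which is the intuition behind the patch-and-repair behaviour.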
When
recurrent behavioural patterns deal with important economic phenomena, such as
technological change, an economic equilibrium could be poking its head from
around the corner. Here comes an alternative way of denominating depreciation
and amortization: instead of denominating it as a fraction of the value
attributed to assets, we can denominate it over the revenue of our business. Amortization
can be seen as the cost of staying in the game. Technological race takes a toll
on our current business. The faster our technologies depreciate, the costlier
it is to stay in the race. At the end of the day, I have to pay someone or
something that helps me keep up with the technological change happening
around, i.e. I have to share, with that someone or something, a fraction of
what my customers pay me for the goods and services I offer. When I hold a
differentiated basket of technological assets, each ageing at a different pace
and starting from a different moment in time, the aggregate capital write-off
that corresponds to their amortization is the aggregate cost of keeping up with
science.
When
denoting K as the book value of assets, with a
standing for the rate of amortization corresponding to one of the strategies
sketched above, P representing the average price of goods we
sell, and Q their quantity, we can sketch the considerations
developed above in a more analytical way, as a coefficient labelled A,
as in equation (1) below.
A = (K*a)/(P*Q)
(1)
The
coefficient A represents the relative burden of aggregate
amortization of all the fixed assets in hand, upon the revenues recorded in a
set of economic agents. Equation (1) can be further transformed so as to
extract quantities at both levels of the fraction. Factors in the denominator
of equation (1), i.e. prices and quantities of goods sold in order to generate
revenues will be further represented as, respectively, PG
and QG, whilst the book value of assets subject to
amortization will be symbolized as the arithmetical product QK*PK
of market prices PK of assets, and the quantity QK
thereof. Additionally, we drive the rate of amortization ‘a’ down to what it
really is, i.e. the inverse of an expected lifecycle F,
measured in years and ascribed to our assets. Equation (2) below shows an
analytical development in this spirit.
A = (1/F)*[(PK*QK)/(PG*QG)] (2)
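Equation (2) translates directly into code. The numbers below are purely hypothetical, just to show the mechanics: assets worth 500 units of currency, annual revenues of 1 000, and a 10-year expected lifecycle.

```python
def amortization_burden(p_k, q_k, p_g, q_g, f):
    """Coefficient A of equation (2): share of revenue absorbed by amortization."""
    return (1 / f) * (p_k * q_k) / (p_g * q_g)

# Hypothetical economy: PK*QK = 500, PG*QG = 1000, lifecycle F = 10 years
print(amortization_burden(p_k=1.0, q_k=500.0, p_g=1.0, q_g=1000.0, f=10))  # 0.05
```

In this toy case, one twentieth of the revenue goes to providing for depreciation.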
Before
the meaning of equation (2) is explored more in depth, it is worth explaining
the little mathematical trick that economists use all the time, and which
usually raises doubts in the minds of bystanders. How can anyone talk about an
aggregate quantity QG of goods sold, or that of fixed
assets, the QK? How can we distil those aggregate
quantities out of the facts of life? If anyone in their right mind thinks about
the enormous diversity of the goods we trade, and the assets we use, how can we
even set a common scale of measurement? Can we add up kilograms of BMW cars
with kilograms of food consumed, and use it as denominator for kilograms of
robots summed up with kilograms of their operating software?
This
is a mathematical trick, yet a useful one. When we think about any set of
transactions we make, whether we buy milk or machines for a factory, we can
calculate some kind of weighted average price in those transactions. When I
spend $1 000 000 on a team of robots, bought at unitary price P(robot),
and $500 000 on their software bought at price P(software),
the arithmetical operation P(robot)*[$1 000 000 /
($1 000 000 + $500 000)] + P(software)*[$500 000 /
($1 000 000 + $500 000)] will yield a weighted average
price P(robot; software) made in one third of the price of
software, and in two thirds of the price of robots. Mathematically, this
operation is called factorisation, and we use it when we suppose the existence
of a common, countable factor in a set of otherwise distinct phenomena. Once we
suppose the existence of recurrent transactional prices in anything humans do,
we can factorise that anything as Price Multiplied By Quantity,
or P*Q. Thus, although we cannot really add up kilograms of
factories with kilograms of patents, we can factorise their respective prices
out of the phenomenon observed and write PK*QK.
In this approach, quantity Q is a semi-metaphysical category,
something like a metaphor for the overall, real amount of the things we have,
make and do.
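The weighted-average-price trick, for the robots-and-software example above, can be spelled out as follows. The unit prices are hypothetical; only the $1 000 000 / $500 000 spending split comes from the text.

```python
def weighted_average_price(spend_and_prices):
    """Average unit price, weighted by each item's share in total spending.

    `spend_and_prices` is a list of (total_spend, unit_price) pairs.
    """
    total = sum(spend for spend, _ in spend_and_prices)
    return sum(price * spend / total for spend, price in spend_and_prices)

# $1 000 000 on robots at a hypothetical $100 000 apiece,
# $500 000 on software at a hypothetical $50 000 per licence
p = weighted_average_price([(1_000_000, 100_000), (500_000, 50_000)])
print(round(p, 2))  # two thirds of the robot price plus one third of the software price
```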
Keeping
those explanations in mind, let’s have a look at the empirical representation
of coefficient A, as computed according to equation (2), on the
grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015[1]), and represented
graphically in Figure I_1 below. The database known as Penn
Tables provides direct information about three big components of equation (2):
the basic rate of amortization, the nominal value of fixed assets, and the
nominal value of Gross Domestic Product (GDP) for each of the 182 national
economies covered. One of the possible ways of thinking about the wealth of a
nation is to compute the value of all the final goods and services made by said
nation. According to the logic presented in the preceding paragraph, whilst the
whole basket of final goods is really diversified, it is possible to nail down
a weighted, average transactional price P for all that lot, and,
consequently, to factorise the real quantity Q out of it. Hence,
the GDP of a country can be seen as a very rough approximation of value added
created by all the businesses in that territory, and changes over time in the
GDP as such can be seen as representative for changes in the aggregate revenue
of all those businesses.
Figure
I_1 introduces two metrics, pertinent to the empirical
unfolding of equation (2) over time and across countries. The continuous line
shows the arithmetical average of local, national coefficients A
across the whole sample of countries. The line with square markers represents
the standard deviation of those national coefficients from the average
represented by the continuous line. Both metrics are based on the nominal
computation of the coefficient A for each year in each given
national economy, thus in current prices for each year from 1950 through 2017.
Equation (2) gives many possibilities of change in the coefficient A
– including changes in the proportion between the price PG
of final goods, and the market price PK of fixed
assets – and the nominal computation used in Figure I_1 captures that factor as
well.
[Figure
I_1_Coefficient of amortization in GDP, nominal, world, trend]
In
1950, the average national coefficient A, calculated as specified
above, was equal to 6,7%. In 2017, it climbed to A = 20,8%. In other words, the
average entrepreneur in 1950 would pay less than one tenth of their revenues to
amortize the depreciation of technological assets, whilst in 2017 it was more
than one fifth. This change in proportion can encompass many phenomena. It can
reflect the pace of scientific change as such, or just a change in entrepreneurial
behaviour as regards the strategies of amortization, explained above. Show
business is a good example. Content is an asset for television stations, movie
makers or streaming services. Content assets age, and some of them age very
quickly. Take the tonight news show on any TV channel. Today’s news is
much less newsworthy tomorrow, and definitely not news at all a month later. If
you have a look at annual financial reports of TV broadcasters, such as the
American classic of the industry, CBS Corporation[1], you will see insane
nominal amounts of amortization in their cash flow statements. Thus, the
ascending trend of average coefficient A, in Figure I_1, could be, at least
partly, the result of growth in the amount of content assets held by various
entities in show business. It is a good thing to deconstruct that compound
phenomenon into its component factors, which is being undertaken further below.
Still, before the deconstruction takes place, it is good to have an inquisitive
look at the second curve in Figure I_1, the square-marked one, representing
standard deviation of coefficient A across countries.
In
common interpretation of empirical numbers, we almost intuitively lean towards
average values, as the expected ones in a large set, and yet the standard
deviation has a peculiar charm of its own. If we compare the paths followed by
the two curves in Figure I_1, we can see them diverge: the average A
goes resolutely up whilst the standard deviation in A stays
almost stationary in its trend. In the 1950s or 1960s, the relative burden
of amortization upon the GDP of individual countries was almost twice as
disparate as it is today. In other words, back in the day it mattered much
more where exactly our technological assets were located. Today, it matters
less. National economies seem to be converging in their ways of sourcing
current, operational cash flow to provide for the depreciation of incumbent
technologies.
Getting
back to science, and thus back to empirical facts, let’s have a look at two
component phenomena of trends sketched in Figure I_1: the pace of scientific
invention, and the average lifecycle of assets. As for the former, the
coefficient of patent applications per 1 mln people, sourced from the World
Bank[2], is used as representative
metric. When we invent an original solution to an existing technological
problem, and we think we could make some money on it, we have the option of
applying for legal protection of our invention, in the form of a patent.
Acquiring a patent is essentially a three-step process. Firstly, we file the
so-called patent application to the patent office adequate for the given
geographical jurisdiction. Then, the patent office publishes our application,
calling out for anyone who has grounds for objecting to the issuance of the patent,
e.g. someone we used to do research with, hand in hand, until hands parted at
some point in time. As a matter of fact, many such disputes arise, which makes
patent applications much more numerous than actually granted patents. If you
check patent data, granted patents define the currently appropriated territories
of intellectual property, whilst patent applications are pretty much
informative about the current state of applied science, i.e. about the path
this science takes, and about the pressure it puts on business people towards
refreshing their technological assets.
Figure
I_2 below shows the coefficient of patent applications per 1 mln people in the
global economy. The shape of the curve is interestingly similar to that of
average coefficient A, shown in Figure I_1, although it covers a
shorter span of time, from 1985 through 2017. At first sight, it seems to make
sense: more and more patentable inventions per 1 million humans, on average,
put more pressure on replacing old assets with new ones. Yet, that first sight
may be misleading. Figure I_3, further below, shows the average lifecycle of
fixed assets in the global economy. This particular metric is once again
calculated on the grounds of data available in Penn Tables 9_1 (Feenstra et al.
2015 op. cit.). The database, strictly speaking, contains a variable called
‘delta’, which is the basic rate of amortization in fixed assets, i.e. the
percentage of their book value commonly written off the income tax base as
provision for depreciation. This is factor ‘a’ in equation (1),
presented earlier, and reflects the expected lifecycle of assets. The inverted
value ‘1/a’ gives the exact value of that lifecycle in years,
i.e. the variable ‘F’ in equation (2). Here comes the big
surprise: although the lifecycle ‘F’, computed as an average for
all the 182 countries in the database, does display a descending trend, the
descent is much gentler, and much more cyclical than what we could expect after
having seen the trend in nominal burden A of amortization, and in
the occurrence of patent applications. Clearly, there is a push from science
upon businesses towards shortening the lifecycle of their assets, but
businesses do not necessarily yield to that pressure.
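The arithmetic behind that lifecycle metric is simple enough to sketch in a few lines of code. This is just an illustration of the 'F = 1/a' relation from equations (1) and (2), with made-up amortization rates rather than the actual Penn Tables 'delta' values:

```python
# The expected lifecycle of fixed assets, F, is the inverse of the
# amortization rate 'a' (the Penn Tables variable 'delta'), i.e. the
# share of book value written off yearly as provision for depreciation.

def lifecycle_years(a: float) -> float:
    """Return the expected lifecycle F = 1/a, in years."""
    if a <= 0:
        raise ValueError("the amortization rate must be positive")
    return 1.0 / a

# Made-up example rates, not actual Penn Tables observations:
for a in (0.04, 0.05, 0.08):
    print(f"a = {a:.2f}  ->  F = {lifecycle_years(a):.1f} years")
```

A 5% yearly write-off thus implies assets expected to live for 20 years.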
[Figure
I_2_Patent Applications per 1 mln people]
Here
comes a riddle. The intuitive assumption that growing scientific input provokes
a shorter lifespan in technological assets proves too general. It obviously
does not encompass the whole phenomenon of increasingly cash-consuming
depreciation in fixed assets. There is something else. After having cast a
look at the ‘1/F’ component
factor of equation (2), let’s move to the
(PK*QK)/(PG*QG)
one. Penn Tables 9.1 provide two variables that allow calculating it: the
aggregate value of fixed assets in national economies, at current prices, and
the GDP of those economies, in current prices as well. Interestingly, those two
variables are provided in two versions each: one at constant prices of 2011,
the other at current prices. Before the consequences of that dual observation
are discussed, let’s recall some basic arithmetic: we can rewrite (PK*QK)/(PG*QG)
as (PK/PG)*(QK/QG).
The (PK/PG) component fraction corresponds
to the proportion between weighted average prices in, respectively, fixed
assets (PK), and final goods (PG).
The other part, i.e. (QK/QG) stands for the
proportion between aggregate quantities of assets and goods. Whilst we refer
here to that abstract concept of aggregate quantities, observable only as
something mathematically factorized out of something really empirical, there is
method to that madness. How big a factory do we need to make 20 000 cars a
month? How big a server do we need in order to stream 20 000 hours of
films and shows a month? Seen from this angle, the proportion (QK/QG) is much more real. When both the aggregate
stock of fixed assets in national economies, and the GDP of those economies are
expressed in current prices, both the (PK/PG)
factor, and the (QK/QG) really change over
time. What is observed (analytically) is the full (PK*QK)/(PG*QG)
coefficient. Yet, when prices are constant, the (PK/PG)
component factor does not actually change over time; what really changes is
just the proportion between aggregate quantities of assets and goods.
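The factorisation above can be checked numerically. The figures below are made up purely for illustration; only the identity itself matters:

```python
# Rewriting the nominal coefficient (PK*QK)/(PG*QG) as the product of a
# price component (PK/PG) and a quantity component (QK/QG).

PK, QK = 1.1, 400.0  # hypothetical price and aggregate quantity of fixed assets
PG, QG = 1.0, 100.0  # hypothetical price and aggregate quantity of final goods

full_coefficient = (PK * QK) / (PG * QG)
price_component = PK / PG
quantity_component = QK / QG

# The product of the two component fractions recovers the full coefficient:
assert abs(full_coefficient - price_component * quantity_component) < 1e-12
print(full_coefficient, price_component, quantity_component)
```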
The
factorisation presented above allows another trick at the frontier of
arithmetic and economics. The trick consists in using creatively two types of
economic aggregates, commonly published in publicly available databases:
nominal values as opposed to real values. The former category represents
something like P*Q, or price multiplied by quantity. The latter
is supposed to have kicked prices out of the equation, i.e. to represent just
quantities. With those two types of data we can do something opposite to the
procedure presented earlier, which serves to distil real quantities out of
nominal values. This time, we have externally provided products ‘price times
quantity’, and just quantities. Logically, we can extract prices out of the
nominal values. When we have two
coefficients given in the Penn Tables 9.1 database – the full (PK*QK)/(PG*QG)
(current prices) and the partial (QK/QG) (constant
prices) – we can develop the following equation: [(PK*QK)/(PG*QG)]/
(QK/QG) = PK/PG. We can use the really observable proportion
between the nominal value of fixed assets and that of Gross Domestic Product,
divide it by the proportion between real quantities of, respectively assets and
final goods, in order to calculate the proportion between weighted average
prices of assets and goods.
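That extraction of prices can be written down as a one-liner. Treating two of the 2017 averages discussed in this update (a nominal coefficient of 4.617 and a real one of 4.027) as inputs is my own illustration:

```python
# PK/PG = [(PK*QK)/(PG*QG)] / (QK/QG): the price proportion extracted from
# the nominal (current-price) and real (constant-price) coefficients.

def price_proportion(nominal_ratio: float, real_ratio: float) -> float:
    """Divide the nominal capital-to-GDP ratio by the real quantity ratio."""
    return nominal_ratio / real_ratio

# A nominal ratio of 4.617 and a real ratio of 4.027 imply an
# asset-to-goods price proportion of roughly 1.15:
print(round(price_proportion(4.617, 4.027), 3))
```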
Figure
I_4, below, attempts to represent all those three
phenomena – the change in nominal values, the change in real quantities, and
the change in prices – in one graph. As different magnitudes of empirical
values are involved, Figure I_4 introduces another analytical method, namely indexation
over constant denominator. When we want to study temporal trends in
values, which are either measured with different units or display very
different magnitudes, we can choose one point in time as the peg value for each
of the variables involved. In the case of Figure I_4, the peg year is 2011, as
Penn Tables 9.1 use 2011 as reference year for constant prices. Aggregate
values of capital stock and national GDP, when measured in constant prices, are
measured in the prices of the year 2011. For each of the three variables
involved – the nominal proportion of capital stock to GDP (PK*QK)/(PG*QG),
the real proportion thereof QK/QG
and the proportion between the prices of
assets and the prices of goods PK/PG – we
take their values in 2011 as denominators for the whole time series. Thus, for
example, the nominal proportion of capital stock to GDP in 1990 is the quotient
of the actual value in 1990 divided by the value in 2011 etc. As a result, we
can study each of the three variables as if the value in 2011 was equal to
1.00.
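The indexation method can be sketched as follows; the series below is made up, and only the peg-year convention itself reflects what Figure I_4 does:

```python
# Indexation over constant denominator: divide every value of a series by
# that series' own value in a chosen peg year, so that series of different
# units or magnitudes all equal 1.00 in the peg year and become comparable.

def index_to_peg(series: dict, peg_year: int) -> dict:
    peg = series[peg_year]
    return {year: value / peg for year, value in series.items()}

# Hypothetical values, with 2011 as the peg year:
nominal_ratio = {1990: 2.5, 2011: 3.868, 2017: 4.617}
indexed = index_to_peg(nominal_ratio, peg_year=2011)
print(indexed[2011])  # equals 1.0 by construction
```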
[Figure
I_4 Comparative indexed trends in the proportion between the national capital
stock and the GDP]
The
indexed trends thus computed are global averages across the database, i.e. averages
of national values computed for individual countries. The continuous blue line
marked with red triangles represents the nominal proportion between the national
stocks of fixed assets, and the respective GDP of each country, or the full (PK*QK)/(PG*QG)
coefficient. It has been consistently climbing since 1950, and since the mid-1980s
the slope of that climb seems to have increased. Just to give a glimpse of actual
non-indexed values, in 1950 the average (PK*QK)/(PG*QG)
coefficient was 1.905, in 1985 it reached 2.197, in the reference year 2011 it
went up to 3.868, to end up at 4.617 in 2017. The overall shape of the curve
strongly resembles that observed earlier in the coefficient of patent applications
per 1 mln people in the global economy, and in another indexed trend found in
Figure I_4, that of the price coefficient PK/PG.
Starting from 1985, that latter
proportion seems to be following almost perfectly the trend in patentable
invention, and its actual, non-indexed values seem to be informative about a
deep change in business in connection with technological change. In 1950, the
proportion between average weighted prices of fixed assets, and those of final
goods was PK/PG = 0.465, and even in the middle
of the 1980s it kept roughly the same level, PK/PG
= 0.45. To put it simply, fixed assets were half as expensive as final goods, per
unit of quantity. Yet, since 1990, something changed: that proportion started
to grow: productive assets became more and more valuable in comparison
to the market prices of the goods they served to make. In 2017, PK/PG
reached 1.146. From a world where technological assets were just tools to make
final goods we moved into a world, where technologies are goods in themselves.
If we look carefully at digital technologies, nanotechnologies or at biotech, this
general observation strongly holds. A new molecule is both a tool to make
something, and a good in itself. It can make a new drug, and it can be a new
drug. An algorithm can create value added as such, or it can serve to make another
value-creating algorithm.
Against
that background of unequivocal change in the prices of technological assets, and
in their proportion to the Gross Domestic Product of national economies, we can
observe a different trend in the proportion of quantities: QK/QG.
Hence, we return to questions such as ‘How big a factory do we need in order to
make the amount of final goods we want?’. The answer to that type of question
takes the form of something like a long business cycle, with a peak in 1994, at
QK/QG = 5.436. The presently observed QK/QG
(2017) = 4.027 looks relatively modest
and is very similar to the value observed in the 1950s. Seventy years ago, we
used to be a civilization, which needed around 4 units of quantity in technological
assets to make one unit of quantity in final goods. Then, starting from the mid-1970s,
we started turning into a more and more technology-intensive culture, with more
and more units of quantity in assets required to make one unit of quantity in
final goods. In the mid-1990s, that asset-intensity reached its peak, and now
it is back at the old level.
[1] Feenstra, Robert C., Robert Inklaar
and Marcel P. Timmer (2015), “The Next Generation of the Penn World
Table” American Economic Review, 105(10), 3150-3182, available for
download at www.ggdc.net/pwt
I
noticed it has been a month since I last posted anything on my blog. Well, been
doing things, you know. Been writing, and thinking in the process. I am
forming a BIG question in my mind, a question I want to answer: how are we
going to respond to climate change? Among all the possible scenarios of such
response, which are we the most likely to follow? When I have a look, every now
and then, at Greta Thunberg’s astonishingly quick social ascent, I wonder why
we are so divided about something apparently so simple. To be clear: this is
not a rhetorical question on my part. Maybe I should claim something like: ‘We
just need to get all together, hold our hands and do X, Y, Z…’. Yes, in a
perfect world we would do that. Still, in the world we actually live in, we don’t.
Does it mean we are collectively stupid as a baseline, and just some enlightened
individuals can sometimes see the truly rational path ahead? Might
be. Yet, another view is possible. We might be doing apparently dumb things
locally, and those apparent local flops could sum up to something quite
sensible at the aggregate scale.
There
is some science behind that intuition, and some very provisional observations. I
finally (and hopefully) nailed down the revision of the
article on energy efficiency. I have already started
developing this topic in my last update, entitled ‘Knowledge
and Skills’, and now it is done. I have just revised the
article, quite deeply, and in the process I hatched a methodological
paper, which I submitted to MethodsX.
As I want to develop a broader discussion on these two papers, without
repeating their contents, I invite my readers to get acquainted with their PDFs,
via the archives of my blog. Thus, by clicking the title Energy
Efficiency as Manifestation of Collective Intelligence in Human Societies,
you can access the subject matter paper on energy efficiency, and clicking on Neural
Networks As Representation of Collective Intelligence
will take you to the methodological article.
I
think I know how to represent, plausibly, collective intelligence with
artificial intelligence. I am showing the essential concept in the picture
below. Thus, I start with a set of empirical data, describing a society. Much
in the lines of what I have been writing on this blog since early spring this
year, I assume the dataset consists of quantitative variables, e.g. GDP per capita,
schooling indicators, the probability for an average person to become a mad
scientist etc. What is the meaning of those variables? Most of all, they exist
and change together. Banal, but true. In other words, all that stuff represents
the cumulative outcome of past, collective action and decision-making.
I
decided to use the intellectual momentum, and I used the same method with a
different dataset, and a different set of social phenomena. I took Penn Tables
9.1 (Feenstra et al. 2015[1]), thus a well-known base
of macroeconomic data, and I followed the path sketched in the picture below.
Long
story short, I have two big surprises. When I look at energy efficiency and
its determinants, it turns out energy efficiency is not really the chief outcome
pursued by the 59 societies studied: they care much more about the local, temporary
proportions between capital immobilised in fixed assets, and the number of
resident patent applications. More specifically, they seem to be principally
optimizing the coefficient of fixed assets per 1 patent application. That is
quite surprising. It sends me back to my peregrinations through the land of
evolutionary theory (see for example: My
most fundamental piece of theory).
When
I take a look at the collective intelligence (possibly) embodied in Penn Tables
9.1, I can see this particular collective wit aiming at optimizing the share of
labour in the proceeds from selling real output in the first place. Then,
almost immediately after, comes the average number of hours worked per person
per year. You can click on
this link and read the full manuscript I have just submitted to
the Quarterly Journal of Economics.
Wrapping
it (provisionally) up: I did some social science with the assumption of
collective intelligence in human societies taken at the level of methodology, and
I got truly surprising results. That
thing about energy efficiency – i.e. the fact that when in presence of some
capital in fixed assets, and some R&D embodied in patentable inventions, we
seem to care about energy efficiency only secondarily – is really mind-blowing. I
had already done some research on energy as factor of social change, and,
whilst I have never been really optimistic about our collective capacity to
save energy, I assumed that we orient ourselves, collectively, towards some kind of
energy balance. Apparently, we do so only when we have nothing else to pay
attention to. On the other hand, the
collective focus on macroeconomic variables pertinent to labour, rather
than prices and quantities, is just as gob-smacking. All economic education,
when you start with Adam Smith and take it from there, assumes that economic
equilibriums, i.e. those special states of society when we are sort of in balance
among many forces at work, are built around prices and quantities. Still, in that
research I have just completed, the only kind of price my neural network can build
a plausibly acceptable learning around is the average price level in international
trade, i.e. in exports and in imports. As for all the prices which I have been
taught, and which I have taught, as the cornerstones of economic equilibrium, like
prices in consumption or prices in investment: when I peg them as output
variables of my perceptron, the incriminated perceptron goes dumb like hell and
yields negative economic aggregates. Yes, babe: when I make my neural network
pay attention to price level in investment goods, it comes to the conclusion
that the best idea is to have negative national income, and negative population.
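To give a feel for what pegging a variable as the output of a perceptron means, here is a minimal, self-contained sketch. It uses synthetic data and a plain linear unit trained by gradient descent; it is emphatically not the network from the manuscript, just an illustration of the procedure:

```python
import numpy as np

# One variable is pegged as the output; the remaining standardized variables
# are the inputs. The unit learns weights that reproduce the pegged output.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # three standardized input variables
true_w = np.array([0.5, -0.2, 0.1])     # synthetic data-generating weights
y = X @ true_w                          # the pegged output variable

w = np.zeros(3)
for _ in range(2000):                   # gradient descent on mean squared error
    gradient = X.T @ (X @ w - y) / len(y)
    w -= 0.1 * gradient

print(np.round(w, 3))                   # converges towards [0.5, -0.2, 0.1]
```

Whether such a network "accepts" a pegged output then shows up in how sensible the learned parameters and implied aggregates are.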
Returning
to the issue of climate change and our collective response to it, I am trying
to connect my essential dots. I have just served some well-cooked science,
and now it is time to bite into some raw one. I am biting into facts which I
cannot explain yet, like not at all. Did you know, for example, that there are
more and more adult people dying in high-income countries, like per 1000, since
2014? You can consult the data available from the World Bank, as regards the
mortality of men and that
in women. Infant mortality is generally falling, just as adult mortality in
low, and middle-income countries. It is just about adult people in wealthy
societies categorized as ‘high income’: there are more and more of them dying per
1000. Well, I should maybe say ‘more of us’, as I am 51, and relatively
well-off, thank you. Anyway, all the way up through 2014, adult mortality in high-income
countries had been consistently subsiding, reaching its minimum in 2014 at 57.5
per 1000 in women, and 103.8 in men. In 2016, it went up to 60.5 per 1000 in
women, and 107.5 in men. It seems counter-intuitive. High-income countries are
the place where adults are technically exposed to the least fatal hazards. We
have virtually no wars in high-income places, we have food in abundance, we enjoy reasonably
good healthcare systems, so WTF? As regards low-income countries, we could
claim that adults who die are relatively the least fit for survival, but what
do you want to be fit for in high-income places? Driving a Mercedes around? Why
did it start to revert in 2014?
Intriguingly,
high-income countries are also those where the difference in adult mortality
between men and women is the most pronounced: in men, almost double what
is observable in women. Once again, it is counter-intuitive. In low-income
countries, men are more exposed to death in battle, or to extreme conditions,
like work in mines. Still, in high-income countries, such hazards are remote.
Once again, WTF? Someone could say: it is about natural selection, about
eliminating the weak genetics. Could be, and yet not quite. Elimination of weak
genetics takes place mostly through infant mortality. Once we make it through
the first 5 years of our existence, the riskiest part is over. Adult mortality
is mostly about recycling used organic material (i.e. our bodies). Are human
societies in high-income countries increasing the pace of that recycling? Why
since 2015? Is it more urgent to recycle used men than used women?
There
is one thing about 2015, precisely connected to climate change. As I browsed some
literature about droughts in Europe and their possible impact on agriculture (see
for example All
hope is not lost: the countryside is still exposed), it turned out that
2015 was precisely the year when we started to sort of officially admit
that we have a problem with agricultural droughts on our continent. Even more
interestingly, 2014 and 2015 seem to have been the turning point when aggregate
damages from floods, in Europe, started to curve down after something like two
decades of progressive increase. We swapped one calamity for another one, and
starting from then, we started to recycle used adults at a more rapid pace. Of
course, most of Europe belongs to the category of high-income countries.
See?
That’s what I call raw science about collective intelligence. Observation with
a lot of questions, and only a very remote idea as to the method of answering them. Something
is apparently happening, maybe we are collectively intelligent in the process, and
yet we don’t know exactly how (we are collectively intelligent). It is possible
that we are not. Warmer climate is associated with greater prevalence of
infectious diseases in adults (Amuakwa-Mensah
et al. 2017[1]),
for example, and yet it does not explain why greater adult mortality is happening
in high-income countries. Intuitively, infections attack where people are
poorly shielded against them, thus in countries with frequent incidence of
malnutrition and poor sanitation, thus in the low-income ones.
I
am consistently delivering good, almost new science to my readers, and love
doing it, and I am working on crowdfunding this activity of mine. You can
communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com.
As we talk business plans, I remind you that you can download, from the library
of my blog, the business
plan I prepared for my semi-scientific project Befund (and you can access
the French version as well). You can also get a free e-copy of my book ‘Capitalism
and Political Power’. You can support my research by donating directly,
any amount you consider appropriate, to my PayPal account.
You can also consider going to my Patreon
page and become my patron. If you decide so, I will be
grateful if you suggest two things that Patreon suggests I suggest to you.
Firstly, what kind of reward would you expect in exchange for supporting me?
Secondly, what kind of phases would you like to see in the development of my
research, and of the corresponding educational tools?
[1] Amuakwa-Mensah, F., Marbuah,
G., & Mubanga, M. (2017). Climate variability and infectious diseases
nexus: Evidence from Sweden. Infectious Disease Modelling, 2(2),
203-217.
[1] Feenstra, Robert C., Robert Inklaar
and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table”
American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt
Once
again, I break my rhythm. Mind you, it happens a lot this year. Since January,
it is all about breaking whatever rhythm I have had so far in my life. I am
getting used to the unusual, and I think it is a good thing. Now, I am breaking the
usual rhythm of my blogging. Normally, I have been alternating updates in
English with those in French, like one to one, with a pinchful of writing in my
mother tongue, Polish, every now and then. Right now, two urgent tasks require
my attention: I need to prepare new syllabuses
for English-taught courses in the upcoming academic year, and to revise my draft article on the energy
efficiency of national economies.
Before
I attend to those tasks, however, a little bit of extended reflection on goals
and priorities in my life, somehow in the lines of my last update, « It might be a sign of narcissism
». I have just gotten back from Nice, France, where my son has just started his
semester of Erasmus + exchange, with the Sophia Antipolis University. In my
youth, I spent a few years in France, I went many times to France since, and
man, this time, I just felt the same, very special and very French kind of human
energy, which I remember from the 1980s. Over the last 20 years or so, the
French seemed to have been sleeping inside their comfort zone, but now, I
can see people who have just woken up and are wondering what the hell they had
wasted so much time on, and they are taking double strides to gather speed in
terms of social change. This is the innovative, brilliant, positively cocky
France I love. There is sort of a social pattern in France: when the French get
vocal, and possibly violent, in the streets, they are up to something as a
nation. The French Revolution in 1789 was an expression of popular discontent,
yet what followed was not popular satisfaction: it was one-century-long
expansion on virtually all fronts: political, military, economic, scientific
etc. Right now, France is just past the peak of the Yellow Vests protest, which
one of my French students devoted an essay to (see « Carl Lagerfeld and some guest
blogging from Emilien Chalancon, my student »). I
wonder who will be the Napoleon Bonaparte of our times.
When
entire nations are up to something, it is interesting. Dangerous, too, and yet
interesting. Human societies are, as a rule, the most up to something as
regards their food and energy base, and so I come to that revision of my
article. Here, below, you will find the letter of review I received from the
journal “Energy” after I submitted the initial manuscript, referenced
as Ms. Ref. No.: EGY-D-19-00258. The link to my manuscript can be found in the
first paragraph of this update. For those of you who are making your first
steps in science, it can be an illustration of what ‘scientific dialogue’
means. Further below, you will find a first sketch of my revision, accounting
for the remarks from reviewers.
Thus,
here comes the LETTER OF REVIEW (in italic):
Ms. Ref. No.: EGY-D-19-00258
Title:
Apprehending energy efficiency: what is the cognitive value of hypothetical
shocks? Energy
Dear
Dr. Wasniewski,
The
review of your paper is now complete, the Reviewers’ reports are below. As you
can see, the Reviewers present important points of criticism and a series of
recommendations. We kindly ask you to consider all comments and revise the
paper accordingly in order to respond fully and in detail to the Reviewers’
recommendations. If this process is completed thoroughly, the paper will be
acceptable for a second review.
If
you choose to revise your manuscript it will be due into the Editorial Office
by the Jun 23, 2019
Once
you have revised the paper accordingly, please submit it together with a
detailed description of your response to these comments. Please, also include a
separate copy of the revised paper in which you have marked the revisions made.
Please
note if a reviewer suggests you to cite specific literature, you should only do
so if you feel the literature is relevant and will improve your paper.
Otherwise please ignore such suggestions and indicate this fact to the handling
editor in your rebuttal.
When
submitting your revised paper, we ask that you include the following items:
Manuscript
and Figure Source Files (mandatory):
We
cannot accommodate PDF manuscript files for production purposes. We also ask
that when submitting your revision you follow the journal formatting
guidelines. Figures and tables may be embedded within the source file for the
submission as long as they are of sufficient resolution for Production. For any
figure that cannot be embedded within the source file (such as *.PSD Photoshop
files), the original figure needs to be uploaded separately. Refer to the Guide
for Authors for additional information. http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors
Highlights
(mandatory):
Highlights
consist of a short collection of bullet points that convey the core findings of
the article and should be submitted in a separate file in the online submission
system. Please use ‘Highlights’ in the file name and include 3 to 5 bullet
points (maximum 85 characters, including spaces, per bullet point). See the
following website for more information
Data
in Brief (optional):
We
invite you to convert your supplementary data (or a part of it) into a Data in
Brief article. Data in Brief articles are descriptions of the data and
associated metadata which are normally buried in supplementary material. They
are actively reviewed, curated, formatted, indexed, given a DOI and freely
available to all upon publication. Data in Brief should be uploaded with your
revised manuscript directly to Energy. If your Energy research article is
accepted, your Data in Brief article will automatically be transferred over to
our new, fully Open Access journal, Data in Brief, where it will be editorially
reviewed and published as a separate data article upon acceptance. The Open
Access fee for Data in Brief is $500.
Then,
place all Data in Brief files (whichever supplementary files you would like to
include as well as your completed Data in Brief template) into a .zip file and
upload this as a Data in Brief item alongside your Energy revised manuscript.
Note that only this Data in Brief item will be transferred over to Data in
Brief, so ensure all of your relevant Data in Brief documents are zipped into a
single file. Also, make sure you change references to supplementary material in
your Energy manuscript to reference the Data in Brief article where
appropriate.
If
you have questions, please contact the Data in Brief publisher, Paige Shaklee
at dib@elsevier.com
In
order to give our readers a sense of continuity and since editorial procedure
often takes time, we encourage you to update your reference list by conducting
an up-to-date literature search as part of your revision.
On
your Main Menu page, you will find a folder entitled “Submissions Needing
Revision”. Your submission record will be presented here.
MethodsX
file (optional)
If
you have customized (a) research method(s) for the project presented in your
Energy article, you are invited to submit this part of your work as MethodsX
article alongside your revised research article. MethodsX is an independent
journal that publishes the work you have done to develop research methods to
your specific needs or setting. This is an opportunity to get full credit for
the time and money you may have spent on developing research methods, and to
increase the visibility and impact of your work.
2)
Place all MethodsX files (including graphical abstract, figures and other
relevant files) into a .zip file and
upload
this as a ‘Method Details (MethodsX) ‘ item alongside your revised Energy
manuscript. Please ensure all of your relevant MethodsX documents are zipped
into a single file.
3)
If your Energy research article is accepted, your MethodsX article will
automatically be transferred to MethodsX, where it will be reviewed and
published as a separate article upon acceptance. MethodsX is a fully Open
Access journal, the publication fee is only 520 US$.
Include
interactive data visualizations in your publication and let your readers
interact and engage more closely with your research. Follow the instructions
here: https://www.elsevier.com/authors/author-services/data- visualization to
find out about available data visualization options and how to include them
with your article.
MethodsX
file (optional)
We
invite you to submit a method article alongside your research article. This is
an opportunity to get full credit for the time and money you have spent on
developing research methods, and to increase the visibility and impact of your
work. If your research article is accepted, your method article will be
automatically transferred over to the open access journal, MethodsX, where it
will be editorially reviewed and published as a separate method article upon
acceptance. Both articles will be linked on ScienceDirect. Please use the
MethodsX template available here when preparing your article:
https://www.elsevier.com/MethodsX-template. Open access fees apply.
Reviewers’
comments:
Reviewer
#1: The paper is, at least according to the title of the paper, and attempt to
‘comprehend energy efficiency’ at a macro-level and perhaps in relation to
social structures. This is a potentially a topic of interest to the journal
community. However and as presented, the paper is not ready for publication for
the following reasons:
1.
A long introduction details relationship and ‘depth of emotional entanglement
between energy and social structures’ and concomitant stereotypes, the issue
addressed by numerous authors. What the Introduction does not show is the
summary of the problem which comes out of the review and which is consequently
addressed by the paper: this has to be presented in a clear and articulated way
and strongly linked with the rest of the paper. In simplest approach, the paper
does demonstrate why are stereotypes problematic. In the same context, it
appears that proposed methodology heavily relays on MuSIASEM methodology which
the journal community is not necessarily familiar with and hence has to be
explained, at least to the level used in this paper and to make the paper
sufficiently standalone;
2.
The assumptions used in formulating the model have to be justified in terms of what
they affect and how, regarding the understanding of the link/interaction between social structures
and the function of energy (generation/use), and also why the assumptions are formulated
in the first place. It is also important here to explicitly articulate what the
proposed model aims to achieve: as presented, this becomes
clear only towards the end of the paper. A more fundamental question is what the
difference is between the model presented here and those in other publications by the
author: this has to be clearly explained.
3.
The presented empirical tests and concomitant results are again detached from
reality, for i) the problem is not explicitly formulated, and ii) the real-life
interpretation of the results is not clear.
On
the practical side, the paper needs:
1.
To conform to the style of writing adopted by the journal, including referencing;
2.
All figures have to have captions and to be referred to by them;
3.
English needs improvement.
Reviewer
#2: Please find the attached file.
Reviewer
#3: The article has cognitive value. The author has made a deep analysis of
the literature. Methodologically, the article does not raise any objections.
However, getting acquainted with its content, I wonder why the analysis does
not take into account changes in legal provisions. In the countries of the
European Union, energy efficiency is one of the pillars of shaping energy
policy. Does this variable have no impact on improving energy efficiency?
When
reading the article, one gets the impression that the author has prepared it for
editing in another journal. Its editing is incorrect! Line 13, page 10, error –
unwanted semicolon.
Now,
A FIRST SKETCH OF MY REVISION.
There
are the general, structural suggestions from the editors, notably to outline my
method of research, and to discuss my data, in separate papers. After that come
the critical remarks properly speaking, with a focus on explaining – more
clearly than I did in the manuscript – the assumptions of my model, as well
as its connections with the MuSIASEM model. I start with my method, and it is
an interesting exercise in introspection. I did the empirical research quite a
few months ago, and now I need to look at it from a distance, objectively. Doing
well at this exercise amounts, by the way, to phrasing my
assumptions accurately. I start with my fundamental variable, i.e. the so-called energy
efficiency, measured as the value of real output (i.e. the value of goods and
services produced) per unit of energy consumed, measured in kilograms of oil
equivalent. It is like: energy efficiency = GDP / energy consumed.
In
my mind, that coefficient is actually a coefficient of coefficients, more
specifically: GDP / energy consumed = [GDP per capita] / [energy consumed
per capita] = [GDP / population] / [energy consumed / population].
Why so? Well, I assume that when any of us, humans, wants to have a meal, we
generally don’t put our fingers in the nearest electric socket. We consume
energy indirectly, via the local combination of technologies. The same local
combination of technologies makes our GDP. Energy efficiency measures two ends
of the same technological toolbox: its intake of energy, and its outcomes in terms
of goods and services. Changes over time in energy efficiency, as well as its
disparity across space depend on the unfolding of two distinct phenomena: the
exact composition of that local basket of technologies, like the overall heap of
technologies we have stacked up in our daily life, for one, and the efficiency
of individual technologies in the stack, for two. Here, I remember a model I
got to know in management science, precisely about how the efficiency changes
with new technologies supplanting the older ones. Apparently, a freshly implemented,
new technology is always less productive than the one it is kicking out of
business. Only after some time, when people learn how to use that new thing
properly, does it start yielding net gains in productivity. At the end of the day,
when we change our technologies frequently, there could very well not be any
gain in productivity at all, as we are constantly going through consecutive
phases of learning. Anyway, I see the coefficient of energy efficiency at
any given time in a given place as the cumulative outcome of past collective
decisions as for the repertoire of technologies we use.
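Just to fix the idea, here is the arithmetic of that ‘coefficient of coefficients’ sketched in a few lines of Python. The national aggregates below are purely hypothetical numbers, chosen only to illustrate the identity:

```python
# Hypothetical national aggregates, just to illustrate the identity:
gdp = 5.0e11          # real output, in constant US dollars
energy = 9.0e10       # energy consumed, in kg of oil equivalent (kgoe)
population = 3.8e7    # number of inhabitants

efficiency = gdp / energy          # 'energy efficiency': USD per kgoe
gdp_per_capita = gdp / population
energy_per_capita = energy / population

# Same number, two readings: one coefficient, or a ratio of two
# per-capita coefficients, since population cancels out.
assert abs(efficiency - gdp_per_capita / energy_per_capita) < 1e-9
print(round(efficiency, 2))  # → 5.56
```

The assertion is the whole point: the headline coefficient carries no information beyond the equilibrium between the two per-capita coefficients.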
That
is the first big assumption I make, and the second one comes from the
factorisation: GDP / energy consumed = [GDP per capita] / [energy consumed
per capita] = [GDP / population] / [energy consumed / population].
I noticed a semi-intuitive, although not really robust correlation between the
two component coefficients. GDP per capita tends to be higher in countries with
better developed institutions, which, in turn, tend to be better developed in
the presence of a relatively high consumption of energy per capita. Mind you,
it is quite visible cross-sectionally, when comparing countries, whilst not
happening that obviously over time. If people in country A consume twice as
much energy per capita as people in country B, those in A are very likely to
have better developed institutions than folks in B. Still, if in any of the two
places the consumption of energy per capita grows or falls by 10%, it does not
automatically mean a corresponding increase or decrease in institutional
development.
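That cross-sectional association can be eyeballed with a plain Pearson correlation. The mini-sample of five ‘countries’ below is invented for illustration, not real data:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented cross-section of five countries:
energy_pc = [800, 1500, 2500, 4000, 6500]     # kgoe per capita
gdp_pc = [3000, 8000, 15000, 28000, 45000]    # USD per capita

r = pearson(energy_pc, gdp_pc)   # strongly positive across countries
```

On a cross-section built like this, r comes out close to 1; the point of the paragraph above is that the same computation run on year-over-year changes within one country would not be anywhere near as tidy.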
Partially
wrapping up the above, I can see at least one main assumption in my method:
energy efficiency, measured as GDP per kg of oil equivalent of energy
consumed, is, in itself, a pretty foggy metric, arguably devoid of intrinsic
meaning; it is meaningful only as an equilibrium of two component coefficients,
namely GDP per capita, for one, and energy consumption per capita, for two. Therefore,
the very name ‘energy efficiency’ is problematic. If the vector [GDP; energy
consumption] is really a local equilibrium, as I intuitively see it, then we
need to keep in mind an old assumption of economic sciences: all equilibria
are efficient; this is basically why they are equilibria. Further down this
avenue of thinking, the coefficient of GDP per kg of oil equivalent shouldn’t
even be called ‘energy efficiency’, or, just in order not to fall into
pointless semantic bickering, we should take the ‘efficiency’ part into some
sort of intellectual parentheses.
Now,
I move to my analytical method. I accept as pretty obvious the fact that, at a
given moment in time, different national economies display different
coefficients of GDP per kg of oil equivalent consumed. This is coherent with
the above-phrased claim that energy efficiency is a local equilibrium rather
than a measure of efficiency strictly speaking. What gains in importance, with
that intellectual stance, is the study of change over time. In the manuscript
paper, I tested a very intuitive analytical method, based on a classical move,
namely on using natural logarithms of empirical values rather than empirical
values themselves. Natural logarithms eliminate a lot of non-stationarity and
noise in empirical data. A short reminder of what natural logarithms are is due
at this point. Any number can be represented as a power of another number, like
y = x^z, where ‘x’ is called the root
of ‘y’, ‘z’ is the exponent of the root, and ‘x’ is
also the base of ‘z’.
Some
roots are special. One of them is the so-called Euler’s number, or e =
2,718281828459…, the base of the natural logarithm. When we treat e
≈ 2,72 as the root of another number, the corresponding exponent z
in y = e^z has interesting properties: it can be
further decomposed as z = t*a, where t is the ordinal number of a
moment in time, and a is basically a parameter. In a moment, I
will explain why I said ‘basically’. The function y = e^(t*a) is
called the exponential function and proves useful in studying processes marked by
important hysteresis, i.e. when each consecutive step in the process depends
very strongly on the cumulative outcome of previous steps, like y(t)
depends on y(t – k). Compound interest is a classic example: when
you save money for years, with annual compounding of interest, each consecutive
year builds upon the interest accumulated in preceding years. If we represent
the interest rate, classically, as ‘r’, the function y = e^(t*r) gives a
good approximation of how much you can save, with annually compounded ‘r’,
over ‘t’ years.
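That approximation can be checked numerically in a couple of lines. Continuous compounding via e^(t*r) always lands a bit above the annually compounded amount (1 + r)^t, but stays in the same ballpark; the principal, rate and horizon below are arbitrary illustration values:

```python
import math

principal, r, t = 1000.0, 0.05, 20   # 5% per year, over 20 years

discrete = principal * (1 + r) ** t       # annual compounding
continuous = principal * math.exp(t * r)  # the e^(t*r) shortcut

# e^(t*r) compounds 'continuously', so it overshoots the annually
# compounded amount slightly, here by roughly 2.4%.
assert continuous > discrete
```

With these numbers, the discrete path gives about 2653 and the continuous one about 2718; close enough for e^(t*r) to serve as a workable stand-in for the compounding process.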
A
slightly different approach to the exponential function can be formulated, and this
is what I did in the manuscript paper I am revising
now, in front of your very eyes. The natural logarithm of
energy efficiency, measured as GDP per kg of oil equivalent, can be considered a
local occurrence of change with a strong component of hysteresis. The
equilibrium of today depends on the cumulative outcomes of past equilibria.
In a classic exponential function, I would approach that hysteresis as y(t) = e^(t*a), with a
being a constant parameter of the function. Yet I can assume that ‘a’ is local
instead of being general. In other words, what I did was y(t) = e^(t*a(t)),
with a(t) being obviously t-specific, i.e. local. I assume that
the process of change in energy efficiency is characterized by local magnitudes
of change, the a(t)’s. That a(t), in y(t) = e^(t*a(t)), is
slightly akin to the local first derivative, i.e. y’(t). The
difference between the local a(t) and y’(t) is that
the former is supposed to capture somewhat more accurately the hysteretic side
of the process under scrutiny.
In
typical econometric tests, the usual strategy is to start with the empirical
values of the variables, transform them into their natural logarithms or some
sort of standardized values (e.g. standardized over their respective means, or
their standard deviations), and then run linear regression on those transformed
values. Another path of analysis consists in exponential regression, only there
is a problem with this one: it is hard to establish a reliable method of
transforming the empirical data. Running exponential regression on natural
logarithms looks stupid, as natural logarithms are precisely the exponents of
the exponential function, whence my intuitive willingness to invent a method
sort of in between linear regression and the exponential one.
Once
I assume that the local exponential coefficients a(t) in the
exponential progression y(t) = e^(t*a(t))
have an intrinsic meaning of their own, as local magnitudes of exponential change,
an interesting analytical avenue opens up. For each set of empirical values y(t),
I can construct a set of transformed values a(t) = ln[y(t)]/t.
Now, when you think about it, the actual a(t) depends on how you
count ‘t’, or, in other words, on what calendar you apply. When I start
counting time 100 years before the starting year of my empirical data, my a(t)
will go like: a(t1) = ln[y(t1)]/101, a(t2)
= ln[y(t2)]/102 etc. The denominator ‘t’ will
change incrementally slowly. On the other hand, if I assume that the first year
of whatever is happening is one year before my empirical time series starts, it
is a different ball game. My a(t1) = ln[y(t1)]/1, and
my a(t2) = ln[y(t2)]/2 etc.; the incremental
change in the denominator is much greater in this case. When I set my t0
at 100 years earlier than the first year of my actual data, thus t0 = t1 –
100, the resulting set of a(t) values transformed from
the initial y(t) data simulates a secular, slow trend of change. On
the other hand, setting t0 = t1 – 1 makes the resulting set
of a(t) values reflect quick change, and the t0 = t1 – 1
moment is like a hypothetical shock, occurring just before the actual empirical
data starts to tell its story.
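A quick numerical sketch of that calendar effect, on an invented five-year series of energy efficiency (the values are made up, only the transformation is the one described above):

```python
import math

# Invented energy-efficiency series (USD per kgoe), five consecutive years:
y = [5.2, 5.3, 5.5, 5.6, 5.9]

def a_of_t(series, years_before):
    """a(t) = ln[y(t)] / t, with time counted from a chosen origin t0."""
    return [math.log(v) / (years_before + i + 1)
            for i, v in enumerate(series)]

slow = a_of_t(y, 100)  # t0 = t1 - 100: denominators 101, 102, ... near-flat trend
fast = a_of_t(y, 0)    # t0 = t1 - 1: denominators 1, 2, ... steep decay, as after a shock
```

With t0 pushed a century back, the a(t)’s barely move (a secular, slow trend); with t0 set one year before the data, the first a(t) equals the full natural logarithm of y(t1) and then falls quickly, exactly the ‘hypothetical shock’ reading.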
Provisionally
wrapping it up, my assumptions, and thus my method, consist in studying changes
in energy efficiency as a sequence of equilibria between relative wealth (GDP
per capita), on the one hand, and consumption of energy per capita, on the other. The passage
between equilibria is a complex phenomenon, combining long-term trends with
short-term ones.
I
am introducing a novel angle of approach to the otherwise classic concept of economics,
namely that of economic equilibrium. I claim that equilibria are
manifestations of collective intelligence in their host societies. In order to
form an economic equilibrium, be it more local and Marshallian, or more general
and Walrasian, a society needs institutions that assure collective learning
through experimentation. It needs some kind of financial market, enforceable
contracts, and institutions of collective bargaining. Small changes in energy
efficiency come out of consistent, collective learning through those
institutions. Big leaps in energy efficiency appear when the institutions of
collective learning undergo substantial structural changes.
I
am thinking about enriching the empirical part of my paper by introducing an
additional demonstration of collective intelligence: a neural network, working with
the same empirical data, with or without the so-called fitness function. I
have that intuitive thought – although I don’t know yet how to get it across
coherently – that neural networks endowed with a fitness function are good at representing
collective intelligence in structured societies with relatively well-developed institutions.
I
go towards my syllabuses for the coming academic year. Incidentally, at least
one of the curriculums I am going to teach this fall fits nicely into the line
of research I am pursuing now: collective intelligence and the use of
artificial intelligence. I am developing the thing as an update on my blog, and
I write it directly in English. The course is labelled “Behavioural
Modelling and Content Marketing”. My principal goal is to teach students the
mechanics of behavioural interaction between human beings and digital technologies,
especially in social media, online marketing and content streaming. At my
university, i.e. the Andrzej Frycz-Modrzewski Krakow University (Krakow, Poland),
we have a general drill of splitting the general goal of each course into three
layers of expected didactic outcomes: knowledge, course-specific skills, and
general social skills. The longer I do science and the longer I teach, the less
I believe in the point of distinguishing knowledge from skills. Knowledge
devoid of any skills attached to it is virtually impossible to check, and
virtually useless.
As
I think about it, I imagine many different teachers and many students. Each
teacher follows some didactic goals. How do they match each other? They are
bound to. I mean, the community of teachers, in a university, is a local social
structure. We, teachers, we have different angles of approach to teaching, and,
of course, we teach different subjects. Yet, we all come from more or less the
same cultural background. Here comes a quick glimpse of literature I will be
referring to when lecturing ‘Behavioural Modelling and Content Marketing’:
the article by Molleman and Gächter (2018[1]), entitled
‘Societal background influences social learning in cooperative decision
making’, and another one, by Smaldino (2019[2]), under
the title ‘Social identity and cooperation in cultural evolution’. Molleman and
Gächter start from the well-known assumption that we, humans, largely owe our
evolutionary success to our capacity for social learning and cooperation. They
give the account of an experiment, where Chinese people, assumed to be
collectivist in their ways, are being compared to British people, allegedly
individualist as hell, in a social game based on dilemma and cooperation. Turns
out the cultural background matters: success-based learning is associated with
selfish behaviour and majority-based learning can help foster cooperation.
Smaldino goes down a more theoretical path, arguing that the structure of society
shapes the repertoire of social identities available to Homo sapiens in a given
place at a given moment, whence the puzzle of emergent, ephemeral groups as a
major factor in human cultural evolution. When I decide to form, on Facebook, a
group of people Not-Yet-Abducted-By-Aliens, is it a factor of cultural change,
or rather an outcome thereof?
When
I teach anything, what do I really want to achieve, and what does the conscious
formulation of those goals have in common with the real outcomes I reach? When
I use a scientific repository, like ScienceDirect,
that thing learns from me. When I download a bunch of articles on energy, it
suggests further readings along the same lines. It learns from the keywords I
use in my searches, and from the journals I browse. You can even have a look at
my recent history of downloads from ScienceDirect and form your own opinion about
what I am interested in. Just CLICK HERE,
it opens an Excel spreadsheet.
How
can I know I taught anybody anything useful? If a student asks me: ‘Pardon
me, sir, but why the hell should I learn all that stuff you teach? What’s the
point? Why should I bother?’. Right you are, sir or miss, whatever gender
you think you are. The point of learning that stuff… You can think of some
impressive human creation, like the Notre Dame cathedral, the Eiffel Tower, or
that Da Vinci’s painting, Lady with an Ermine. Have you ever wondered how much
work had been put in those things? However big and impressive a cathedral is,
it had been built brick by f***ing brick. Whatever depth of colour we can see
in a painting, it came out of dozens of hours spent on sketching, mixing
paints, trying, cursing, and tearing down the canvas. This course and its
contents are a small brick in the edifice of your existence. One more small
story that makes your individual depth as a person.
There
is that thing, at the very heart of behavioural modelling, and social sciences
in general. For lack of a better expression, I call it the Bignetti model. See,
for example, Bignetti 2014[3], Bignetti et al. 2017[4], or Bignetti 2018[5] for more
reading. Long story short, what professor Bignetti claims is that whatever
happens in observable human behaviour, individual or collective, has
already happened neurologically beforehand. Whatever we tweet or
whatever we read, it is rooted in that wiring we have between the ears. The
thing is that actually observing how that wiring works is still a bit
burdensome. You need a lot of technology, and a controlled environment.
Strangely enough, opening one’s skull and trying to observe the contents at
work doesn’t really work. Reverse-engineered, the Bignetti model suggests
behavioural observation, and behavioural modelling, could be a good method to guess
how our individual brains work together, i.e. how we are intelligent
collectively.
I
go back to the formal structure of the course, more specifically to goals and
expected outcomes. I split: knowledge, skills, social competences. The knowledge,
for one. I expect the students to develop the understanding of the
following concepts: a) behavioural pattern b) social life as a collection of
behavioural patterns observable in human beings c) behavioural patterns
occurring as interactions of humans with digital technologies, especially with
online content and online marketing d) modification of human behaviour as a
response to online content e) the basics of artificial intelligence, like the weak
law of large numbers or the logical structure of a neural network. As for the course-specific skills, I expect my students to sharpen their edge in observing
behavioural patterns, and changes thereof in connection with online content.
When it comes to general social competences, I would like my students to
make a few steps forward on two paths: a) handling projects and b) doing
research. It logically implies that assessment in this course should and will
be project-based. Students will be graded on the grounds of complex projects,
covering the definition, observation, and modification of their own behavioural
patterns occurring as interaction with online content.
The
structure of an individual project will cover three main parts:
a) description of the behavioural sequence in question b) description of online
content that allegedly impacts that sequence, and c) the study of behavioural
changes occurring under the influence of online content. The scale of students’
grades is based on two component marks: the completeness of a student’s work,
regarding (a) – (c), and the depth of research the given student has brought in
to support their observations and claims. In Poland, in the academia, we
typically use a grading scale from 2 (fail) all the way up to 5 (very good),
passing through 3, 3+, 4, and 4+. As I see it, each student – or each team of
students, as there will be a possibility to prepare the thing in a team of up
to 5 people – will receive two component grades, like e.g. 3+ for completeness
and 4 for depth of research, and that will give (3,5 + 4)/2 = 3,75 ≈ 4,0.
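That two-component grading can be written down as a tiny helper. The rounding rule (half-up to the nearest half-grade) is my assumption about how the ≈ in ‘3,75 ≈ 4,0’ works:

```python
import math

def final_grade(completeness, depth):
    """Average two component marks and round half-up to the Polish
    half-grade scale: 2, 3, 3.5 (3+), 4, 4.5 (4+), 5."""
    avg = (completeness + depth) / 2
    return math.floor(avg * 2 + 0.5) / 2   # round half up to nearest 0.5

final_grade(3.5, 4.0)  # (3,5 + 4)/2 = 3,75, rounded up to 4.0
```

The `math.floor(avg * 2 + 0.5) / 2` trick avoids Python’s banker’s rounding, which would otherwise send some .25 averages down instead of up.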
Such
a project is typical research, whence the necessity to introduce students to
the basic techniques of science. That comes as a bit of a paradox, as those
students’ major is Film and Television Production, thus a thoroughly practical
one. Still, science serves practical purposes: this is something I deeply
believe, and which I would like to teach my students. As I look upon those
goals, and the method of assessment, a structure emerges as regards the plan of
in-class teaching. At my university, the bulk of in-class interaction with
students is normally spread over 15 lectures of 1,5 clock hour each, thus 30
hours in total. In some curriculums it is accompanied by the so-called
‘workshops’ in smaller groups, with each such smaller group attending 7 – 8
sessions of 1,5 hour each. In this case, i.e. in the course of ‘Behavioural
Modelling and Content Marketing’, I have just lectures in my schedule.
Still, as I see it, I will need to do practical stuff with my youngsters. This
is a good moment to demonstrate a managerial technique I teach in other
classes, called ‘regressive planning’, which consists in taking the
final goal I want to achieve, assuming it is the outcome of a
sequence of actions, and then reverse-engineering that sequence. Sort of ‘what do
I need to do if I want to achieve X at the end of the day?’.
If
I want to have my students hand me good quality projects by the end of the
semester, the last few classes out of the standard 15 should be devoted to
discussing collectively the draft projects. Those drafts should be based on
prior teaching of basic skills and knowledge, whence the necessity to give
those students a toolbox, and provoke in them curiosity to rummage inside. All
in all, it gives me the following, provisional structure of lecturing:
{input = 15 classes} => {output = good quality projects by my students}

{input = 15 classes} ⇔ {input = [10 classes of preparation >> 5 classes of draft presentations and discussion thereof]}

{input = 15 classes} ⇔ {input = [5*(1 class of mindfuck to provoke curiosity + 1 class of systematic presentation) + 5*(presentation + questioning and discussion)]}
As
I see from what I have just written, I need to divide the theory accompanying
this curriculum into 5 big chunks. The first of those 5 blocks needs to
address the general frame of the course, i.e. the phenomenon of recurrent interaction
between humans and online content. I think the most important fact to highlight
is that algorithms of online marketing behave like sales people crossed with
very attentive servants, who try to guess one’s whims and wants. It is a huge
social change: it is, I think, the first time in human history when virtually
every human with access to the Internet interacts with a form of intelligence that
behaves like a butler, guessing the user’s preferences. It is transformational
for human behaviour, and in that first block I want to show my students how that
transformation can work. The opening, mindfucking class will consist in a behavioural
experiment along the lines of good, old role playing in psychology. I will
demonstrate to my students how a human would behave if they wanted to emulate
the behaviour of neural networks in online marketing. I will ask them questions
about what they usually do, and about what they liked during the last few days,
and I will guess their preferences on the grounds of their described behaviour.
I will tell my students to observe that butler-like behaviour of mine and to pattern
me. In the next step, I will ask students to play the same role, just for them to
get the hang of how a piece of AI works in online marketing. The point of this
first class is to define an expected outcome, like a variable, which neural
networks attempt to achieve, in terms of human behaviour observable through clicking.
The second, theoretical class of that first block will, logically, consist in
explaining the fundamentals of how neural networks work, especially in online
interactions with human users of online content.
I
think in the second two-class block I will address the issue of behavioural
patterns as such, i.e. what they are, and how we can observe them. I want the mindfuck
class in this block to be provocative intellectually, and I think I will use
role playing once again. I will ask my students to play roles of their choice,
and I will discuss their performance under a specific angle: how do you know
that your play is representative for this type of behaviour or person? What
specific pieces of behaviour are, in your opinion, informative about the social
identity of that role? Do other students agree that the type of behaviour played
is representative for this specific type of person? The theoretical class in
this block will be devoted to systematic lecture on the basics of behaviourism.
I guess I will serve my students some Skinner, and some Timberlake, namely Skinner’s
‘Selection by Consequences’ (1981[6]), and Timberlake’s ‘Behaviour
Systems and Reinforcement’ (1993[7]).
In
the third two-class block I will return to interactions with online
content. In the mindfuck class, I will make my students meddle with YouTube,
and see how the list of suggested videos changes after we search for or click
on specific content, e.g. how it will change after clicking on 5 videos of
documentaries about wildlife, or after searching for videos on race cars. In
this class, I want my students to pattern the behaviour of YouTube. The theoretical
class of this block will be devoted to the way those algorithms work. I think
I will focus on a hardcore concept of AI, namely the Gaussian mixture. I will
explain how crude observations of our clicking and viewing allow an algorithm
to categorize us.
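To make that concrete: a Gaussian mixture assumes the observed behaviour comes from a few overlapping bell curves, and the algorithm works out which curve each user most plausibly belongs to, with no labels given. Below is a minimal, from-scratch EM fit of a two-component 1-D mixture; the ‘share of wildlife clips in recent clicks’ data is entirely made up for illustration:

```python
import math

def gmm_em_1d(xs, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture with plain EM."""
    lo, hi = min(xs), max(xs)
    # Crude initialisation: spread the means over the data range.
    mu = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    var = [((hi - lo) / k) ** 2 + 1e-6] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: how responsible is each component for each observation?
        resp = []
        for x in xs:
            dens = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2
                         for r, x in zip(resp, xs)) / nj + 1e-6
    return w, mu, var

# Made-up viewing profiles: share of wildlife clips in each user's recent clicks.
clicks = [0.05, 0.08, 0.10, 0.12, 0.15, 0.82, 0.88, 0.90, 0.93, 0.97]
w, mu, var = gmm_em_1d(clicks)
# The two fitted means should settle near 0.1 and 0.9: two user segments,
# discovered from raw clicking shares alone.
```

A production recommender would of course use many behavioural dimensions and an off-the-shelf library rather than this toy loop, but the categorizing logic is the same: crude clicking shares in, soft segment membership out.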
As
we pass to the fourth two-class block, I will switch to the concept
of collective intelligence, i.e. to how whole societies interact with various
forms of online, interactive neural networks. The class devoted to intellectual
provocation will be discursive. I will make students debate on the following claim:
‘Internet and online content allow our society to learn faster and more
efficiently’. There is, of course, a catch, and it is the definition of
learning fast and efficiently. How do we know we are quick and efficient in our
collective learning? What would slow and inefficient learning look like? How
can we check the role of Internet and online content in our collective
learning? Can we apply the John Stuart Mill’s logical canon to that situation? The
theoretical class in this block will be devoted to the phenomenon of collective
intelligence in itself. I would like to work through like two research papers devoted
to online marketing, e.g. Fink
et al. (2018[8])
and Takeuchi
et al. (2018[9]),
in order to show how online marketing unfolds into phenomena of collective
intelligence and collective learning.
Good,
so I come to the fifth two-class block, the last one before the
scheduled draft presentations by my students. It is the last teaching block
before they present their projects, and I think it should bring them back to
the root idea of these, i.e. to the idea of observing one’s own behaviour when
interacting with online content. The first class of the block, the one supposed
to stir curiosity, could consist in two steps of brain storming and discussion.
Students take on the role of online marketers. In the first step, they define
one or two typical interactions between human behaviour, and the online content
they communicate. We use the previously learnt theory to make both the description
of behavioural patterns, and that of online marketing coherent and state-of-the-art.
In the next step, students discuss under what conditions they would behave according
to those pre-defined patterns, and what conditions would make them diverge and
follow different patterns. In the theoretical class of this block, I would
like to discuss two articles, which incite my own curiosity: ‘A place
for emotions in behavior systems research’ by Gordon M. Burghardt (2019[10]), and ‘Disequilibrium
in behavior analysis: A disequilibrium theory redux’ by Jacobs et al. (2019[11]).
I
am consistently delivering good, almost new science to my readers, and love
doing it, and I am working on crowdfunding this activity of mine. You can
communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com.
As we talk business plans, I remind you that you can download, from the library
of my blog, the business plan I prepared for my
semi-scientific project Befund (and you can access the French version
as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’.
You can support my research by donating directly, any amount you consider
appropriate, to my
PayPal account. You can also consider going to my Patreon page and become
my patron. If you decide so, I will be grateful if you could suggest two things
that Patreon prompts me to ask you about. Firstly, what kind of reward would you
expect in exchange for supporting me? Secondly, what kind of phases would you
like to see in the development of my research, and of the corresponding
educational tools?
[1] Molleman, L., & Gächter, S.
(2018). Societal background influences social learning in cooperative decision
making. Evolution and Human Behavior, 39(5), 547-555.
[2] Smaldino, P. E. (2019). Social
identity and cooperation in cultural evolution. Behavioural Processes, 161, 108-116.
[3] Bignetti, E. (2014). The functional
role of free-will illusion in cognition:“The Bignetti Model”. Cognitive Systems
Research, 31, 45-60.
[4] Bignetti, E., Martuzzi, F., &
Tartabini, A. (2017). A Psychophysical Approach to Test:“The Bignetti Model”. Psychol
Cogn Sci Open J, 3(1), 24-35.
[5] Bignetti, E. (2018). New Insights
into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective,
4(1), 24.
[6] Skinner, B. F. (1981).
Selection by consequences. Science, 213(4507), 501-504.
[7] Timberlake, W. (1993).
Behavior systems and reinforcement: An integrative approach. Journal of the
Experimental Analysis of Behavior, 60(1), 105-128.
[8] Fink, M., Koller, M.,
Gartner, J., Floh, A., & Harms, R. (2018). Effective entrepreneurial
marketing on Facebook – A longitudinal study. Journal of Business Research.
[9] Takeuchi,
H., Masuda, S., Miyamoto, K., & Akihara, S. (2018). Obtaining Exhaustive Answer Set for
Q&A-based Inquiry System using Customer Behavior and Service Function
Modeling. Procedia Computer Science, 126, 986-995.
[10] Burghardt, G. M. (2019). A
place for emotions in behavior systems research. Behavioural processes.
[11] Jacobs, K. W., Morford, Z.
H., & King, J. E. (2019). Disequilibrium in behavior analysis: A
disequilibrium theory redux. Behavioural processes.
I
am recapitulating once again. Two things are going on in my mind: science
strictly speaking, and a technological project. As for science, I am digging
around the hypothesis that we, humans, purposefully create institutions for
experimenting with new technologies, and that the essential purpose of those institutions
is to maximize the absorption of energy from the environment. I am obstinately
turning around the possible use of artificial intelligence as a tool for
simulating collective intelligence in human societies. As for technology, I am
working on my concept of « Energy Ponds ». See my update entitled « The
mind-blowing hydro » for the freshest developments on that
point. So far, I came to the conclusion that figuring out a viable financial
scheme, which would allow local communities to own local projects and adapt
them flexibly to local conditions is just as important as working out the
technological side. Oh, yes, and there is teaching, the third thing to occupy
my mind. The new academic year starts on October 1st and I am
already thinking about the stuff I will be teaching.
I think it is good to be honest about myself, and so I am trying to be: I have a limited capacity for multi-tasking. Even if I do a few different things at the same time, I need those things to be kind of convergent and similar. This is one of those moments when a written recapitulation of what I do serves me to put some order in what I intend to do. Actually, why not use one of the methods I teach my students in management classes? I mean, why not use some scholarly techniques of planning and goal setting?
Good, so I start. What do I want? I want a monograph on the application of artificial intelligence to study collective intelligence, with an edge towards practical use in management. I call it ‘Monograph AI in CI – Management’. I want the manuscript to be ready by the end of October 2019. I want a monograph on the broader topic of technological change being part of human evolution, with the hypothesis mentioned in the preceding paragraph. This monograph, I give it a working title: ‘Monograph Technological Change and Human Evolution’. I have no clear deadline for the manuscript. I want 2 – 3 articles on renewable energies and their applications, with the same deadline as the first monograph: end of October 2019. I want to promote and develop my idea of “Energy Ponds”, and that of local financial schemes for this type of project. I want to present this idea in at least one article, and in at least one public speech. I want to prepare syllabuses for teaching, centred precisely on the concept of collective intelligence, i.e. of social structures and institutions made for experimentation and learning. In practically each of the curriculums I teach, I want to go into the topic of collective learning.
How will I know I have what I want? This is a control question, forcing me to give precise form to my goals. As for monographs and articles, it is all about preparing manuscripts on time. Each monograph should be at least 400 pages, whilst articles should be some 30 pages long each, in manuscript form. That makes 460 – 490 pages to write (meaningfully, of course!) by the end of October, and at least 400 more pages to write subsequently. Of course, it is not just about hatching manuscripts: I need to have a publisher. As for teaching, I can assume that I am somehow prepared to deliver a given line of logic when I have a syllabus nailed down nicely. Thus, I need to rewrite my syllabuses no later than September 25th. I can evaluate progress in the promotion of my “Energy Ponds” concept by the feedback I get from the people I have informed, or will have informed, about it.
Right, the above is what I want, technically and precisely, like in a nice schedule of work. Now, what do I really want? I am 51; with good health and common sense, I have some 24 – 25 productive years ahead. This is roughly the time that has passed since my son’s birth. The boy is not a boy anymore, he is walking his own path, and what looms ahead of me is like my last big journey in life. What do I want to do with those years? I want to feel useful, very certainly. Yes, I think this is one clear thing about what I want: I want to feel useful. How will I know I am useful? Weeell, that’s harder to tell. As I patiently follow the train of my thoughts, I think that I feel useful today, when I can see that people around need me. On top of that, I want to be financially important and independent. Wealthy? Yes, but not for comfort as such. Right now, I am employed, and my salary is my main source of income. I perceive myself as dependent on my employer. I want to change that, so as to have substantial income (i.e. income greater than my current spending, and thus allowing accumulation) from sources other than a salary. Logically, I need capital to generate that stream of non-wage income. I have some – an apartment for rent – but as I look at it critically, I would need at least 7 times more in order to have the rent-based income I want.
Looks like my initial, spontaneous thought of being useful means, after having scratched the surface, being sufficiently high in the social hierarchy to be financially independent, and able to influence other people. Anyway, as I have a look at my short-term goals, I ask myself how they bridge into my long-term goals. The answer is: they don’t really connect, my short-term goals and the long-term ones. There are a lot of missing pieces. I mean, how does the fact of writing a scientific monograph translate into multiplying by seven my current equity invested in income-generating assets?
Now, I want to think a bit deeper about what I do now, and I want to discover two types of behavioural patterns. Firstly, there is probably something in what I do which manifests some kind of underlying, long-term ambitions or cravings in my personality. Exploring what I do might be informative as to what I want to achieve in that last big lap of my life. Secondly, in my current activities, I probably have some behavioural patterns which, when exploited properly, can help me in achieving my long-term goals.
What do I like doing? I like writing and reading about science. I like speaking in public, whether it is a classroom or a conference. Yes, it might be a sign of narcissism, still it can be put to good purpose. I like travelling, in moderate doses. Looks like I am made for being a science writer and a science speaker. That looks like some sort of intermediate goal, bridging from my short-term, scheduled achievements into the long-term, unscheduled ones. I do write regularly, especially on my blog. I speak regularly in classrooms, as my basic job is that of an academic teacher. What I do haphazardly, and what could bring me closer to achieving my long-term goals, would be to speak in other public contexts more frequently and sort of regularly, and, of course, make money on it. By the way, as far as science writing and science speaking are concerned, I have a crazy idea: scientific stand-up. I am deeply fascinated with the art of some stand-up comedians: Bill Burr, Gabriel Iglesias, Joe Rogan, Kevin Hart or Dave Chappelle. Getting across deep, philosophical content about the human condition in the form of jokes, and making people laugh when thinking about those things, is an art I admire, and I would like to translate it somehow into the world of science. The problem is that I don’t know how. I have never done any acting in my life, and have never written nor participated in writing any jokes for stand-up comedy. As skillsets come, this is complete terra incognita to me.
Now, I jump to the timeline. I assume having those 24 years or so ahead of me. What then, I mean when I hopefully reach 75 years of age? Now, I may shock some of my readers, but provisionally I label that moment, 24 years from now, as “the decision whether I should die”. These last years, I have been asking myself how I would like to die. The question might seem stupid: nobody likes dying. Still, I have been asking myself this question. I am going into deep existential ranting, but I think what I think: when I compare my life with some accounts in historical books, there is one striking difference. When I read letters and memoirs of people from the 17th or 18th century, even from the beginnings of the 20th century, those ancestors of ours tended to ask themselves how worthy their life should be and how worthy their death should come. We tend to ask, most of all, how long we will live. When I think about it, that old attitude makes more sense. In the perspective of decades, planning for maxing out on existential value is much more rational than trying to max out on life expectancy as such. I guess we have much more control over the values we pursue than over the duration of our life. I know that what I am going to say might sound horribly pretentious, but I think I would like to die like a Viking. I mean, not necessarily trying to kill somebody, just dying by choice, whilst still having the strength to do something important, and doing those important things. What I am really afraid of is slow death by instalments, when my flame dies out progressively, leaving me weaker and weaker every month, whilst burdening other people with taking care of me.
I fix that provisional checkpoint at the age of 75, 24 years from now. An important note before I go further: I have not decided I will die at the age of 75. I suppose that would be as presumptuous as assuming to live forever. I just give myself a rationally grounded span of 24 years to live with enough energy to achieve something worthy. If I have more, I will just have more. Anyway, how much can I do in 24 years? In order to plan for that, I need to recapitulate how much I have been able to do so far, during an average year. A nicely productive year means 2 – 3 acceptable articles, accompanied by 2 – 3 equally acceptable conference presentations. On top of that, a monograph is conceivable in one year. As for teaching, I can realistically do 600 – 700 hours of public speech in one year. With that, I think I can nail down some 20 valuable meetings in business and science. In 24 years, I can write 24*550 = 13 200 pages, I can deliver 15 600 hours of public speech, and I can negotiate something in 480 meetings or so.
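That back-of-envelope arithmetic can be sketched in a few lines of Python. The per-year figures below are my own rough estimates, as stated above, not measured data:

```python
# Yearly output, extrapolated over a 24-year horizon.
years = 24
pages_per_year = 550          # a monograph plus articles, in manuscript pages
speech_hours_per_year = 650   # teaching and other public speaking
meetings_per_year = 20        # valuable meetings in business and science

print(years * pages_per_year)         # 13 200 pages
print(years * speech_hours_per_year)  # 15 600 hours
print(years * meetings_per_year)      # 480 meetings
```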
Now, as I talk about value, I can see there is something more far-reaching than what I have just named as my long-term goals. There are values which I want to pursue. I mean, saying that I want to die like a Viking and, at the same time, stating my long-term goals in life in terms of income and capital base: that sounds ridiculous. I know, I know, dying like a Viking, in the times of Vikings, meant very largely to pillage until the last breath. Still, I need values. I think the shortcut to my values is via my dreams. What are they, my dreams? Now, I make a sharp difference between dreams and fantasies. A fantasy is: a) essentially unrealistic, such as riding a flying unicorn, and b) involving just a small, relatively childish part of my personality. On the other hand, a dream – such as contributing to making my home country, Poland, go 100% off fossil fuels – is something that might look impossible to achieve, yet its achievement is a logical extension of my present existence.
What are they, my dreams? Well, I have just named one, i.e. playing a role in changing the energy base of my country. What else do I value? Family, certainly. I want my son to have a good life. I want to feel useful to other people (that was already in my long-term goals, and so I am moving it to the category of dreams and values). Another thing comes to my mind: I want to tell the story of my parents. Apparently banal – lots of people do it, or at least attempt to – and yet nagging as hell. My father died in February, and around the time of the funeral, as I was talking to family and friends, I discovered things about my dad which I had not the faintest idea of. I started going through old photographs and old letters in a personal album I didn’t even know he still had. Me and my father, we were not very close. There was a lot of bad blood between us. Still, it was my call to take care of him during the last 17 years of his life, and it was my call to care for what we call in Poland ‘his last walk’, namely that from the funeral chapel to the tomb properly spoken. I suddenly had a flash glimpse of the personal history, the rich, textured biography I had in front of my eyes, visible through old images and old words, all that against the background of the vanishing spark of life I could see in my father’s eyes during his last days.
How will I know those dreams and values are fulfilled in my life? I can measure progress in my work on and around projects connected to new sources of energy. I can measure it by observing the outcomes. When things I work on get done, this is sort of tangible. As for being useful to other people, I go once again down the same avenue: to me, being useful means having an unequivocally positive impact on other people. Impact is important, and thus, in order to have that impact, I need some kind of leadership position. Looking at my personal life and at my dream to see my son having a good life, this comes as the hardest thing to gauge. It seems to be the (apparently) irreducible uncertainty in my perfect plan. Telling my parents’ story: how will I prove to myself that I will have told it? A published book? Maybe…
I sum it up, at least partially. I can reasonably expect to deliver a certain amount of work over the 24 years to come: approximately 13 200 pages of written content, 15 600 hours of public speech, and 450 – 500 meetings, until my next big checkpoint in life, at the age of 75. I would like to focus that work on building a position of leadership, with a view to bringing some change to my own country, Poland, mostly in the field of energy. As the first stage is to build a good reputation as a science communicator, the leadership in question is likely to be a rather soft one. In that plan, two things remain highly uncertain. Firstly, how should I behave in order to be as good a human being as I possibly can? Secondly, what is the real importance of that telling-my-parents’-story thing in the whole plan? How important is it for my understanding of how to live well those 24 years to come? What fraction of those 13 200 written pages (or so) should refer to that story?
Now, I move towards collective intelligence, and towards possible applications of artificial intelligence to study the collective one. Yes, I am a scientist, and yes, I can use myself as an experimental unit. I can extrapolate my personal experience as the incidence of something in a larger population. The exact path of that incidence can shape the future actions and structures of that population. Good, so now, there is someone – anyone, in fact – who comes and tells to my face: ‘Look, man, you’re bullshitting yourself and the people around you! Your plans look stupid, and if attitudes like yours spread, our civilisation will fall to pieces!’. Fair enough, that could be a valid point. Let’s check. According to data published by the Central Statistical Office of the Republic of Poland, in 2019 there are n = 453 390 people in Poland aged 51, like me, 230 370 of them being men, and 232 020 women. I assume that attitudes such as my own, expressed in the preceding paragraphs, are one type among many occurring in that population of 51-year-old Polish people. People have different views on life and other things, so to say.
Now, I hypothesise in two opposite directions. In Hypothesis A, I state that just some among those different attitudes make any sense, and there is a hypothetical distribution of those attitudes in the general population which yields the best social outcomes whilst eliminating early all nonsense attitudes from the social landscape. In other words, some worldviews are so dysfunctional that they’d better disappear quickly and be supplanted by the more sensible ones. Going even deeper, it means that quantitative distributions of attitudes in the general population fall into two classes: on the one hand, the completely haphazard, existential accidents without much grounds for staying in existence, and, on the other hand, the sensible and functional ones, which can be sustained to everyone’s benefit. In Hypothesis ~A, i.e. the opposite of A, I speculate that the observed diversity of attitudes is a phenomenon in itself and does not really reduce to any hypothetically better one. It is the old argument in favour of diversity. Old as it is, it has old mathematical foundations, and, interestingly, it is one of the cornerstones of what we call today Artificial Intelligence.
In Vapnik & Chervonenkis 1971[1], a paper reputed to be kind of seminal for today’s AI, I found a reference to the classical Bernoulli theorem, known also as the weak law of large numbers: the relative frequency of an event A in a sequence of independent trials converges (in probability) to the probability of that event. Please note that roughly the same can be found in the so-called Borel law of large numbers, named after Émile Borel. It is deep maths: each phenomenon bears a given probability of happening, and this probability is sort of sewn into the fabric of reality. The empirically observable frequency of occurrence is always an approximation of this quasi-metaphysical probability. That goes a bit against the way probability is taught at school: it is usually about that coin – or dice – being tossed many times, etc., which implies that probability exists at all only as long as there are things actually happening. No happening, no probability. Still, if you think about it, there is a reason why those empirically observable frequencies tend to be recurrent, and the reason is precisely that underlying capacity of the given phenomenon to take place.
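A few lines of Python can illustrate the weak law of large numbers at work: the observed relative frequency drifts towards the underlying probability as trials accumulate. This is a minimal sketch; the probability p = 0.3 is an arbitrary example, not a figure from anywhere in particular:

```python
import random

def relative_frequency(p, n_trials, seed=42):
    """Run n_trials independent Bernoulli trials with success probability p
    and return the observed relative frequency of success."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(n_trials) if rng.random() < p)
    return successes / n_trials

# As the number of trials grows, the empirical frequency approximates
# ever more closely the probability "sewn into the fabric of reality".
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(p=0.3, n_trials=n))
```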
Basic neural networks, the perceptron-type ones, experiment with weights attributed to input variables, in order to find a combination of weights which allows the perceptron to get as close as possible to a target value. You can find descriptions of that procedure in « Thinking Poisson, or ‘WTF are the other folks doing?’ », for example. Now, we can shift our perspective a little bit and assume that what we call the ‘weights’ of input variables are the probabilities that a phenomenon, denoted by the given variable, happens at all. A vector of weights attributed to input variables is a collection of probabilities. Walking down this avenue of thinking leads me precisely to Hypothesis ~A, presented a few paragraphs ago. Attitudes congruous with that very personal confession of mine, developed even more paragraphs ago, have an inherent probability of happening, and the more we experiment, the closer we can get to that probability. If someone tells to my face that I’m an idiot, I can reply that: a) any worldview has an idiotic side, no worries, and b) my particular idiocy is representative of a class of idiocies for which, in turn, the civilisation needs to figure out something clever over the next few centuries.
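That reading of perceptron weights can be made concrete with a minimal, illustrative sketch (the function name and the “true” weights 0.8 and 0.2 are my own assumptions for the example, not anything from the paper cited above). The experiment repeatedly nudges the weights of input variables towards a target, and they converge on the values that actually generate that target — which, in the spirit of Hypothesis ~A, we can read as the inherent probabilities of the phenomena the inputs stand for:

```python
import random

def train_perceptron(data, targets, epochs=100, lr=0.1, seed=0):
    """A minimal perceptron-style experiment: repeatedly adjust the
    weights of input variables so that the weighted sum of inputs
    gets as close as possible to the target value."""
    rng = random.Random(seed)
    n = len(data[0])
    weights = [rng.random() for _ in range(n)]  # initial guesses
    for _ in range(epochs):
        for x, t in zip(data, targets):
            y = sum(w * xi for w, xi in zip(weights, x))
            error = t - y
            # nudge each weight in proportion to its input and the error
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights

# Two input variables; the targets are produced by "true" weights (0.8, 0.2),
# playing the role of the inherent probabilities of two phenomena.
data = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
targets = [0.8 * a + 0.2 * b for a, b in data]
print([round(w, 2) for w in train_perceptron(data, targets)])  # → [0.8, 0.2]
```

The more the perceptron experiments, the closer its weights get to those underlying values, which is exactly the intuition of the weak law of large numbers transposed into learning.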
I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?
[1] Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2), 264-280.