That thing about experiments: you don’t really know

My editorial

One of the spots my internal bulldog keeps sniffing around is the junction of monetary systems and technological change, and one of the most interesting-smelling bones in that spot is labelled ‘cryptocurrencies and renewable energies’. There is that Estonian project called WePower, and there are those loose thoughts I have been weaving for months now about a cryptocurrency pegged to local markets of renewable energy. I named that currency ‘The Wasun’ (see, for example, ‘Conversations between the dead and the living (no candles)’ or ‘Giants invisible to muggles, or a good, solid piece of research work’). Now, I am putting together a research project concerning the broadly defined FinTech industry, and one of the threads in that scientific fabric is a possible way of experimenting with the idea of cryptocurrencies connected to the market of renewable energies.

And so I imagine the most basic experimental framework for this specific case: a trading environment designed with Blockchain technology, and in this environment, a cryptocurrency token equivalent to 1 kWh (kilowatt hour) of energy from renewable sources. Just for the sake of having fun with my own ideas, I return to that old name for the token: 1 Wasun = 1 kWh from renewable energies. In order to be scientifically honest, if I want that Wasun to be properly experimented upon, I need its dark sibling, its antithetic shadow: 1 Fossil = 1 kWh from fossil fuels. Pushing my scientific honesty even further, I introduce one more token: 1 WTH = 1 kWh from whatever source comes first, indiscriminately.
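Just to fix the vocabulary, the three tokens can be sketched as data records. This is a minimal, purely illustrative sketch; the names and structure are my own assumptions, not any existing Blockchain implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    RENEWABLE = "renewable"   # 1 Wasun = 1 kWh from renewable sources
    FOSSIL = "fossil"         # 1 Fossil = 1 kWh from fossil fuels
    ANY = "any"               # 1 WTH = 1 kWh from whatever source comes first

@dataclass
class Token:
    name: str
    source: Source
    kwh: float = 1.0  # each token represents exactly one kilowatt hour

WASUN = Token("Wasun", Source.RENEWABLE)
FOSSIL = Token("Fossil", Source.FOSSIL)
WTH = Token("WTH", Source.ANY)
```

The only substantive design decision encoded here is the peg: every token, whatever its source label, stands for exactly 1 kWh.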

Now, I do what every curator in every art exhibition fears a bit: I let people in. People can buy and sell Wasuns, Fossils, or WTH. The initial price of each will be the same, i.e. the market price of 1 kWh of electricity. The experiment has two planes: the purely economic one, involving the observation of prices and quantities, and the behavioural-anthropological one, which assumes the observation of human behaviour. We observe how the exchange rates of those three tokens change over time, together with the quantities issued and traded. Note: each of the three tokens can, technically, be traded against the other two, as well as against a reference currency: euro, dollar etc. On the behavioural plane, we observe the way participants make up their minds, and the way they pass from innovative behaviour to more and more ritualized patterns.
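The price-observation layer on the economic plane could be as simple as a board of latest traded rates, with cross rates derived through the reference currency. A hypothetical sketch (the class name, the "EUR" reference, and the starting price of 0.12 per kWh are all my assumptions):

```python
# Minimal sketch: log each trade as (base, quote, price) and read off the
# latest exchange rate for any pair, crossing through the fiat reference
# currency when no direct quote exists.
class RateBoard:
    def __init__(self):
        self.latest = {}  # (base, quote) -> last traded price

    def record_trade(self, base, quote, price):
        self.latest[(base, quote)] = price
        self.latest[(quote, base)] = 1.0 / price  # implied inverse rate

    def rate(self, base, quote):
        # direct quote if available, else cross through the fiat reference
        if (base, quote) in self.latest:
            return self.latest[(base, quote)]
        via = self.latest.get((base, "EUR")), self.latest.get(("EUR", quote))
        if None not in via:
            return via[0] * via[1]
        return None

board = RateBoard()
board.record_trade("Wasun", "EUR", 0.12)   # all three tokens start at the
board.record_trade("Fossil", "EUR", 0.12)  # same market price of 1 kWh
```

With identical starting prices, the implied Wasun/Fossil rate is 1; what the experiment observes is how far, and how fast, it drifts from 1.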

At this stage, two versions of the experiment come to my mind: market-constrained, and unconstrained. The market-constrained experiment would involve real kilowatt hours being traded in that experimental environment. At some moment (to be defined), participants could get real electricity for their Wasuns, Fossils, or WTH, or the equivalent of that electricity in a fiat currency. ‘Fiat currency’ is a name frequently given to what we call “normal money”. Fiat comes from Latin and means ‘let it be done’. In this case, the decree comes from a central bank. In this constrained version, participants in my experiment have a relatively strong motivation to make realistic decisions, and this is a plus. Just as in electricity, a plus has a sort of symmetrical minus, and the minus means that somebody has to pay those bills at the end. The more realistic the participants’ motivation, the higher the cost of the experiment to the organizing entity. In the market-unconstrained version, the tokens are purely virtual: no participant earns any claim on any kilowatt hour or on its market value. This drives down the cost of the experiment, but takes a lot of real motivation away from the participants.

Any experiment should bring useful results. Creating and maintaining an experimental environment is an effort which deserves a reward. When I say ‘useful’, one of the most immediate questions that pops up in my mind is ‘useful to whom?’. Who could derive substantial gains from well-tested solutions in this specific domain? Cryptocurrencies fall into the broad category of FinTech, and they are based on a specific technology called Blockchain. Financial institutions and providers of FinTech digital utilities are the most obvious beneficiaries of a good, solid experiment with cryptocurrencies connected to the market of renewable energies. Still, suppliers of energy, as well as suppliers of technologies for producing energy, could also have something to gain from that experiment.

An experiment should test something hard to pin down in other ways: a factor of risk, kind of uncertain in its happening, and kind of valuable in its variance. Here, really, Bob’s my uncle. The answer is simple: human behaviour is the main factor of uncertainty both in finance and in technological change, and it is a bearer of value. The most obvious experiment in this field consists in prototyping several alternative designs of a cryptocurrency attached to the market of renewable energies, and observing the users’ behaviour when they get the possibility of testing these designs. The basic characteristics of such a prototype are: a) the technological platform used to convey the whole financial scheme; b) the initial valuation of the cryptocurrency and the way it is supposed to change; c) the essential financial function of the cryptocurrency, i.e. payment (liquidity) or accumulation; d) the amount of energy, in kilowatt hours, attached to the amount of cryptocurrency issued; and, as I think about it, there would be an e) as well, namely how the given cryptocurrency is issued (mining, contract etc.) and what additional financial services are going to be attached.
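The five characteristics a) through e) can be captured as one record per prototype, so that alternative prototypes are easy to enumerate and compare. A sketch under my own assumptions; the field names and the example values (the Ethereum testnet, the 0.12 price) are illustrative, not taken from the text:

```python
from dataclasses import dataclass

@dataclass
class PrototypeSpec:
    platform: str            # (a) technological platform conveying the scheme
    initial_price: float     # (b) initial valuation of the token, e.g. in EUR
    valuation_rule: str      # (b) how the valuation is supposed to change
    function: str            # (c) "payment" (liquidity) or "accumulation"
    kwh_per_token: float     # (d) energy attached per token issued
    issuance: str            # (e) "mining", "contract", ...

# One hypothetical prototype among the several to be tested side by side
wasun_v1 = PrototypeSpec(
    platform="Ethereum testnet",
    initial_price=0.12,
    valuation_rule="floating against the kWh spot price",
    function="payment",
    kwh_per_token=1.0,
    issuance="contract",
)
```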

Testing uncertain things, in the prospect of making them more predictable, can always do with a set of sensible hypotheses. Looking for anything that can happen is interesting, but when you think about it, anything that can possibly happen is actually anything we think can happen, and so it is useful to phrase explicitly what we think. Formulating hypotheses is a smart way of doing it. I start hypothesising with something most elementary in my view: the distinction between massive, systematic absorption of innovation, on the one hand, and incidental absorption at the fringe of the social fabric, on the other. Thus, I formulate my first experimental hypothesis as a dichotomy of two claims: the absorption of any given prototype of cryptocurrency attached to the market of renewable energies is going to be [claim #1.1] massive and dominant in the behaviour of users, so as to create a structurally stable, ritualized pattern of behaviour, or [claim #1.2, antithetic] an incidental and sporadic pattern of behaviour in users, essentially unstable as a structure. As for the connection between frequency of occurrence and structural stability, you can consult what I wrote recently in ‘Fringe phenomena, which happen just sometimes’.
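One rough way of operationalizing that dichotomy, entirely of my own invention: if a large share of observed user sessions repeat an already-seen action sequence, call the absorption pattern ritualized (claim #1.1), otherwise incidental (claim #1.2). The 50% threshold is an arbitrary placeholder:

```python
from collections import Counter

def classify_absorption(sessions, threshold=0.5):
    """sessions: list of action sequences, one per observed user session."""
    counts = Counter(tuple(s) for s in sessions)
    # a session is "ritualized" if its exact action sequence recurs
    repeated = sum(c for c in counts.values() if c > 1)
    share_ritualized = repeated / len(sessions)
    return "ritualized" if share_ritualized >= threshold else "incidental"
```

Whether exact recurrence of action sequences is the right measure of ritualization is itself an empirical question the experiment would have to settle.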

That first hypothesis is something elementary regarding the absorption of any innovation whatsoever. Innovative behaviour in users is just one of the horses in the team that pulls that cryptocurrency-to-be. There is another: financial behaviour. The most elementary distinction that comes to my mind in this respect is the classical Keynesian ‘to be liquid or not to be liquid, that is the question’, or, in slightly more scientific terms, the dichotomy between liquidity and speculative accumulation. We can go and grab some cryptocurrency in order to pay with it right now, or, conversely, in order to accumulate it for unspecified, future liquidity. John Maynard Keynes used to make that distinction by referring to propensities in human behaviour. Whilst I find those Keynesian distinctions elegant and well-rounded, I find them hard to apply in practice. Propensity is something I can hardly pin down empirically: how frequently do I have to do something in order to call it a ‘propensity’? If I follow a given pattern of behaviour fifteen times out of one hundred, does it already deserve the name of ‘propensity’, or should I add some more incidences?

That graceful indefiniteness in Keynesian thought makes me look for something more precise in terms of theory, and so I turn to Milton Friedman and his quantitative monetary equilibrium, P*T = M*V, where P is the current index of prices, T stands for the volume of transactions in units of the real output of the economy, M is the monetary mass supplied, and V is the velocity of said monetary mass. If people generally use money for paying, the velocity of money, measured as V = (P*T)/M, remains fairly constant. It means, in other terms, that any change in the amount of monetary mass supplied should, logically, cause a proportional change in prices. On the other hand, when money is being hoarded, and users build speculative positions with it, the velocity of money decreases. The link between the supply of money and prices weakens. My money, in this case, is the cryptocurrency I am testing, and its nominal amount, i.e. the number of tokens issued, corresponds to the M variable. The volume T is the number of kilowatt hours of renewable energy traded for those tokens, and P stands for the (average) price of one kilowatt hour.
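A quick worked check of the quantity equation in the token economy described above (the numbers are made up for illustration):

```python
def velocity(price, volume_kwh, money_supply):
    """V = (P*T) / M: how many times the average token changes hands.

    price: average price of one kWh, expressed in tokens
    volume_kwh: number of kilowatt hours traded for tokens (T)
    money_supply: number of tokens issued (M)
    """
    return (price * volume_kwh) / money_supply

# If 10 000 issued tokens finance 40 000 kWh of trades at a price of
# 1 token per kWh, each token changed hands 4 times on average.
v = velocity(price=1.0, volume_kwh=40_000, money_supply=10_000)
```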

My second experimental hypothesis, regarding the monetary side of the thing, is, once again, antithetic. It says [claim #2.1] that the freedom of issuing a cryptocurrency attached to the market of renewable energy, combined with an unconstrained supply of said energy, is going to produce speculative behaviour, i.e. the hoarding of tokens and decreasing velocity in their circulation, without direct leverage upon the price of energy. I am grounding that claim, somewhat intuitively, both in my own research and in the article by Dirk G. Baur, Kihoon Hong and Adrian D. Lee. Antithetically, I formulate [claim #2.2], namely that the issuance of cryptocurrencies attached to the market of renewable energies is going to produce just liquidity in users, i.e. those tokens will have constant velocity, without significant speculative positions.
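The empirical test separating claim #2.1 from claim #2.2 could then amount to computing the velocity series from observed (price, volume, supply) snapshots and flagging hoarding when velocity trends downward. A sketch under my own assumptions; a simple first-versus-last comparison stands in here for a proper statistical trend test:

```python
def velocity_series(snapshots):
    """snapshots: list of (price, volume_kwh, money_supply) tuples over time."""
    return [(p * t) / m for (p, t, m) in snapshots]

def looks_like_hoarding(snapshots, tolerance=0.05):
    """Claim #2.1 signature: velocity materially lower at the end than at the start."""
    v = velocity_series(snapshots)
    return v[-1] < v[0] * (1 - tolerance)
```

Constant velocity across snapshots would instead point towards claim #2.2, i.e. a purely liquidity-driven use of the tokens.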

You can notice that when I formulate experimental hypotheses, I do so in a slightly different manner from my normal hypothesising, i.e. I use that construct of an antithetic, internally structured set of claims. This is a very intuitive approach on my part, and on the part of most human beings as a matter of fact. The habit of classifying phenomena in two antithetic categories, sometimes designated as the law of the excluded middle, is very deeply rooted in our culture. Classical, Aristotelian logic is based on this pattern (you can find a lot of interesting stuff about it in the writings of my great, late compatriot Alfred Korzybski, who famously challenged it). It is just that thing about experiments: you don’t really know what kind of results they are going to bring, and, basically, the more ambitious the scientific design of an experiment, the more surprises it produces.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon advises me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

The task of invention in an emergency

My editorial

I am in the process of making a connection between behavioural analysis and the main lines of my research this year, that is to say the business plan for investing in smart cities, on the one hand, and the educational website in social sciences, centred on learning through participation in real research, on the other hand. I start by explaining the term ‘behavioural analysis’, which seems to have its roots in the work of Burrhus Frederic Skinner, formalized, among others, in his book ‘Science and Human Behavior’ (1953[1]). B.F. Skinner’s idea was simple: behaviour and its observation are the most reliable sources of information in psychological study. What people say they think or feel is linguistic content, filtered many times through cultural schemas. Moreover, the objective study of human behaviour reveals many cases of dissonance between what people say they do and what they actually do. If I want to compare a human being with an animal in terms of psychological mechanisms, the latter will tell me nothing, for it does not speak.

The idea of observing behaviour rather than listening to what people have to say seems to be older than Skinner’s work. A British philosopher, Bernard Bosanquet, even founded a whole modernist social theory on this concept (see ‘The Philosophical Theory of The State’). Be that as it may, since Skinner, progress in neurological analysis has made it possible to study what happens inside what, for lack of a better word, we call the mind. Strictly behavioural analysis is transforming from a theoretical position into an increasingly practical tool of applied research. There are many situations where observing behaviour is the most immediate, most intuitive and, at the same time, most rewarding method for creating everyday strategies, to be applied in business plans, political campaigns etc. When, on your Twitter profile, a person you have never contacted before starts following you, while occasionally slipping you tips about nice hotels to visit or, of course, really cool shoes to buy, there is a strong chance that this person does not exist and that it is a behavioural bot created by an artificial intelligence. The artificial intelligence entities called ‘behavioural engines’ create observation entities – bots that disguise themselves as humans online – which observe our behaviour while providing us with stimuli in order to test us.

In a business plan, behavioural analysis is the equivalent of a Hugo Boss suit in a business meeting. A designer suit does not guarantee that the person wearing it is extremely intelligent, but it gives strong odds that they are intelligent enough to earn enough money to afford the suit. The behavioural analysis of a business concept does not guarantee that it will work for sure; nevertheless, it guarantees that the author of the business plan made an effective effort to understand the mechanics of the market in question. In research, the behavioural approach can serve as a practical form of Ockham’s razor: before generalizing and theorizing about systems and paradigms, we observe human behaviour, and we force ourselves to stay close to said behaviour, whatever intellectual journey we treat ourselves to.

So I had formulated my main lines of research on smart cities (see Smart cities, or rummaging in the waste heap of culture), then ventured a bit into the domain of experimentation (there, you can jump to Une boucle de rétroaction qui reviendra relativement pas cher, followed by There are many ways of having fun with that experiment), to finally start generalizing about behavioural analysis proper (Any given piece of my behaviour (yours too, by the way)). Now, I follow up on all that with the assumption that routine behaviours, highly ritualized and abundantly regulated, are the first to be modified in a profound way and, consequently, the first to lead towards significant socio-economic change. The more accidental and the less regulated a given pattern of behaviour is, the harder it is to say to what extent it can be modified.

I know that all this may seem quite abstract, and that impression is largely justified. Look: in Any given piece of my behaviour (yours too, by the way) I had traced an isoquant curve regarding human behaviour, but I do not yet know what the constant quantity on that curve could be. Yeah, when you want abstraction, type ‘Wasniewski’: I am definitely the right address to serve you ideas that are not yet fully cooked. I want to make these ideas a little more mature, like letting them go out on the town without worrying that they will attack someone. So, I design an experiment to test them. I imagine a group of people. At a pinch, I can erase their memory and place them in a post-apocalyptic city, but that is not absolutely necessary. I make them take decisions that follow this behavioural curve I have just invented: starting with routine decisions, very chummy with rules of behaviour (using urban transport); passing through long-cycle decisions (where should I organize our wedding anniversary?); and ending with hardcore stuff like a bomb alert or an unannounced evacuation. I give these people optional access to a repertoire of technologies, for example those typical of a smart city. In practice, that would most likely mean placing these people in two distinct environments, say Environment A With All Those Beautiful Technologies and Environment B Without All Those Modern Gizmos. In every decision these people take, I can observe the difference between the behaviour undertaken in the presence of the technologies under study, on the one hand, and the behaviour undertaken without those technologies, on the other.
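The two-environment comparison could be sketched as follows: for every decision type along that curve (routine, long-cycle, emergency) we log completion times in Environment A and in Environment B, then compare the means per decision type. This is a hypothetical sketch of my own; the metric (completion time in minutes) and the example figures are assumptions:

```python
from statistics import mean

def compare_environments(log_a, log_b):
    """Each log maps a decision type to a list of completion times (minutes).

    Returns, per decision type, mean(A) - mean(B); a negative value means
    Environment A (with smart-city technologies) was faster on average.
    """
    out = {}
    for decision in log_a.keys() & log_b.keys():
        out[decision] = mean(log_a[decision]) - mean(log_b[decision])
    return out

diff = compare_environments(
    {"routine": [5, 6], "emergency": [30, 34]},
    {"routine": [9, 11], "emergency": [31, 33]},
)
```

In this made-up example the technologies help a lot with the routine decision and hardly at all with the emergency, which is exactly the kind of contrast the experiment is after.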

A good experiment requires good measurements. When I refer to human behaviour, I can measure the outcome as well as the process of behaviour itself. As regards the outcome, I start by thinking as the economist I am, and I can ask, for example, how much time my two groups will need to create a coherent market structure around the community’s resources. There is the assumption that if a market is possible at all (if it is not excluded by the use of force, for example), the first market to appear will be that of the resource most vital to the community, the second market to form will be that of the resource coming second in terms of importance, and so on. As I turn to the process of behaviour, I can start with a qualitative observation: what exactly are the actions taken by these people to reach a given outcome? What distinct sequences can I identify in this respect? I can measure execution time, the use of accessible resources, the number of people engaged in the process, etc.
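The assumption stated above, that markets form in the order of a resource’s vitality to the community, can be phrased as a trivially testable prediction: rank resources by an assumed importance score and read off the expected order of market formation. A sketch; the resource names and scores are illustrative assumptions:

```python
def expected_market_order(importance):
    """importance: dict mapping a resource to a vitality score (higher = more vital).

    Returns the predicted order in which markets for those resources form.
    """
    return sorted(importance, key=importance.get, reverse=True)

order = expected_market_order({"water": 10, "shelter": 8, "transport": 5})
```

The experiment then measures the *observed* order and timing of market formation in each environment and compares them against this prediction.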

Now I return to the logic I had already expressed in Une boucle de rétroaction qui reviendra relativement pas cher, followed by There are many ways of having fun with that experiment: I give engineers the possibility of observing my two groups of people, and I give them the task of designing the technologies they judge best suited to those people’s needs, based on the behavioural study. I thus have technologies designed for people who have no smart-city technologies, on the one hand, and technologies made for users who already have different versions of them, on the other. Will there be significant differences between these two groups of technologies? What will happen if, instead of giving the task of invention to human engineers, I give it to artificial intelligence? What will be the difference between designing technologies for routine, regulated use and designing technologies for emergencies?

I keep supplying you with good science, almost new, just a little dented in the design process. I want to use crowdfunding to give myself a financial base in this effort. Here is the hyperlink to my Patreon account. If you feel ready to co-finance my project, you can register as my patron. If you do so, I will be grateful if you tell me two important things: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my work?

[1] Skinner, B. F. (1953). Science and human behavior. Simon and Schuster

My most fundamental piece of theory

My editorial

I am returning to what seems to be my most fundamental piece of theory, namely the equilibrium between population, food and energy, or N = A*(E/N)^µ*(F/N)^(1-µ), where N represents the headcount of the local population, E/N stands for the consumption of energy per capita, F/N is the intake of food per capita, whilst ‘A’ and ‘µ’ are parameters. I am taking on two cases: the United Kingdom, and Saudi Arabia. In the United Kingdom, the issue of population, and more specifically that of immigration, has recently become a much disputed one, in the context of what is commonly called ‘Brexit’. In Table 1, below, you can find, first of all, the two types of ‘model population’ computed with the equation specified in my article ‘Settlement by Energy – Can Renewable Energies Sustain Our Civilisation?’. Thus, columns [2] and [3] provide that model size of population, in millions of people, computed on the grounds of a constant alimentary intake equal to F/N = 1 219 100 kcal per person per year. More specifically, this particular variable was used in mega-calories per person per year, thus F/N = 1219,1 Mcal per person per year. We have here one of the best fed populations on Earth. This factor is combined with the current consumption of energy per capita, in tons of oil equivalent per person per year, as published by the World Bank. Column [2] provides the model size of population calculated as model(N) = (energy consumption per person per year)^0,52 * (1219,1)^0,48, the second exponent being 1 – 0,52 = 0,48. For the sake of presentational convenience, let’s call it ‘energy-based population’. Column [3] takes on the same logic, but introduces, as an estimate of the (E/N) variable, just the consumption of renewable energy per person per year, once again in tons of oil equivalent, using the equation model(N) = (renewable energy consumption per person per year)^0,3 * (1219,1)^0,7, with 1 – 0,3 = 0,7. This type of model population is going to be labelled ‘renewables-based population’.
Column [4] provides the real, current headcount of the British population each year, and column [5] gives an estimate of net migration (immigration minus emigration), in a snapshot every five years. The ‘energy-based population’ starts, in 1990, slightly above the real one, with 3,597 tons of oil equivalent being consumed, on average, by one resident per year, in the beautiful homeland of Shakespeare and The Beatles. That model population follows an ascending trend, just as the real one, until 2003. Over that period of time, i.e. between 1990 and the end of 2003, energy use per capita had been climbing, in a slightly hesitant manner, up to 3,732 tons of oil equivalent. The real headcount increased, during that period, by 2,58 million people, whilst the ‘energy-based’, model population climbed by 1,14 million. Starting from 2004, the British population starts saving energy. In 2013, its average consumption was 2,978 tons of oil equivalent per capita. The real population has increased, since the last checkpoint in 2003, by 5,25 million people. Yet the ‘energy-based’ population has decreased by 7,1 million people, in line with its underlying equation. Saving on the final consumption of energy, which really took place in the United Kingdom, reduces the theoretically sustainable demographic base. The reader could say: this is not an equilibrium, if the model population matches the actual one in just a few years over the whole period from 1990 through 2013. Still, if you compute the average proportion (i.e. the average over 1990 – 2013) between the real population and the ‘energy-based population’, as ‘the real one divided by the energy-based one’, that average proportion is equal to A = 1,028560563. Quite a close match in the long run, isn’t it? This close match decomposes into two distinct phases. The first phase is that of increasing, energy-based sustainability of the population. The second one is a process of growing discrepancy between the real headcount, on the one hand, and what is sustainable in ‘energy-based’ terms, on the other.
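The model populations described above can be reconstructed directly from the equation, with F/N held constant at 1219,1 Mcal per person per year. A sketch using the 1990 UK figures quoted in the text (energy in tons of oil equivalent per capita; output in millions of people):

```python
FOOD_INTAKE = 1219.1  # F/N, mega-calories per person per year, held constant

def model_population(energy_per_capita, mu):
    """N = (E/N)^mu * (F/N)^(1-mu), in millions of people."""
    return (energy_per_capita ** mu) * (FOOD_INTAKE ** (1 - mu))

# UK 1990: 3.597 toe of total energy per capita, mu = 0.52
energy_based_1990 = model_population(3.597, mu=0.52)       # ~59 million

# UK 1990: 0.02 toe of renewable energy per capita, mu = 0.3
renewables_based_1990 = model_population(0.02, mu=0.3)     # ~45 million
```

The two calls reproduce the pattern described in the text: the energy-based model starts slightly above the real 1990 headcount, while the renewables-based one starts some ten million below it.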

Now, let’s have a look at column [3], thus at the model population based just on the consumption of renewable energy, alimentary intake held constant. This model population follows a different trend. In 1990, when my observation starts, the average resident of the United Kingdom consumes just 0,02 tons of oil equivalent in renewable energy per year. At the time, the ‘renewables-based population’ is way below the real one, more specifically 10,37 million below. At my second checkpoint, in 2003, consumption of renewable energy in the UK doubles, up to 0,04 tons of oil equivalent per year per person. The ‘renewables-based’ population, in 2003, with 53,4 million people, is still below the real one by 6,45 million. In 2013, at my final checkpoint, the situation reverses. With 0,18 tons of oil equivalent per capita per year, in terms of renewable energy consumed, the ‘renewables-based’, model population of Britain makes 85,83 million, 21,5 million more than the real one. Interestingly, the ‘renewables-based’, model population started soaring around 2007 – 2008, precisely when the global consumption of renewable energies started to grow really fast. Once again, some readers could have the legitimate doubt whether a model yielding that much difference is any kind of equilibrium. I had the same doubts when doing these maths, and the result surprised me. Over the whole period of 1990 – 2013, the ratio of the real population divided by the ‘renewables-based’ one was A = 1,028817158, i.e. up to the third decimal point it is the same proportion as that between the real population and the ‘energy-based’ one. I know that, at this point, it would be very easy to enter the tempting world of metaphysics. If the proportion between my femur and my humerus is X, and I find a piece of driftwood which, compared to my femur, makes the same proportion, that piece of wood can easily become the fossilized bone of my distant ancestor etc. What holds me (relatively) close to real life is the fact that the recurrent proportion in question is the outcome of two equations with different input data and different values in parameters, and still with the same essential structure. This, in turn, makes me think that what I have found are two processes which make some kind of undertow in the country under scrutiny, i.e. the United Kingdom in this precise occurrence. Being more and more profuse in the overall consumption of energy made the sustainable demographic base of the UK swell, up to 2003, and then getting meaner on energy per capita contributed to making this demographic base less and less sustainable. In parallel, the systematic increase in the consumption of renewable energies consistently pumped up the demographic sustainability of the UK. Why am I talking about two distinct processes? Well, saving energy per capita means, essentially, more efficiency in using engines of all kinds, as well as high-power electronics (like big servers). On the other hand, shifting towards renewable energies happens, respectively, one step and two steps upstream in the chain of transformation. This type of change pertains essentially to trading combustion engines for electric ones, and to switching the generation of electricity from fossil fuels to wind, water, sun etc.

At this point, my theoretical stance fundamentally differs from what the reader could find, for example, with the Club of Rome (see, for example: Meadows et al. 1972[1]). I develop a theoretical approach in which we, humans, are inherently prone to maximize our total intake of energy from the environment. Those local equilibriums between population, food and energy mean that any such local population can be represented as a super-organism absorbing energy, like one of those Australian saltwater crocodiles, which grow up to the limits offered by their habitat, and there is no question of stopping before reaching those limits. The otherwise highly respectable intellectual stance of the Club of Rome amounts to saying that we have to save energy in order to survive. I say that if this is the only way for us, humans, to survive, we can just as well start packing. The simple, straightforward saving of energy is simply not what we do. You could ask a white shark to turn vegan. Guess the result. On the other hand, what we can do is to change our technological base so as to get the same or a greater amount of directly consumable energy (motor power, heat, and functionality in our electronics) out of a less burdening basket of primary energies. The reader could object: ‘But the average resident of the United Kingdom did save energy between 2003 and 2013’. Yes, they did, and their sustainable demographic base shrank accordingly. The robustness of any reasoning about demographics can be verified with data on net migration. Whatever I could calculate as the ‘demographic base’ of a country, the net inflow (or net outflow) of people in a given time and place is a sure indicator of how attractive said place is. Column [5] in Table 1 provides the data published by the World Bank. These are snapshots, taken every five years: 1992, 1997, 2002, 2007, and 2012. At each of these checkpoints, net migration is way above the net increase in population. It means that immigrants are filling a space left by the otherwise shrinking domestic population. The place is becoming so attractive for newcomers that a demographic crowding-out effect can be noticed.

As we move to the right, in table 1, column [6] introduces the ratio of fixed capital stock per one resident patent application. The reader can notice an almost continuous growth in this variable between 1990 and 2013. In terms of the theoretical stance I am developing in my research, that growth means an almost continuous change in the evolutionary function of selection between the incoming, new technologies. We can see a case of fixed capital accumulating faster than the capacity to create patentable invention. The female capitalist structures in the economy of United Kingdom are systematically increasing their capacity to absorb and recombine inventions. That means stronger incentives to invest in the development of new, technologically advanced businesses (the female, capitalistic function), which, in turn, creates an absorptive process: the capitalist structures are, in a sense, hungry for innovation. As the process unfolds, the growing, average amount of fixed assets per one patent application alleviates the pressure, on each individual invention, to be the toughest and the meanest dog in the pack. This, in turn, can be interpreted as lesser a pressure towards hierarchy-forming, in patentable invention, and stronger a pressure towards networking between inventions. One more step to the right side of table 1 brings into our focus the data on aggregate depreciation in fixed assets, as a fraction of the Gross Domestic Product; this is column [7]. We can observe some sort of waving cycle there: increase between 1990 and 1995, then a swing down the scale, between 1996 and 2003, just to give rise another surge, between 2004 and 2013. Growing values in the ratio of physical capital per one patent application seem to produce a cyclical stir in the depreciation of fixed assets. 
It is reasonable to assume that the pace of physical wear and tear is pretty constant over time, and that the changing burden of amortizing fixed assets comes from moral obsolescence, thus from the pace of technological change. That pace of obsolescence, although displaying a tendency to cyclical change, follows an overall ascending trend. The more capital per one patent application, thus the less hierarchy and the more networking among patentable inventions, the greater the burden of technological change on current aggregate income. A last step to the right, in table 1, leads to column [8], which provides information about the supply of money in the British economy, as a % of the GDP. Another wavy cycle can be noticed, which eventually leads to a very high supply of money, and a very low velocity of that money. A quick pace of technological change brings about the necessity, in the monetary system, to produce a growing number of alternative algorithms of response. The period between 1990 and 2013 shows quite well how monetary systems can, very literally, learn to respond. At first, between 1990 and 1993, the monetary system responds to an accelerating obsolescence in established technologies by increasing the velocity of money. Starting from 1994, a different mechanism turns on: instead of increasing the velocity of circulation, the monetary system just accumulates monetary balances. It is accumulating monetary resources in reserve, or, along the lines of Keynesian theory, it is accumulating speculative positions. In the presence of increasing uncertainty as to the actual lifecycle of the average technology, we build up the capacity to react pretty quickly (money allows such quick reaction) to any further technological change.

Table 1 – Selected data regarding United Kingdom

Year | Model population, millions, based on energy consumption in general | Model population, millions, based on the consumption of renewable energy | Real population, millions | Net migration, headcount | Capital stock per one patent application, at current PPPs (in mil. 2011 US$) | Aggregate depreciation of fixed assets, as % of the GDP | Supply of broad money, as % of the GDP
[1] | [2] | [3] | [4] | [5] | [6] | [7] | [8]
1990 58,94 46,89 57,26   185,48 0,116 0,85
1991 59,88 46,37 57,42   191,39 0,124 0,821
1992 59,68 51,00 57,58 205443 201,54 0,13 0,565
1993 59,92 49,89 57,74   213,04 0,135 0,561
1994 60,08 54,27 57,90   232,57 0,145 0,574
1995 60,05 54,87 58,08   246,71 0,148 0,615
1996 61,30 53,62 58,26   270,95 0,147 0,66
1997 60,32 54,51 58,46 498998 281,41 0,139 0,788
1998 60,54 54,62 58,66   261,31 0,133 0,912
1999 60,51 53,28 58,87   236,04 0,124 0,914
2000 60,53 53,51 59,08   232,87 0,116 0,959
2001 60,52 51,98 59,30   242,77 0,113 1,006
2002 59,69 53,64 59,55 968350 251,42 0,109 1,01
2003 60,07 53,40 59,85   260,60 0,107 1,046
2004 59,76 56,35 60,21   302,57 0,109 1,094
2005 59,69 59,43 60,65   368,38 0,114 1,178
2006 58,95 61,36 61,15   407,57 0,119 1,274
2007 57,60 63,74 61,69 2030075 433,89 0,124 1,403
2008 56,89 68,72 62,22   512,68 0,137 1,617
2009 54,96 71,41 62,72   565,24 0,156 1,664
2010 55,68 76,37 63,16   643,84 0,173 1,672
2011 53,32 78,96 63,57   635,67 0,167 1,543
2012 53,89 81,20 63,96 990000 690,89 0,174 1,512
2013 53,42 85,83 64,33   777,86 0,178 1,486

Source: World Bank, Penn Tables 9.0

The case of the United Kingdom is that of a relatively well-fed society, which increases its demographic sustainability by shifting its technological base towards renewable energies. Now, we can have a look at a completely different socio-economic environment: Saudi Arabia. Saudi Arabia is one of those countries which seem to present a huge potential for socio-economic change, on condition of increasing the use of renewable energies. In terms of the evolutionary selection function regarding new technologies, Saudi Arabia is a land of peace: the ratio of physical capital per one resident patent application is counted in tens of billions of US dollars. Still, there seems to be more and more agitation backstage: this ratio, although very high, had been divided by seven between 1990 and 2013. There is a sneaky snake in that garden of Eden. On top of that, Saudi Arabia is one of those interesting societies with just a slight food deficit per capita: enough to make people alert, not enough to push them into the apathy of deep, chronic hunger. The average alimentary intake per capita, in Saudi Arabia, from 1990 through 2013, was F/N = 1087,7 megacalories per year. Table 2, below, provides the same type of quantitative profiling regarding Saudi Arabia as has been presented for the United Kingdom. Whilst in the latter case we deal with a population that increased its headcount by some 11% between 1990 and 2013, Saudi Arabia presents a completely different calibre of demographic change: plus 83% over the same period. With this magnitude of demographic growth, the social structure in 2013 was likely to be very different from that in 1990. Interestingly, the final consumption of energy per capita per year had increased by almost the same gradient as population, i.e. by 79,5%.
Even more interestingly, the ‘energy-based’ model population in Saudi Arabia, calculated with the empirical function model(N) = (Energy consumption per capita)^0,72 * (1087,7)^(1 – 0,72) = (Energy consumption per capita)^0,72 * (1087,7)^0,28, never reaches the magnitude of the real population, although, in the long run, it matches the real population up to the scale factor A = 1,026310231. The ‘energy-based’ population grows, over the whole window of observation, by just 52,4%. It is a good example of how the alimentary base of a society works. In comparison to the United Kingdom, this base is just 10,8% thinner, and, in spite of almost doubling its absorption of non-edible energy, Saudi Arabia has trouble developing a sustainable demographic base.

Saudi Arabia is one of those countries where the absorption of renewable energies per capita had been consistently shrinking over our window of observation, from 1,35 kilograms of oil equivalent per year per capita in 1990, to barely 0,41 kilograms in 2013. The second version of the model population, the ‘renewables-based’ one, computed as model(N) = (Renewable energy consumption per capita)^0,27 * (1087,7)^(1 – 0,27) = (Renewable energy consumption per capita)^0,27 * (1087,7)^0,73, had shrunk from 27,64 million in 1990 to 19,99 million in 2013. Let’s rephrase it, in order to grasp the phenomenon under scrutiny. With the amount of energy that the average Saudi resident consumed per capita in 1990, the country had a sustainable demographic base. Still, with the long-run alimentary intake at 1087,7 megacalories per year per person, the present population, exceeding 30 million people, is hardly sustainable, even with the soaring consumption of energy. Going back to 1990 once again, the amount of renewable energies consumed at the time, other variables held constant, could sustain a population much larger than the 16,89 million recorded in 1990. With the present consumption of renewables, the present population of Saudi Arabia looks anything but sustainable. As we have a look at net migration in Saudi Arabia (column [5] in table 2), a puzzling tendency appears: as long as the local population was robustly sustained by its energy consumption, the balance of migration was negative. When the local population started to come off its sustainable base, the balance of migration turned positive. Illogical? Maybe, and this is precisely why it is an intellectual challenge, and why I am trying to sort out my first puzzlement regarding local equilibria between population, food, and energy. A quick comparison with the United Kingdom shows two completely different paradigms of social change.
In the United Kingdom, the abundance of food allowed a smooth shift towards renewable energies, so as to keep the place highly attractive in spite of saving on overall energy consumption. In Saudi Arabia, with just slightly lower an alimentary intake, and highly problematic sustainability in population, domestic demographic growth stays way above the net migratory inflow.
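The two model-population formulas can be sketched as one small function. The exponents (0,72 for the energy-based model, 0,27 for the renewables-based one), the long-run food intake of 1087,7 megacalories, and the scale factor A come from the text; the per-capita energy input used below is purely hypothetical, just to illustrate the shape of the function.

```python
def model_population(energy_per_capita, b, food_mcal=1087.7, scale=1.0):
    """model(N) = A * (energy per capita)^b * (food intake)^(1 - b).

    b = 0.72 for the energy-based model, b = 0.27 for the renewables-based
    one; scale is the matching factor A (e.g. 1.026310231 for Saudi Arabia).
    The energy figure passed in here is hypothetical, not the actual series.
    """
    return scale * (energy_per_capita ** b) * (food_mcal ** (1.0 - b))

# Doubling per-capita energy multiplies the model population by 2^0.72,
# i.e. demographic capacity grows less than proportionally to energy.
ratio = model_population(2000.0, b=0.72) / model_population(1000.0, b=0.72)
```

The sub-unitary exponent on energy is what makes the alimentary base bind: past a point, pumping more non-edible energy into the system buys less and less additional demographic capacity.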

Let’s have a look at technological change in Saudi Arabia. First, by having a look at column [7] in table 2, we can see that the relative burden of depreciation, i.e. of obsolescence in established technologies, is close to what is observable in the United Kingdom. Thus, the basic pace of technological change can be assumed to be nearly identical. Still, the economic system reacts to that change in exactly the opposite way to that observable in the United Kingdom. At the starting point of our observation, in 1990, the Saudi economy is extremely abundant in physical capital, when denominated in resident patent applications (column [6]), and rather mean on money. In terms of my theory, it means very little competition between patentable inventions, very little hierarchy among them, and very few algorithms of response in the monetary system. As time passes, and as technological change speeds up (the share of depreciation in the GDP grows), the amount of physical capital per one patent application dramatically shrinks. It means an increased effort in research and development, and quickly growing competition, as well as a quickly forming hierarchy, among all those new inventions. Still, by comparison to the British monetary system, the Saudi one is far from being profuse. Not much is happening in terms of algorithms of response, or in terms of speculative positions, as regards the supply of money. In the presence of very nearly the same pace of technological change, and a similar gradient of change in that pace, those two economic systems – the United Kingdom and Saudi Arabia – develop completely different responses. The United Kingdom gives some slack to its hierarchy of inventions, and to the competition between them, and adds a lot of liquidity to its monetary system. Saudi Arabia spurs competition between inventions, and barely adds to the supply of money.
Of course, a lot of factors make the difference between those two societies: religion, institutions, historical track record, natural resources, climate etc. Still, in terms of the theory I am forming, one difference is sharp as a razor: the difference in food base. The United Kingdom has a secure, slightly surplus alimentary regime, whilst Saudi Arabia is just below satiety. Can this single factor be the ultimate distinction, explaining all the other economic differences? My empirical findings strongly suggest that the answer is ‘yes’, and what I am trying to do now is to go deeper into that distinction.

Table 2 – Selected data regarding Saudi Arabia

Year | Model population, millions, based on energy consumption in general | Model population, millions, based on the consumption of renewable energy | Real population, millions | Net migration, headcount | Capital stock per one patent application, at current PPPs (in mil. 2011 US$) | Aggregate depreciation of fixed assets, as % of the GDP | Supply of broad money, as % of the GDP
[1] | [2] | [3] | [4] | [5] | [6] | [7] | [8]
1990  17,62  27,64  16,89  75 767,78  0,13  0,43
1991  19,21  28,29  17,40    45 179,69  0,13  0,44
1992  20,66  25,37  17,89 -110000  58 512,83  0,12  0,43
1993  20,80  23,10  18,37    60 291,97  0,13  0,46
1994  21,16  27,96  18,85  39 192,54  0,14  0,46
1995  20,86  26,55  19,33    47 465,66  0,14  0,45
1996  21,52  20,58  19,81  49 895,93  0,13  0,44
1997  20,43  19,83  20,30 -350000  24 239,44  0,13  0,44
1998  21,01  20,44  20,83  31 785,39  0,14  0,52
1999  20,90  20,22  21,39    20 603,36  0,13  0,50
2000  21,17  20,14  22,01  20 231,88  0,12  0,45
2001  21,13  20,89  22,67    34 545,11  0,13  0,48
2002  22,27  20,79  23,36 730000  26 811,98  0,13  0,54
2003  21,98  20,44  24,06    30 481,69  0,13  0,51
2004  22,50  20,36  24,75  22 634,09  0,13  0,50
2005  22,41  20,07  25,42    16 913,35  0,12  0,45
2006  23,67  20,78  26,08  19 550,01  0,13  0,47
2007  23,79  20,30  26,74 995000  20 713,69  0,15  0,51
2008  25,28  20,24  27,41  n.a.  0,14  0,48
2009  25,98  20,29  28,09    n.a.  0,18  0,65
2010  27,57  20,09  28,79  12 904,10  0,17  0,55
2011  26,31  20,22  29,50    12 386,98  0,16  0,49
2012  28,14  20,42  30,20 1590000  n.a.  0,16  0,52
2013  26,85  19,99  30,89    10 065,90  0,18  0,56

Source: World Bank, Penn Tables 9.0

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can consider going to my Patreon page and becoming my patron. If you decide to do so, I would be grateful if you could tell me two things that Patreon suggests I ask you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


Any given piece of my behaviour (yours too, by the way)

My editorial

These last weeks I have been very much involved with designing experimental environments. I want to develop a business plan for investing in smart cities, and a good business plan could do with an understanding of how human behaviour can change under various experimental conditions. Thus, how are people likely to behave when living in a smart city? How is their behaviour going to differ from that of people living in a classical, non-smart (dumb?) city? Behaviour is what we do in response to stimuli from our environment. OK, so now I begin by defining what I do. When I take on defining something apparently so bloody complex that my head spins at the very thought of defining it, the first thing I do is to structure what I do, i.e. I distinguish pieces in the whole.

What kind of categories can I distinguish in what I do? One of the first distinctions I can come up with is by the degree of recurrence. There are things I do so regularly that I don’t even always notice I do them. When repeating those actions, I practically fly on automatic pilot. Walking is one of them. I breathe quite systematically as well, come to think of it. I drive my car almost every day, along mostly repetitive itineraries. As I sail onto the waters of the incidental, and the shore of mindless recurrence progressively vanishes behind me, I cross that ring of islands, like the rings of Saturn, where big things happen just sometimes, yet they happen in many, periodically spaced sometimes. These are summer holidays, Christmases, or wedding anniversaries. As I sail past the reef of those reassuringly festive occasions, I enter the hardly charted waters of whatever happens next: these are things that I subjectively perceive as absolutely uncertain, and which still happen with a logic I cannot grasp as they happen.

So, here we are with one distinction inside our behaviour: the steady stream of routine, decorated with the regular patches of periodical, ritualized, big events, and all that occasionally visited by the hurricanes of the uncertain. When people live in an urban environment (in any environment, as a matter of fact), their living is composed of three behavioural types: routines, cyclical actions, and reactions to what they perceive as unpredictable. If living in smart cities is about to change our urban lives, it should have some kind of impact upon our behaviour, thus on routines, cyclical events and emergencies. How can it happen and how can we experiment with it?

Another intuitive distinction about our behaviour is that between freedom and constraint. I perceive some of my actions as taken and done out of my sheer free will, whilst I see some others as done under significant constraint. I know that the very concept of free will is arguable, yet I decided to rely on my intuition, and this is not a good moment to back off. Thus, I rely and I distinguish. There are actions which I perceive as undertaken and carried out freely. Trying to be logical, now, I interpret that feeling of freedom as being my own experience of choice. In some situations, I am experiencing quite a broad repertoire of alternative paths to take in my action. A lot of alternatives means that I don’t have enough information to figure out one best way of doing things, and I am entertaining myself with my own feeling of uncertainty. Freedom is connected to uncertainty, but not just to uncertainty. If I can do things in many alternative ways, it means nobody tells me to do those things in one, precise way. There are no ready-made recipes for the situation, or relevant social norms, in my local culture. On the other hand, my highly constrained behaviour corresponds to situations tightly regulated by social norms.

When I have two different distinctions, I can make a third one, two-dimensional, this time. In the most obvious and the least elaborate form it is a table, as shown below:

 | Free behaviour (no detailed social norms) | Constrained behaviour (normatively regulated)
Routine behaviour | Modality #1 | Modality #2
Cyclical behaviour | Modality #3 | Modality #4
Emergency behaviour | Modality #5 | Modality #6

A normal, fully sane person would leave that table as it is, but I am a scientist, and I have inside me that curious ape, that happy bulldog, and the austere monk. I just need some maths to have something to rummage in. I just have to convert my table into a manifold, with those two nice axes. Maybe I could even trace an indifference curve in it, who knows? Anyway, I need to convert modalities into numbers. The kind of numbers I see here are probabilities. The head of the table, namely the distinction between freedom and constraint, can be translated into the probability that any given piece of my behaviour (yours too, by the way) is regulated by an unequivocal social norm. It is more fun than you think, as a matter of fact, as we have lots of situations where there are many social norms involved and they are kind of conflicting. I am driving, in order to pick my kid up from school, and suddenly I run over a dog. I should stop and give emergency care to the dog, but then I will not pick up my child from school on time. Of course, at the end of the day, we can convert all such dilemmas into the Hamletic “to be or not to be”, which really narrows down the scope of available options. Still, real life is complicated.

Anyway, I am passing now to scaling numerically the side of my table, as a probability, and I am bumping against a problem: if I translate the recurrence of anything into a probability, it would be the probability of happening within a definite period of time. Thus, it would be a binomial distribution of probability. I take my period of time, like one month, for example, and I just stuff each occurrence in my behaviour into one of the two bags: “yes, it happens at least once in one month” or “no, it doesn’t”. The binomial distribution is fascinating for studying the issue of structural stability (see Fringe phenomena, which happen just sometimes), but in a numerical manifold it gives just two discrete classes, which is not much of a numerical approach, really. I have to figure out something else, and that something else is simply recurrence, understood as the cycle of happening, like every day, every three days, every millennium etc.
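The two ways of quantifying recurrence can be sketched side by side; the daily probability used below is a purely hypothetical figure, just to show the difference between the binomial view and the cycle view.

```python
def prob_at_least_once(p_daily, days=30):
    # Binomial view: the chance the behaviour shows up at least once
    # in the period, i.e. the complement of "never happens in `days` days"
    return 1.0 - (1.0 - p_daily) ** days

def expected_cycle(p_daily):
    # Recurrence view: the average number of days between occurrences
    return 1.0 / p_daily

p = 0.1  # hypothetical: the action happens on any given day with probability 0.1
monthly = prob_at_least_once(p)  # collapses everything into one yes/no number
cycle = expected_cycle(p)        # keeps recurrence on a continuous scale
```

The binomial view squashes every behaviour into two discrete classes per period, whereas the cycle keeps recurrence continuous, which is what the manifold needs.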

And so I come up with that nice behavioural graph in PDF, available from the library of my blog. See? Didn’t I tell you I would make an indifference curve? This is the red one in the graph. It is convex, with its tails nicely, asymptotically gliding along the axes of reference, so it is bound to be an indifference curve, or an isoquant. The only problem is that I haven’t figured out, yet, what kind of constant quantity it measures. It will come to me, no worries. Still, for the moment, what comes is the idea that on the two tails of this curve I have somehow opposite patterns of behaviour, especially as regards their modifiability. On the bottom right tail, where those ritualized routines dwell, I can modify human behaviour simply by modifying one simple rule, or just a few of them. From now on, I tell those people (or myself) to do things in way B, instead of way A, and Bob’s your uncle: with any luck, and with a little help from Mr Selten and Mr Hammerstein (1994[1]), those people (or I) will soon forget that the rule has ever been changed. On the opposite, upper left tail of that curve, I have things happening really just sometimes, and virtually no rules to regulate human behaviour. How the hell can I modify behavioural patterns in these whereabouts? Honestly, nothing sensible comes to my mind.

Smart cities mean lots of digital technologies. I have just watched a short video, featuring a robot (well, a pair of automated arms fixed to the structure of a bar), which can prepare hundreds of meals, like a professional cook, imitating the movements of a human. Looks a bit scary, I can tell you, but this is what a smart city can look like: some repetitive jobs done by robots. Besides robots, what can we have in a smart city, in terms of smart technologies? GPS tracking, real-time functional optimization (sounds complicated, but this is what you have, for example, in those escalators which suddenly speed up when you step onto them), personal identification, quick interpersonal communication, and the Internet of things (an escalator can send emails to a cooling pump, which, in turn, can make friends, via social media, among the local smart energy grids). These technologies can take the functional form of: robots (something moving), mobile apps in a phone, in a pair of glasses etc. (something that makes people and things move), and infrastructure (something that definitely shouldn’t move). In their smart form, these things can optimize energy, and learn. We usually call the latter capacity Artificial Intelligence. I think that it is precisely the learning part that can affect our lives the most, in a smart city. We, humans, are kind of historically used to learning faster than our environment. We are proudly accustomed to figuring out things about things before those things change. In a smart city, we have things figuring out things about us, and at an accelerating pace.

In one of my previous updates (see Smart cities, or rummaging in the waste heap of culture ) I made those four hypotheses about smart cities. Good, now I can reappraise those four hypotheses in terms of human behaviour. We behave the way we behave because we have learnt to do so. In a smart city, we will be behaving in the presence of technologies which can possibly learn faster and better than us. Now, keeping in mind that table and that graph above, how can coexistence with something possibly smarter than us modify our patterns of behaviour? Following the logic I have just unfolded, modification of behaviour can start in the bottom right area of my graph, or with Modality #2 in the tabular form, and then it could possibly move, kind of, along the red curve in the graph. Thus, what I previously wrote about new patterns observable in handling money, in consuming energy, in rearranging the geography of our habitat, and finally in shaping our social hierarchies, means that smart cities, and their inherently intelligent technologies, can impact our behaviour first and most of all by creating and enforcing new rules for highly recurrent, ritualized actions in our life.


[1] Hammerstein, P., & Selten, R. (1994). Game theory and evolutionary biology. Handbook of game theory with economic applications, 2, 929-993.

Splitting the hair in four, then taking a cube root of it

My editorial

I am continuing with that thing of a scientific experiment – or rather an experimental environment – for testing not only FinTech products, but also for experimenting with the way of creating them. I have already given two descriptions of it, from different angles; on that topic you can consult « Une boucle de rétroaction qui reviendra relativement pas cher » as well as « There are many ways of having fun with that experiment ». What interests me at present is a simulation of that experiment: I am trying to imagine how it could unfold. So I start as a FinTech user. I make a sequence of financial decisions. Practical question: what exactly do financial decisions mean? What exactly does “a sequence” mean? What? Am I splitting the hair in four? Well, yes. It is my job as a researcher to split the hair in four, then to take a cube root of it, which I then associate, in a vector, with a projection of my head circumference into the hexadecimal system. It yields interesting observations, it makes life more interesting and, with a bit of luck, it allows inventing a new shampoo.

My financial decisions, then. At the most elementary level, I define my available financial capital. This is where experimentation can start: how do I define it? Example: at a given moment t0 I have a disposable income of €1200 per month, plus savings in the form of monetary deposits of €10 000, plus a portfolio of stocks worth €6000, and on top of all that I have a house with a gross value of €300 000, carrying a bank mortgage of €10 000, payable in monthly instalments of €150. Question: if I am asked to make financial decisions, across the whole possible spectrum, starting with current consumption, through classical saving, up to targeted investment, what will be the total scope of those decisions? Let me explain the gist of the question. In this precise case, my equity, at the moment of having my whole monthly income really available (thus before spending it, but after paying the instalment of my mortgage), is equal to E = €1200 + €10 000 + €6000 + €300 000 – €10 000 – €150 = €307 050. This is my capital on the passive side of the balance sheet. If any entity whatsoever offers me financial services, FinTech or traditional, that entity will earn its profit by charging a margin on my financial operations. The more capital I set in financial motion, the more occasions there are to glean a small percentage.
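The equity arithmetic above can be laid out explicitly; all the figures come straight from the example.

```python
# Balance-sheet equity at t0, using the figures from the example above
income      = 1_200    # monthly disposable income, EUR
savings     = 10_000   # liquid savings, EUR
stocks      = 6_000    # stock portfolio, EUR
house_gross = 300_000  # gross value of the house, EUR
mortgage    = 10_000   # outstanding mortgage, EUR
instalment  = 150      # monthly mortgage instalment, EUR

# Equity E: everything owned, net of the debt and of the instalment just paid
equity = income + savings + stocks + house_gross - mortgage - instalment
```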

Here, I find it opportune to break a cliché about finance. There is that common opinion that financiers are like vampires, sucking the last drop of blood from our veins. Well, that is what a stupid vampire would do. A clever vampire draws just a few drops and, at best, the prey notices nothing. The art of finance consists in living on really tight margins. It is not for nothing that financiers invented the basis point: one hundredth of a percentage point. It is in basis points that transactional margins are calculated in finance.

The financial motion I am talking about is a set of transactions. In my experiment, I observe myself. Let’s say there are three of me: a spendthrift with no brake pedal, a bon vivant with common sense, and finally a thrifty Harpagon (you know, the Miser in Molière, fifth century behind you). Usually, prodigality is opposed to avarice in relation to disposable income. Here, I propose a slightly different frame of reference. Being a spendthrift means setting in motion virtually the entirety of the capital on one’s balance sheet. The prodigal me would spend the whole of my capital and, in addition, would buy a flat to rent out, for €150 000, using an additional mortgage of the same amount, secured for €75 000 on the value of that very flat and, for the remaining €75 000, on the house I already own. The cash flow created by the prodigal me, over one year, would be roughly: 12*€1200 of current income spent, including the twelve monthly instalments on the mortgage already in place, plus €10 000 of savings spent, plus €6000 collected from selling the stocks and then spent, plus the disbursement of the new €150 000 loan onto my account, plus the €150 000 payment for the purchase of that new flat. Balance: €330 400. The bon-vivant-with-common-sense me spends without accumulating and without borrowing, so, in principle, he keeps my balance sheet in a constant state, with no changes in the capital accounts. That gives 12*€1200 = €14 400. As for my alternative Harpagon, he pumps up the balance sheet by saving €200 each month, which gives a cash flow made of annual expenses of 12*€1000 = €12 000.

Three different behavioural patterns give three cash flows which, from a financier’s point of view, open onto different markets. Current spending does not contain much magic for an ace of finance. To the bon-vivant me, a FinTech engineer can give a simple payment application, charging a margin of 25 basis points (0,25%), thus a total of 0,25% * €14 400 = €36. The same goes for Harpagon (0,25% * €12 000 = €30) and for the prodigal (0,25% * €30 400 = €76). Harpagon and the bon vivant have savings to manage. The bon vivant keeps his savings constant: €10 000 of liquidity on a savings account plus €6000 in stocks equals €16 000, constant, which he can be offered to place in an investment fund. You can check for yourselves that margins on investment are much more varied and potentially higher than those on current payments. With a good investment product and good marketing, one can even think of a 2% margin on the capital engaged. In the bon vivant’s case, that would be 2% * €16 000 = €320.

Harpagon adds to it €200/€1200 = 0,166666667 of his current income, so if he is offered an investment fund with a guaranteed constant return of 4%, he will accumulate, each year, at least 0,166666667 * 4% = 0,67% of his initial savings capital. Consequently, our initial 2% margin on €16 000 = €320 could well grow, after five years, for example, up to 2% * (1,0067)^5 * €16 000 = €330,8.

Loan management is a separate matter. The prodigal me, by contracting a €150 000 mortgage, gives the occasion to charge some 4% on that amount, i.e. €6000, each year. In addition, the one-off payment of €150 000 at the moment of buying the flat gives rise to a one-off margin of 0,25% * €150 000 = €375.

Let’s recap. The prodigal me generates an annual cash flow of €330 400, on which the FinTech can charge a total margin of €76 + €6000 + €375 = €6 451 in the first year, and €36 + €6000 = €6 036 in each subsequent year. The bon-vivant me is a cash flow of €14 400 and an annual margin of €36 + €320 = €356. The Harpagon me generates a slightly lower cash flow (€12 000 per year), but the margin he lets be created on his personal finance is €30 + €320 = €350, plus some €2 which add to that margin every year as savings accumulate.
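The first-year margin arithmetic for the three profiles can be reproduced in a few lines; the rates (25 basis points on payment flows, 2% on invested capital, 4% on the loan) are those used in the example.

```python
PAYMENT_MARGIN = 0.0025  # 25 basis points on payment flows
INVEST_MARGIN  = 0.02    # 2% on capital placed in an investment fund
LOAN_RATE      = 0.04    # 4% interest on the mortgage

# Prodigal, first year: EUR 30,400 of payments (income + savings + stock
# proceeds), plus interest on the EUR 150,000 loan, plus the one-off
# fee on the EUR 150,000 property purchase
prodigal_year1 = (PAYMENT_MARGIN * 30_400
                  + LOAN_RATE * 150_000
                  + PAYMENT_MARGIN * 150_000)

# Bon vivant: spends current income, keeps EUR 16,000 invested
bon_vivant = PAYMENT_MARGIN * 14_400 + INVEST_MARGIN * 16_000

# Harpagon: spends EUR 12,000 a year, keeps EUR 16,000 invested
harpagon = PAYMENT_MARGIN * 12_000 + INVEST_MARGIN * 16_000
```

Laying it out this way makes the general rule visible at a glance: the loan-driven movement of capital accounts dwarfs what can be earned on current payment flows.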

Three different patterns of behaviour give three different cash flows and three distinct types of opportunities for FinTech products. The margin derived from each client depends on his behavioural profile, but also on the repertoire of FinTech utilities on offer. General rule: the movement of capital accounts gives more occasions to earn a transactional margin than the simple flow of current income. The most precious client, for FinTech products, is a financially active one.

Those of you who have been kind enough to follow my activity as a blogger over the last year have probably seen that my goal is to create good-quality science, new or nearly so. On my way towards creating a paid educational website, I am passing through the stage of crowdfunding. Here is the hyperlink to my account on Patreon. If you feel ready to co-finance my project, you can register as my patron. If you do so, I will be grateful if you tell me two important things: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my project of creating an educational website?

There are many ways of having fun with that experiment

My editorial

I am designing an experimental environment for products based on digital technologies. I am coalescing my thoughts around the type of digital products I have been particularly focused on these last weeks, namely FinTech. I started sketching the contours of that experimental environment in my last update in French (see Une boucle de rétroaction qui reviendra relativement pas cher ). Now, I am rephrasing the idea in English and giving it a bit of an extra spin. There are two groups, the geeks and the ordinaries: I mean a group of k software engineers – or k teams of them, the unit is to be settled later – and a group of n users, allegedly prone to being more or less enthusiastic about FinTech products. The first round of the experiment starts as the users are asked to make a set of financial decisions in a controlled environment. By controlled environment I mean a standardized budget, e.g. €10 000, and a standardized set of possible uses for that small fortune. It would be good to have a pretty wide range, as uses come, covering typical saving (holding liquid capital for future, unspecified uses), investment (buying future possible gains with the present allocation of liquid capital), and typical consumption (e.g. sex, drugs, and rock'n roll).

The users make their financial decisions in that controlled environment. The software engineers are then presented with various types of data about those decisions. 'Various' can range from simple reports like '50% of the users decided to invest not less than 40% of their budget into the Smartest President's next business', through more and more elaborate data on individual decisions, taking a slight turn by a detailed database of those individual decisions, all the way to the data from behavioural observation of the users (i.e. sequence of clicks, sequencing of the decision-making process, speed and pace of the corresponding sequence, eye-tracking, pulse, breathing, don't-put-that-thermometer-where-we-can-both-regret-it etc.). Yes, you guessed correctly: we have just been experimenting with the users, and now we are experimenting with the software engineers. What type and what amount of data will actually be useful for them? If they can choose the type and the scope of information provided about the users, what is their choice going to be? From then on, what is their response going to be? 'Cause now, the software engineers are to present prototypes of FinTech products, as well adapted as possible to what they know about the users.

Here’s the little caveat: the users are not supposed to tell the engineers what they want. There is just objectified observation. Why? Well, what I want is to simulate, in that experimental environment, the real working of a market, only faster. In real markets of standardized digital products, engineers seldom speak in person to the end users. They have reports, and they base their invention on them. In my experimental environment, each engineer can pick and choose the parcel of information for takeaway, and then calibrate their engineering response accordingly.

In that first round of the experiment, I arrange an interaction between the users of FinTech products and the engineers supposed to create those products. At this stage, the experiment is already informative at many levels. At the first and most basic level, it tells us about the patterns of financial decisions observable in users, and about the way they make those decisions. One level higher, I encompass both the data on users and the data on the engineering response, and what I get is knowledge about the interactive engineering process itself. In the presence of the given set of data on users, their decisions, and their decision making, how quickly did the engineers respond by producing prototypes of FinTech products? What is the typical pattern of utility in those products, and what are the idiosyncrasies in individually created FinTech solutions? I go one level higher in the generality of observation, and I am connecting the type and scope of data on users with the pattern of engineering response. Does a richer set of information on the users' behaviour contribute to creating more sophisticated FinTech products, i.e. did those engineers who chose to work with more extensive information about users create something clearly sticking out of the crowd, as FinTech solutions come? Maybe there is a golden path of wisdom, as regards the amount of data on users, somewhere between a cursory report about what percentage of them chose to buy government bonds, and an extensive report on how their body temperature changed during the decision-making process.
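That 'parcel of information for takeaway' can be pictured as one experimental log served at different levels of detail. A small sketch, with invented record fields and figures:

```python
# The same decision log, served to engineers at three levels of detail.
# Records and field names are hypothetical placeholders.

from collections import Counter

decisions = [
    {"user": 1, "choice": "bonds",  "share": 0.4, "clicks": 12, "seconds": 95},
    {"user": 2, "choice": "stocks", "share": 0.6, "clicks": 31, "seconds": 210},
    {"user": 3, "choice": "bonds",  "share": 0.5, "clicks": 9,  "seconds": 60},
]

def report(log, level):
    """level 1: aggregate shares of choices;
    level 2: individual decisions, stripped of behavioural trace;
    level 3: the full behavioural record (clicks, timing, ...)."""
    if level == 1:
        counts = Counter(d["choice"] for d in log)
        return {choice: n / len(log) for choice, n in counts.items()}
    if level == 2:
        return [{"user": d["user"], "choice": d["choice"], "share": d["share"]}
                for d in log]
    return log
```

An engineer picking `level=1` sees only that two thirds of users chose bonds; at `level=3` the whole click-and-timing trace is on the table.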

Besides the amount of data collected about the users' behaviour, there are other dimensions to that experiment. The specific profiling of users is, of course, one of them. How does the whole process work with users of different age, economic status, sex, education, professional walk of life and whatnot? Another dimension is the relative complexity of the users' behaviour, on the one hand, and the complexity in engineering response, on the other hand. Up to a point, the more people we have coming up with a certain type of idea, the more distinct ideas crystallize. In a sample of 100 users we can probably come up with a greater number of distinct patterns in the making of financial decisions than in a sample of 10 users. Still, diversity in users is not just their sheer number, it is also the relative disparity of their characteristics. If, in a sample of users, we put people with a high school degree, and those with their college diploma hung on the wall, we get a certain diversity in decision-making. Now, we add those with Master's degrees. What has changed? Did the addition of one more level in education change the diversity in the making of financial decisions? Now, we can experiment at the same level with other characteristics, and each time the question is the same: how does the relative complexity in the population of users reflect in the relative diversity of the observable patterns in their financial decisions, and in the process of making those decisions?

Just to return to real life, which is sometimes useful in science, I remind you: in any type of business, the supplier has to deal with various degrees of complexity in their target market. Say I am a software company from Romania, I have come up with a super-cool payment platform in the Romanian market, and now I am trying to see the big picture, i.e. the Chinese market. I will be interacting with a much bigger, and potentially more diverse, population of customers. How should I change my engineering processes in order to cope efficiently with the situation? This is precisely the kind of question that my experimental environment can give an answer to.

We go one step further, and we can test the link between the above-mentioned complexity in the set of users' characteristics, and the complexity of the solutions prototyped by engineers. If I add one more social stratum to the sample of users, does a constant population of engineers come up with more distinct prototypes of FinTech products? There is another path going (somewhere) from this point: if I keep the complexity observable in users constant, and I change the diversity in engineers, how will it affect the creation of prototypes? In the presence of a constant sample of 100 users, with their descriptive data, will 50 engineers, each working independently, generate a greater diversity in prototypes than 10 engineers? Once again, is there any golden path of optimization in that proportion? What if, besides varying the number of engineers, I shift between various types of diversity in their characteristics, e.g. the number of years spent in the professional engineering of software?

In all that experimenting, and shifting of diversities, an economic equilibrium is timidly raising its head. What does an economic equilibrium's head look like? Well, you can take the Marshallian equilibrium as a good example: two convex curves, crossing at one point, so it is like a guinea pig with a pair of pointy ears, pointing slightly outwards. This could be something like a rabbit, as well. Anyway, each given set of characteristics in the sample of users, combined with a set of characteristics in the sample of engineers, produces an outcome in the form of a set of prototypes of FinTech products. A set of FinTech products, in turn, generates a certain stream of financial gains for users – like the money they save on paying for electricity or the money they gain from an investment portfolio – and a stream of revenue for the supplier of FinTech. The latter earns money in two basic patterns: either by collecting a small margin on each transaction performed by the user, or by charging the user a fixed monthly fee for access to the given utility. I suppose that a well-rounded prototype of a FinTech utility should contain those components.

Thus, in my experiment, I have the following, compound variables, modifiable in my experimental environment:

  1. a vector UR = {ur1, ur2, …, urm} of m characteristics observable in users;
  2. a vector EN = {en1, en2, …, enk} of k characteristics observable in engineers;
  3. a vector EN(UR) = {en(ur)1, en(ur)2,…, en(ur)l} of l users’ characteristics actually used by the engineers in their work; in the experimental environment this vector can be either exogenously constrained or freely shaped by the engineers themselves;
  4. a vector FU(UR) = {fu(ur)1, fu(ur)2, …, fu(ur)p} of p functional utilities available in those FinTech prototypes; you know, the number of clicks (or screen touches) you need to transfer all your money to the Cayman Islands etc.;
  5. an aggregate stream PF(UR) of users’ payoffs from those FinTech prototypes presented; it can be further decomposed as a vector, namely PF(UR) = {pf(ur)1, pf(ur)2, …, pf(ur)z} of z users’ payoffs, but in my basic scheme I want to use it as an aggregate stream;
  6. an aggregate stream P*Q of revenues that the suppliers of those prototyped FinTech products can derive from their exploitation; once again, P*Q can be also represented as a vector P*Q(UR) = {p*q(ur)1, p*q(ur)2, …, p*q(ur)ß} of ß specific streams per user or per utility, but in my basic model I want to keep it simple;
  7. an aggregate stream EC of engineering and maintenance costs, in suppliers, connected to the basket of FinTech prototypes proposed; again, translation into a vector is possible, although not always necessary;
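For bookkeeping, these seven compound variables can be written down as a single experiment configuration. A sketch only; the field names simply mirror the vectors above, and the sample values are invented:

```python
# One configuration of the experimental environment, mirroring the seven
# compound variables listed in the text. Values are placeholders.

from dataclasses import dataclass

@dataclass
class ExperimentState:
    UR: list        # m characteristics observable in users
    EN: list        # k characteristics observable in engineers
    EN_UR: list     # l users' characteristics the engineers actually use
    FU_UR: list     # p functional utilities in the FinTech prototypes
    PF_UR: float    # aggregate stream of users' payoffs
    PQ: float       # aggregate revenue stream P*Q for suppliers
    EC: float       # aggregate engineering and maintenance costs

    def supplier_profit(self):
        """The aggregate supplier profit [P*Q - EC] from the text."""
        return self.PQ - self.EC
```

One experimental run then amounts to fixing `UR` and `EN`, letting the engineers choose `EN_UR`, and recording the resulting `FU_UR`, `PF_UR`, `PQ` and `EC`.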

Now, my basic economic equilibrium, if I keep it more or less Marshallian (you know, the rabbit), would be something like: [P*Q – EC] -> PF(UR). It means that there could be some combination of my experimental variables which makes the aggregate profits, on the part of suppliers, tend to equality with the aggregate payoffs in users. That Marshallian approach has some catches in it. It supposes the existence of an equilibrium price in FinTech products, i.e. a price that clears inventories and leaves no customer with unsatisfied demand for FinTech utilities. This is tricky. On the one hand, in finance, we assume the existence of at least the most fundamental equilibrium prices, like the interest rate or the discount rate. This was observed already in the 18th century, by Adam Smith, among others, but, interestingly enough, not earlier. When I read that book by Jacques Savary, published in 1675, entitled ‘Le Parfait Négociant ou Instruction Générale Pour Ce Qui Regarde Le Commerce’ (in English: ‘The Perfect Merchant or General Instructions as Regards Commerce’), equilibrium rates in finance tend to be treated more as a comfortable exception than as the general rule of the market. Still, here, in FinTech, we are at the junction of finance and digital technologies. In IT, people don’t really like talking about equilibrium prices, and I can hardly blame them, ‘cause the actual diversity in IT utilities is so great that we can throw the assumption of homogenous supply through the (metaphorical) window as well.

Anyway, I can optimize my experimental environment regarding the Marshallian equilibrium, or any other economic equilibrium, actually. I can make a function [P*Q – EC] = f[UR], i.e. a function linking the aggregate profits in the suppliers of FinTech products with the observable complexity in the population of users. I can do the same with the complexity observable in the population of engineers. There are many ways of having fun with that experiment, at this stage, and this is just the first stage. There is more to come. The engineers come up with a range of prototypes in FinTech, and they present them for testing. The users test those prototypes, in an experimental environment, and once again we apply the same philosophy: the users are supposed to be informative by their behaviour, not so much to talk about their impressions.
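The function [P*Q – EC] = f[UR] can be illustrated with a toy search: scan a grid of user-sample complexities and keep the one where supplier profit comes closest to the users' aggregate payoffs. The two response functions below are invented placeholders, not estimates of anything:

```python
# Toy search for the equilibrium [P*Q - EC] -> PF(UR). Both response
# curves are assumed shapes, chosen only to make the example concrete.

def supplier_profit(complexity):
    """Assumed: revenue grows with user-sample complexity,
    engineering costs grow faster past a point."""
    return 50 * complexity - 2 * complexity ** 2

def user_payoffs(complexity):
    """Assumed: users' payoffs grow with diminishing returns."""
    return 30 * complexity - complexity ** 2

def equilibrium_complexity(grid):
    """The grid point where the two aggregate streams are closest."""
    return min(grid, key=lambda c: abs(supplier_profit(c) - user_payoffs(c)))
```

With these placeholder curves, the two streams meet at a complexity of 20, where both equal 200; a different pair of curves would of course move that crossing point.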

An interesting question arises, namely that of learning. If I am one of those users, and I am supposed to test those prototypes devilishly, I mean like 6 prototypes tested over 6 days, with 6 cups of coffee in terms of neuro-logistic support, I can’t help learning. At my 3rd prototype tested, I will be using, consciously or not, whatever, the experience accumulated with the testing of prototypes 1 and 2. Thus, what I will be really testing, will not be each prototype separately, but a sequence of prototypes, associated with a process of learning. Someone could say that learning creates a harmful bias (believe me, there are people who think so), and so the edge of learning can be blunted in two basic ways.

Firstly, the experimental environment can take away from me any feedback on the financial decisions I make with a given FinTech utility. When we do something about money, we like seeing the outcomes; it is kind of attached to money in its cultural code. You take away the feedback on results, and, by the same means, you take away the incentive to learn. We could think about such an experimental environment, and yet, I have like a little voice in the back of my head (no, I am perfectly sane) saying that financial decisions without feedback are not really financial decisions. That would be an experiment biased by the absence of bias. Secondly, I can make each user test just one prototype. There is no opportunity for learning in this case. Still, we then need to split the sample of users into as many subsamples as we have prototypes to test, and each has to be somehow representative.
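The second way of blunting learning, one prototype per user, needs each subsample to be roughly representative. One simple way to get there is a systematic split: sort the users by the stratifying characteristic and deal them out round-robin, so each subsample spans the whole range. The user records below are hypothetical:

```python
# Systematic (round-robin) split of a user sample into one subsample per
# prototype, so each subsample covers the full range of a characteristic.

def stratified_split(users, n_prototypes, key):
    """Sort by the stratifying characteristic `key`, then deal users out
    round-robin across the n_prototypes groups."""
    groups = [[] for _ in range(n_prototypes)]
    for i, user in enumerate(sorted(users, key=key)):
        groups[i % n_prototypes].append(user)
    return groups

# Example: 12 users, stratified by age, split across 3 prototypes
users = [{"id": i, "age": 20 + i} for i in range(12)]
groups = stratified_split(users, 3, key=lambda u: u["age"])
```

Each of the three groups ends up with four users whose ages are spread across the whole 20-31 range, rather than one group getting all the youngest users.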

Now, do I really want to take learning out of the equation? Do I expect the users out there, in the big, wild market of FinTech, using those apps without learning anything? I just think it could be safer to assume that people learn things. What we should be really testing, at this stage of the experiment, could be precisely the phenomenon of learning, with respect to FinTech.

I am working on making science fun and fruitful, and I intend to make it a business of mine. I am doing my best to stay consistent in documenting my research in a hopefully interesting form. Right now, I am at the stage of crowdfunding. You can consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting two things that Patreon suggests I ask you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Une boucle de rétroaction qui reviendra relativement pas cher (a feedback loop that will come relatively cheap)

My editorial

In my update of January 31st (see ‘Smart cities, or rummaging in the waste heap of culture’), I advanced a little with that business plan for investing in smart cities. So here I am, building my business plan around four working hypotheses. One, a smart city will need more money supply to finance its functioning than an ‘ordinary’ city, in order to provide for the uncertainty resulting from an accelerated depreciation of the technologies installed on site. Two, the construction and updating of a smart city’s infrastructure will be associated, at least periodically, with a higher consumption of energy per capita, once again due to the necessity of frequently replacing technologies with very short life cycles. Three, the development of smart cities will lead local populations to accumulate around sources of energy. Four, due to the presence of more money supply per unit of real product, smart cities will engender more hierarchical social structures than ‘ordinary’ cities, with a growing distance between the base and the top of the social pyramid.

Two remarks impose themselves. Firstly, why on earth formulate hypotheses in order to build a business plan? After all, this is business, not science, and I could well be mixing genres. My philosophy is simple on this point: any business plan worthy of the name should contain a thorough analysis of the environment, and the analysis in question can only gain in value if we formulate well-targeted hypotheses. Moreover, any investment involving a rapid technological race also involves a lot of uncertainty, which I can reduce by formulating alternative scenarios of events. Hypotheses come in handy when I want scenarios: scenario A implies the truth of said hypotheses, while scenario B assumes they are false.

Secondly, I am racking my brain over what to call urban structures that are not smart cities. By pure opposition, I could refer to ‘dumb’ cities, although that could sound dumb in itself. For the moment, I am therefore using the term ‘ordinary’ city, but I would be grateful for any linguistic suggestion on this subject.

Right, I come to the essence of this update: how can we experiment with the technologies that make the backbone of a smart city? It is a practical question. If I want to convince a company to invest in a smart city project, it is an investment in technology, which, in turn, is characterised by a certain life cycle. It is a quite elementary component of strategy for anyone engaging in a project with a high cadence of innovation: if today I invest 5 million euros in a smart urban transport technology, when will the moment come to replace it, and how should I direct my research so as to be ready in time with the new version? If I want to convince someone to invest in such a project, a path of experimentation for developing those technologies would certainly be attractive.

A scientific experiment is a reduced model of reality, where I test hypotheses that I cannot test otherwise. Usually these are hypotheses about the exact unfolding of a sequence of events. Those hypotheses are like zooms on fragments of the general hypotheses of a research project. What I am trying to do, at this precise moment, consists in translating my general hypotheses into lines of experimentation and, in parallel, in finding a practical application of those experiments for the development of smart city technologies. My first general hypothesis says that the development of a smart city will create an increased demand for money supply. I have two immediate, practical associations of ideas. First, FinTech: that general hypothesis translates, at the business level, into the assertion that FinTech projects will develop faster in the framework of smart cities than elsewhere. Next, the balance sheet: the second practical translation supposes that companies engaged in smart city projects will have significantly more liquid assets than companies staying outside such projects, other factors held constant.

If I engage in this line of experimentation, it would be good to have a reduced model of the financial transactions taking place in a smart city environment. I imagine different social environments: an urban environment with a lot of digital technologies connected to the functioning of urban infrastructure, then an environment still big-urban but clearly less infused with ‘smart city’ solutions, and alongside these two, a typically provincial environment, for example that of a small town. In each of these environments, I observe the development of local FinTech micromarkets as well as the balance sheets of the companies active in those same local markets. Question: what would be the difference between the experimental environment, on the one hand, and simple market observation, on the other? I mean that, at a pinch, I can observe the demand for FinTech services through that tool called a ‘behavioural engine’, which observes the behaviour of Internet users. Likewise, a simple periodic accounting audit can give me the required information about balance sheets. In both cases, there is no imperative need to set up a line of experimentation.

The difference between a scientific experiment and simple observation can be of a double nature. Firstly, an experiment can be more efficient than observation insofar as it provides information faster and/or at a lower cost. An experiment can thus be a precious shortcut with respect to real life. Secondly, an experiment can allow me to impose on my experimental object conditions more extreme than those of real life. On the efficiency side, an experiment on the demand for FinTech services could focus on the mechanism of choice on the part of consumers when they are confronted with a moment of decision. Another idea is a type of experiment well known, by the way, in the world of digital technologies: confronting a group of users with a group of engineers. The users impose on the engineers a constant effort of innovation by making choices in sequence. Each choice made by each of the users is a piece of information for each of the engineers. A given engineer reacts to the flow of information with a flow of work that results in a new choice presented to the users. Their individual choices sum up into a new flow of information for the engineers, and so on.

Let’s say a group of 100 people is confronted with a situation where they have a sum of €1 000 to allocate among types of saving and/or investment plus, for example, an option to buy in advance, at an attractive price, a holiday trip or a package of theatre tickets. The consumers make their choice, allocating their respective €1 000 among the accessible options, and now the engineers have the task of developing and proposing to each of the consumers a FinTech digital utility as well adapted as possible to the needs deduced from the previous choices. Each engineer proposes to the consumers his original solution, and each consumer chooses, among all the solutions presented, the one or ones that suit him best. Then, or in parallel, we observe the way consumers use the proposed solutions: the time spent in front of the screen, the number of clicks, the sequence of actions, the number of missed steps followed by steps back, etc. The engineers have the possibility to observe the consumers’ behaviour, or else they receive reports on it. Their task then consists in optimising the selected solutions and presenting to the consumers optimised prototypes of the second generation, and so on. Of course, instead of putting individual engineers in competition, one can establish rival working groups.
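The allocation round in that loop can be sketched as two small functions: one user allocating the budget among options, and the aggregation step that turns individual choices into the flow of information the engineers receive. All option names and weights below are placeholders:

```python
# One round of the user-engineer feedback loop: users allocate a budget,
# engineers receive the aggregated choices. Names and weights are invented.

def allocate(budget, weights):
    """One user's allocation of the budget across options,
    proportional to the user's preference weights."""
    total = sum(weights.values())
    return {option: budget * w / total for option, w in weights.items()}

def aggregate(allocations):
    """The report the engineers receive: allocations summed per option."""
    totals = {}
    for allocation in allocations:
        for option, amount in allocation.items():
            totals[option] = totals.get(option, 0.0) + amount
    return totals
```

In a multi-round version, the engineers would react to `aggregate(...)` with new prototypes, the users would allocate again, and so on, round after round.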

This type of experiment is, as far as I know, often practised in the software industry. The original idea consists, this time, in adding a meta level of experimentation: observing and documenting the interaction between the consumers and the engineers in order to draw conclusions about the process of innovation itself. An experiment like that would be an accelerated version of a market. The interaction between users and engineers, which in the conditions of a real market for digital products can unfold over years, is accelerated and takes, for example, weeks. The practical difference between the experimental environment and the real world consists in the absence of the barriers in the exchange of information usually encountered in market practice. If I am a really tenacious experimenter (really mean?), I can simulate what happens if I add those barriers into the process. I can, for example, add an intermediary rapporteur between the consumers and the engineers, and test the impact of his presence on the unfolding of the innovation process. That rapporteur does not even have to be a human: it can be a piece of software that filters information in a biased way. With a bit of cunning, I can use this experiment to optimise social structures for innovation.

When I think about it, this philosophy of experimentation can be applied everywhere in information technology, and not only in FinTech. The essential question this type of experimentation answers is ‘How much time do we really need to develop new solutions, and how can that time be modified by the presence or absence of distorting factors?’. Damn, I am starting to see things more broadly. That can almost hurt, sometimes. I can use this experimental framework for any technology where, firstly, it is possible to put users and engineers in a feedback loop, and secondly, where said loop will come relatively cheap.

Those of you who have been kind enough to follow my activity as a blogger over the last year have probably seen that my goal is to create good-quality science, new or nearly so. On my way towards creating a paid educational website, I am passing through the stage of crowdfunding. Here is the hyperlink to my account on Patreon. If you feel ready to co-finance my project, you can register as my patron. If you do so, I will be grateful if you tell me two important things: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my project of creating an educational website?

Smart cities, or rummaging in the waste heap of culture

My editorial

I am trying to put together my four big ideas. I mean, I think they are big. I feel small when I consider them. Anyway, they are: smart cities, FinTech, renewable energies, and collective intelligence. I am putting them together in the framework of a business plan. The business concept I am entertaining, and which, let’s face it, makes a piece of entertainment for my internal curious ape, is the following: investing in the development of a smart city, with a strong component of renewable energies supplanting fossil fuels, and financing this development partly or totally with FinTech tools, i.e. mostly with something like a cryptocurrency as well as with a local platform for financial transactions. The whole thing is supposed to have collective intelligence, i.e. the efficiency in using resources should increase over time, on the condition that some institutions of collective life emerge in that smart city. Sounds incredible, doesn’t it? It doesn’t? Right, maybe I should explain it a little bit.

A smart city is defined by the extensive use of digital technologies in order to optimize the local use of resources. Digital technologies age relatively quickly, as compared to the technologies that make up the ‘hard’ urban infrastructure. If, in a piece of urban infrastructure, we have an amount KH of capital invested in the hard infrastructure, and an amount KS invested in smart technologies with a strong digital component, the rate of depreciation D(KH)/KH of the capital invested in the former will be much lower than the rate D(KS)/KS in the latter:


[D(KS)/ KS] > [D(KH)/ KH]

and the ‘>’ in this case really means business.

The rate of depreciation in any technology depends on the pace at which new technologies come into the game, thus on the pace of research and development. The ‘depends’, here, works in a self-reinforcing loop: the faster my technologies age, the more research I do to replace them with new ones, and so my next technologies age even faster, and so I put metaphorical ginger in the metaphorical ass of my research lab and I come up with even more advanced technologies at an even faster pace, and so the loop spirals up. One day, in the future, as I come back home from work, the technology embodied in my apartment will be one generation more advanced than the one I left there in the morning. I will have a subscription with a technology change company, which, for a monthly lump fee, will assure smooth technological change in my place. Analytically, it means that the residual difference in the rates of depreciation, i.e. [D(KS)/KS] – [D(KH)/KH], will widen.
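The inequality above can be put in illustrative numbers; the figures below are assumptions for the sake of the example, not data:

```python
# Illustrative comparison of depreciation rates: a slowly depreciating hard
# infrastructure versus a fast-ageing smart (digital) layer. All figures
# are assumed for the example.

def depreciation_rate(D, K):
    """Annual depreciation D relative to the capital invested K."""
    return D / K

K_H, D_H = 10_000_000.0, 250_000.0   # hard infrastructure: 2.5% a year
K_S, D_S = 2_000_000.0, 500_000.0    # smart layer: 25% a year

# The residual difference [D(KS)/KS] - [D(KH)/KH] from the text
gap = depreciation_rate(D_S, K_S) - depreciation_rate(D_H, K_H)
```

With these assumed figures the smart layer depreciates ten times faster than the hard one, and it is this gap, not the absolute amounts, that the hypothesis expects to widen over time.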

On the grounds of the research I did in 2017, I can stake three hypotheses as to the development of smart cities. Hypothesis #1 says that the relative infusion of urban infrastructure with advanced and quickly ageing technologies will generate increasing amounts of highly liquid assets, monetary balances included, in the aggregate balance sheets of smart cities (see ‘Financial Equilibrium in the Presence of Technological Change’, Journal of Economics Library, Volume 4 (2), June 2017, pp. 160–171, and ‘Technological Change as a Monetary Phenomenon’, Economics World, May-June 2018, Vol. 6, No. 3, pp. 203–216). This, in turn, means that the smarter the city, the more financial assets it will need, kind of around and at hand, in order to function smoothly as a social structure.

On the other hand, in my hypothesis #2, I claim that the relatively fast pace of technological change associated with smart cities will pump up the use of energy per capita, but the reciprocal push, namely from energy-intensity to innovation-intensity, will be much weaker, and this particular loop is likely to stabilize itself relatively quickly in some sort of energy-innovation standstill (see ‘Technological Change as Intelligent, Energy-Maximizing Adaptation’, Journal of Economic and Social Thought, Volume 4 (3), September 2017). Mind you, I am a bit less definitive on this one than on hypothesis #1. This is something I found out to exist, in human civilisation, as a statistically significant correlation. Yet, in the precise case of smart cities, I still have to put my finger on the exact phenomena likely to correspond to the hypothesis. Intuitively, I can see some kind of social change. The very transformation of an ordinary (i.e. dumb) urban infrastructure into a smart one means, initially, lots of construction and engineering work being done, just to put the new infrastructure in place. That means additional consumption of energy. The advanced technologies embodied in the tissues of smart cities will tend to stay advanced for a consistently shortening amount of time, and will be replaced, more and more frequently, with consecutive generations of technological youth. All that process will result in the consumption of energy spiralling up in the particular field of technological change itself. Still, my research suggests some kind of standstill, in that particular respect, coming into place quite quickly. I am thinking about our basic triad in energy consumption. If we imagined our total consumption of energy, I mean as a civilisation, as a round cake, one third of that cake would correspond to household consumption, one third to transportation, and the remaining third to overall industrial activity.
With the pattern of technological change I have just sketched regarding smart cities, the cake would shift somewhat towards industrial activity, especially as said activity should, technically, contribute to energy efficiency in households and in transport. I can roughly assume that the spiral of more energy being consumed in the process of changing to more energy-efficient technologies can find some kind of standstill in the proportion between that particular consumption of energy, on the one hand, and household & transport use, on the other. I mean, scraping the bottom of the energy barrel just in order to install consecutive generations of smart technologies is the kind of strategy that can quickly turn dumb.

Anyway, the development of smart cities, as I see it, is likely to disrupt the geography of energy consumption in the overall spatial structure of human settlement. Smart cities, although energy-smart, are likely to need, in the long run, more energy to run. Yet, I am focusing on another phenomenon now. Following in the footsteps of Paul Krugman (see Krugman 1991[1]; Krugman 1998[2]), and on the grounds of my own research (see 'Settlement by energy – Can Renewable Energies Sustain Our Civilisation?', International Journal of Energy and Environmental Research, Vol. 5, No. 3, pp. 1-18), I am formulating hypothesis #3: if the financial loop named in hypothesis #1 and the engineering loop from hypothesis #2 come together, the development of smart cities will create a different geography of human settlement. Places which turn into smart (and continuously smarter) cities will attract people at a faster pace than places with a relatively weaker drive towards getting smarter. Still, that change in the geography of our civilisation will be quite idiosyncratic. My own research (the link above) suggests that countries differ strongly in the relative importance of, respectively, access to food and access to energy in the shaping of social geography. Some of those local idiosyncrasies can come as quite a bit of a surprise. Bulgaria or Estonia, for example, are likely to rebuild their urban tissue on the grounds of local access to energy. People will flock around watermills, solar panels, maybe around cold fusion. On the other hand, in Germany, Iran or Mexico, where my research indicates more importance attached to food, the new geography of smart human settlement is likely to gravitate towards highly efficient farming places.

Now, there is another thing I am just putting my finger on, not even enough to call it a hypothesis. Here is the thing: money gets hoarded faster and more easily than fixed assets. We can observe that the growing monetization of the global economy (more money being supplied per unit of real output) is correlated with increasing social inequalities. If, in a smart and ever smarter city, more financial assets are around, it is likely to create a steeper social hierarchy. In those smart cities, the distance from the bottom to the top of the local social hierarchy is likely to be greater than in other places. I know, I know, it does not exactly sound politically correct. Smart cities are supposed to be egalitarian and make us live happily ever after. Still, my internal curious ape is what it is, i.e. a nearly pathologically frantic piece of mental activity in me, and it just can't help rummaging in the waste heap of culture. And you probably know that thing about waste heaps: people tend to throw things there that they wouldn't show to friends who drop by.

I am working on making science fun and fruitful, and I intend to make it a business of mine. I am doing my best to stay consistent in documenting my research in a hopefully interesting form. Right now, I am at the stage of crowdfunding. You can consider going to my Patreon page and becoming my patron. If you decide to, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Krugman, P., 1991, Increasing Returns and Economic Geography, The Journal of Political Economy, Volume 99, Issue 3 (Jun. 1991), pp. 483 – 499

[2] Krugman, P., 1998, What’s New About The New Economic Geography?, Oxford Review of Economic Policy, vol. 14, no. 2, pp. 7 – 17

The cousin of my school friend's brother-in-law

My editorial

I am returning to the topic of smart cities, which I had already started developing in my January 11 update, entitled 'My individual square of land, 9 meters on 9'. I am now setting out to look for sources of data I consider vital for the development of smart cities: population density, demographic growth, and real-estate prices. I have already done a bit of research on the real-estate side regarding the city of Lyon, and the local project named 'Confluence', as well as Vienna, the capital of Austria, both of them very strongly engaged in smart-city investment. I had already studied their local real-estate prices and their respective rates of demographic growth. Now, I am adding information about population density. Let me remind you that this variable was deemed vital by the authors of the report 'The State of European Cities': apparently, 3 000 inhabitants per km² is the minimum threshold for considering investment in smart urban infrastructure. The logic behind this particular number is simple: it is the lower threshold of population density that justifies investment in public transport infrastructure. In any case, according to INSEE, the population density in the urban commune of Lyon is 10 583.1 inhabitants per square kilometre. I will pass over the rather unclear fate of that one tenth of an inhabitant per square kilometre. It must be a strange experience, as an inhabitant, to need to be constantly stretched over 10 km² in order to count as a complete inhabitant. The distance from your bedroom to your bathroom can stretch up to 141.4213562 metres, against a maximum of 13.94321098 metres for each of the 10 583 complete inhabitants before the decimal point.
In any case, the second arrondissement of Lyon, the one where the 'Confluence' project is starting to see the light of day, has a slightly lower population density: just 8 926 inhabitants per km². By comparison, Vienna, in Austria, partnered with the city of Lyon in this smart-city project, shows a population density of 4 326.1 inhabitants per square kilometre. Munich, the third city in this tripartite partnership, records 4 700 inhabitants per square kilometre.

Interesting: among these three metropolitan cities, all three engaged in coordinated smart-city investment projects, Lyon has the densest population and is the only one of the three where the smart-city project takes the form of redeveloping an entire district. Another thing that intrigues me: in Lyon, the exact location of the project, the second arrondissement, while very populous, has a population density slightly below the average for the whole city. As if this difference of 10 583.1 – 8 926 = 1 657.1 inhabitants per square kilometre (or 18.6% more) were a kind of residual density, attainable through investment in smart-city infrastructure. Good, I am forming two quick working hypotheses. Firstly, investment in smart-city infrastructure makes sense starting from a population density of 3 000 inhabitants per km², it gains momentum above 4 000 inhabitants per square kilometre, and it has a strong chance of taking the form of redeveloping entire districts when we are talking about 8 000 inhabitants, or more, per average statistical square kilometre. Secondly, the emergence of smart-city infrastructure can bring about a growth in population density of slightly more than 1 500 people per km², or about 18%.
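The residual-density arithmetic above can be checked with a few lines of Python; the figures are the INSEE ones cited above, and the script itself is just an illustration:

```python
# Checking the "residual density" arithmetic: Lyon as a whole vs. the 2nd
# arrondissement, where the "Confluence" project is located (INSEE figures).

lyon_density = 10_583.1       # inhabitants per km2, whole urban commune
confluence_density = 8_926.0  # inhabitants per km2, 2nd arrondissement

residual = lyon_density - confluence_density   # the hypothetical "residual density"
relative_gap = residual / confluence_density   # relative to the district's density

print(f"residual density: {residual:.1f} inhabitants per km2")  # 1657.1
print(f"relative gap: {relative_gap:.1%}")                      # 18.6%
```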

These two hypotheses already allow me to start sketching my business plan for investing in cities whose intelligence has a strong chance of surpassing that of some of their inhabitants. Before I get down to modelling the thing economics-style, here is another handful of interesting data on the subject: the INSEE report entitled 'L'accès aux services, une question de densité des territoires', as well as two EUROSTAT reports, first 'Urban Europe 2016' and then the 'Eurostat regional yearbook'. I hope to come back and discuss the content of all three on my blog, in French or in English (it will depend on the day), but right now, I am modelling. I tell myself that, from a business point of view, investment in smart cities means investment, full stop, and thus a balance sheet. That balance sheet settles into a city in the form of infrastructure: road network, metro, tramway, buildings, sanitation network, energy supply, and so on. This elementary infrastructure is what we need in order to call ourselves a city, but that is not all: we need it at a certain density per square kilometre. Yes, ladies and gentlemen, a city does not only have more people per square kilometre than the countryside, but also more capital invested in infrastructure on that same square kilometre. Well, not rigorously the same kilometre; it can well be another, equivalent kilometre.

In any case, a city is a territory R inhabited by a population N and endowed with infrastructure equivalent in value to an amount of capital K. Any human habitat is characterized by the respective coefficients N/R and K/R. By the way, I know nothing about the second one, i.e. about the book value of urban infrastructure. I will have to look into it. When a city stops being dumb and starts being smart, its balance sheet changes in two ways. On the one hand, there is new infrastructure. Optical fibre in abundance, that's for sure, as well as high-speed Internet signal transmitters. To that, we can add local generating sources of renewable energy: solar panels, small hydraulic turbines; windmills are too big, but a system recirculating the hot water coming from the cooling of big servers works (in Scandinavia, apparently). That is the immediate investment in infrastructure, which suggests more capital on the immediate balance sheet and which is the urban equivalent of a human diploma: it suggests the possibility of a latent intelligence, but does not guarantee it. Besides, it is like a brain. It needs the whole rest of the body, and at the same time it can transform that body. The presence of smart connections changes the way heavy urban infrastructure works. The public transport network, the sanitation network, the energy supply: all of that adapts as it gets more feedback. All those millions of euros invested in pipes, rails, cables etc. will reallocate themselves in space.

Here is an interesting question: after all is said and done, in the long run, does a smart city contain more or less K/R? Once again, I know nothing about the real values of K/R in typical cities, dumb or smart, no matter, and I will have to take a few prisoners of war in order to find out. Nevertheless, I can formulate hypotheses. By economic instinct, I define three distinct periods of time: t-1, or the dumb city of before; t0, or the smart city as I have it before my eyes; and finally t+1, which corresponds to the even smarter city I hope for in the future. Then, I pose two alternative hypotheses. Firstly, I can have K/R(t-1) < K/R(t0) < K/R(t+1): the smart city of the future will have more capital per square kilometre than its previous versions. Secondly, and alternatively, it may be that K/R(t-1) < K/R(t0) and K/R(t+1) < K/R(t-1), i.e. the even smarter city of the future will be less endowed with heavy infrastructure than the dumb city of the past, as it will learn to use said heavy infrastructure in a smarter way.

Good, this is starting to take a coherent shape, or at least I hope so. I can translate my hypotheses into a logical structure with preconditions and possible changes:

  • smart city, necessary condition: N/R > 2 999 inhabitants per km²; K/R > X1 € per km²
  • smart city, it becomes interesting: N/R > 3 999 inhabitants per km²; K/R > X2 € per km²
  • smart city, it becomes political: N/R > 7 999 inhabitants per km²; K/R > X3 € per km²
  • smart city, it can bring change: 1 500 < ∆N/R < 1 700 inhabitants per km², or 15% < (∆N/R)/[N/R(t0)] < 20%, and in addition
    • K/R(t-1) < K/R(t0) < K/R(t+1)

or

  • K/R(t-1) < K/R(t0) and K/R(t+1) < K/R(t-1)
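The preconditions above can be wrapped into a small sketch. The density thresholds are the hypothetical ones from my working hypotheses; K/R is omitted because the thresholds X1–X3 are unknown to me at this point:

```python
# A toy classifier for the density preconditions above. The thresholds are
# hypothetical working assumptions, not established facts; K/R is left out
# since the capital thresholds X1-X3 are unknown.

def smart_city_stage(density_per_km2):
    """Classify a district by population density (inhabitants per km2)."""
    if density_per_km2 < 3000:
        return "below the necessary condition"
    elif density_per_km2 < 4000:
        return "necessary condition met"
    elif density_per_km2 < 8000:
        return "it becomes interesting"
    else:
        return "it becomes political"

# The three partner cities discussed above:
for city, density in [("Lyon, 2nd arrondissement", 8926),
                      ("Vienna", 4326.1),
                      ("Munich", 4700)]:
    print(f"{city}: {smart_city_stage(density)}")
```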

Good, that is the balance-sheet side; now I take a glance at the income-statement side. The return on investment in urban infrastructure can take two forms: a direct one, and another, more chic one. In the direct, brutal way, I can sell access to my infrastructure on a flatly commercial basis, like Internet access. Nevertheless, if the cousin of my school friend's brother-in-law is close to certain circles, I can obtain a public subsidy covering part or all of the revenue I need to get a decent (indecent?) return on my investment. In this second scenario, the inhabitants pay me anyway, but they do it indirectly, through their taxes, and it is called 'sharing' (with the cousin of my school friend's brother-in-law) instead of 'selling and getting paid', and when it sounds good, it is always better. Whatever the case with the cousin of my school friend's brother-in-law, I can hope for a future cash flow FR(t0; t+1). With the investment denoted as ∆K, in equilibrium we need ∆K = FR(t0; t+1), but let's be reasonable: good business is when ∆K < FR(t0; t+1).

The cash flow FR depends, of course, on the number N of inhabitants and on the disposable income D/N per head of inhabitant. Well, not only the head. Manolo Blahnik stilettos are not worn on the head, and there are even cases where they are earned with something other than the head. Whatever the case, the population N has an aggregate disposable income of D = N*(D/N), and what matters is the disposable income per square kilometre of my city (your city too), D/R = (N/R)*(D/N). It is important because my future cash flow FR(t0; t+1) per square kilometre, i.e. FR(t0; t+1)/R, will be a fraction of that disposable income, and that fraction will be a function, which means FR(t0; t+1)/R = f(D/R). At this point, my bag of probabilities becomes really well stocked. The transformation of a dumb city into a smart one can influence practically every variable in the equation. There can be more N/R in a smart city than in the dumb city of before, but that is just a hypothesis. Those people can earn more per head, but there again, it is a hypothesis. The function transforming their disposable income into a cash flow towards investors in smart-city infrastructure can change under the impact of that very investment.
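The revenue side can be sketched the same way. Only the identity D/R = (N/R)*(D/N) comes from the reasoning above; the linear form of f, the income per head, the capture rate and the investment figure below are all purely hypothetical, picked just to make the sketch run:

```python
# A sketch of the revenue model. D/R = (N/R) * (D/N) comes from the text;
# the linear f and every number below are hypothetical illustrations.

def disposable_income_per_km2(n_per_km2, income_per_head):
    # D/R = (N/R) * (D/N)
    return n_per_km2 * income_per_head

def cash_flow_per_km2(d_per_km2, capture_rate):
    # FR(t0; t+1)/R = f(D/R); here f is assumed linear: the smart-city
    # infrastructure captures a fixed share of local disposable income.
    return capture_rate * d_per_km2

d_r = disposable_income_per_km2(8926, 20_000)   # hypothetical 20 000 EUR per head
fr_r = cash_flow_per_km2(d_r, 0.02)             # hypothetical 2% capture rate

# Good business: investment per km2 stays below the expected flow,
# i.e. dK/R < FR(t0; t+1)/R.
dk_r = 3_000_000                                # hypothetical EUR per km2
print(f"D/R = {d_r:,.0f} EUR, FR/R = {fr_r:,.0f} EUR, good business: {fr_r > dk_r}")
```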

Good, enough modelling for today. Now, I have to put this modelled dough in the oven and bake it, so it hardens. If it cracks during baking, it will mean I have to model it all over again.

Those of you who have been kind enough to follow my blogging activity over the last year have probably seen that my goal is to create good-quality science, new or almost new. On my way towards creating a paid educational website, I am going through the stage of crowdfunding. Here is the hyperlink to my account on Patreon. If you feel ready to co-finance my project, you can register as my patron. If you do so, I will be grateful if you point out two important things to me: what kind of reward would you expect in exchange for your patronage, and what stages would you like to see in my project of creating an educational website?

Fringe phenomena, which happen just sometimes

My editorial

I am focusing on the FinTech industry, and I continue studying the report by PricewaterhouseCoopers entitled “Global FinTech Report 2017. Redrawing the lines: FinTech’s growing influence on Financial Services”. I started studying this report in my last update in French (see Quatorze mâles avec le gène Fintech), and, just for the sake of delivering interesting information, I am reproducing, in English, the table with percentages of incidence as regards partnerships with FinTech companies (table 1, below). Still, I want to focus on another topic. Further below, in Table 2, I reproduce the contents of Figure 7, page 8, of that report. It is supposed to show the attitudes that FinTech companies and incumbent financial institutions have towards working with each other. This is quite a piece of information: simple at first glance, bloody cryptic when you really think about it. You can see there one of those tables (in the source report this is a graph, but it does not really help), with a structure much more complex than the content it conveys. I want to use this piece of empirical data as a pretext to discuss the mathematical and logical interpretation I would use to tackle this type of information, i.e. poll percentages. Hence, I invite you to scroll through both tables and to follow my analysis, below table no. 2.

Table 1 Incidence of partnerships with FinTech companies, in the PwC survey 2017

Country | Percentage of respondents currently engaging in partnerships with FinTech companies | Percentage of respondents expecting to increase partnerships over the next three to five years
Germany 70% 78%
Belgium 69% 81%
Netherlands 65% 85%
Australia & New Zealand 64% 83%
South Africa 63% 96%
Canada 62% 88%
Finland 62% 100%
Singapore 62% 89%
Switzerland 59% 82%
Indonesia 55% 94%
Russia 54% 74%
United States 53% 88%
Taiwan 52% 68%
Argentina 50% 83%
France 45% 90%
Global 45% 82%
Poland 44% 64%
United Kingdom 44% 81%
Hungary 43% 74%
India 42% 95%
Luxembourg 42% 83%
Italy 41% 84%
China mainland 40% 68%
Ireland 40% 71%
Hong Kong SAR 37% 82%
Denmark 36% 81%
Mexico 31% 81%
Brazil 30% 72%
Japan 30% 91%
Colombia 25% 93%
Turkey 22% 76%
South Korea 14% 76%

Source: “Global FinTech Report 2017. Redrawing the lines: FinTech’s growing influence on Financial Services”

Table 2 When working with Financial Institutions (or FinTech companies), what challenges do you face? (a dash marks a value not legible in the source graphic)

Field of cooperation | Working with FinTech companies: 2017 score (change vs 2016) | Working with the incumbent financial institutions: 2017 score (change vs 2016)
IT Security | – (–) | 58% (–1%)
Regulatory uncertainty | – (–) | 54% (+5%)
Differences in management and culture | – (–) | – (–)
Differences in business models | – (–) | 35% (+9%)
IT compatibility | – (–) | 34% (+5%)
Differences in operational processes | – (–) | 24% (–11%)
Differences in knowledge and skills | – (–) | 24% (+3%)
Required financial investments | 16% (–) | 17% (–12%)

Source: “Global FinTech Report 2017. Redrawing the lines: FinTech’s growing influence on Financial Services”

Good. You’ve rummaged (intellectually, I mean) through both tables, and now I am focusing on table 2. I am giving here my own interpretation of the data at hand, which is not the same as the one presented in the source report. Firstly, the basic assumption I can derive from the data in question is that PwC asked both some FinTech companies and some classical financial institutions how they get along with each other. The questioning visibly implied that it is not all rose petals and little angels, and that there is tough s**t to handle on both sides. The typical challenges are enumerated in the left column, under the heading ‘Field of cooperation’. Once again, this is a typical case for Ockham’s razor: I have no clue as to how those fields of cooperation were defined. They could have resulted from open, in-depth interviews, or they could have been defined a priori; whatever: I take what I see. They are linguistic pieces of meaning that some people considered intelligible enough to answer questions about, and this is all I need to know at the starting point.

And thus we can pick any percentage we want in that table and ask a legitimate question: “So what?”. This type of percentage is based on the Aristotelian logic of the excluded middle, i.e. anything we encounter in this world is either A or non-A. The supervisor of my Master’s thesis in law, professor Studnicki, used to tease us, his students, with mindf***ing games along the lines of: “All the phenomena in the universe are either pink elephants or non-pink-elephants. Prove me wrong”. It was bloody hard to prove him wrong. This is probably what made him a professor. Anyway, Aristotelian logic comes in handy for basic distinctions, as it does not burden us with too many of them. There is just one. Yet, what Aristotle used to train the minds of young, ancient aristocrats requires some deeper understanding. Suppose I ask you: “Do you consider differences in business models a challenge in cooperating with FinTech companies?”, and you can answer just YES or NO, without any further opportunity to enter into nuanced judgments. I ask the same question of, say, 100 other people. As a result, X% of them say YES, and Y% = 100% – X% say NO. The 100% – X% part is crucial here: each of the percentages X in table 2 (just as in table 1, by the way) makes a closed universe together with its opposite 100% – X. We call it a logical division, both complete (it covers the whole universe) and separable (no overlapping between the classes defined). Mathematically, we translate it as the probability X% of scoring a success in a trial, versus the 100% – X% probability of suffering a failure. In mathematics, we use the binomial distribution to understand such situations. You can check the mathematical details on Wikipedia (a formal, quite complete exposition) or with the Khan Academy. What I want to discuss is the practical logic behind the equations.

In the first place, it is legitimate to ask: ‘Why the binomial distribution? Why not something like the Gaussian, normal distribution, or any other distribution, while we are at it?’. Well, we have just A and non-A, happening with the respective probabilities of X% and 100% – X%. This is all we have. With only two possible outcomes per trial, any continuous, parametric distribution is close to meaningless; the binomial one is the natural, practical logic we can apply in order to understand the situation. Thus, we go with the binomial logic. The interviewer asks a sample of people from FinTech companies what they think about differences in knowledge and skills when working with the incumbent financial institutions (first numerical column in table 2, second row from the bottom); 33% of them say ‘Yes, this is a challenge’ and 100% – 33% = 67% say ‘No problem whatsoever’, or something similar. Provisionally, I label the former answer a success in the experiment, and the latter a failure. In 100 trials, the interviewer scored 33 successes and 67 failures. How could it have happened? How could those 33 successes take place in 100 trials? The binomial distribution says that the probability of having k successes in n trials is defined as P(k; n, p) = [n!/(k!*(n – k)!)]*p^k*(1 – p)^(n – k) (the exclamation mark means a factorial, i.e. 1*2*…*n). It further means that there is some kind of underlying probability p of having any given respondent say ‘Yes, this is a challenge’, and there are 100!/(33!*(100 – 33)!) = 294 692 427 022 541 000 000 000 000 ways of having that underlying probability happen 33 times in 100.

That makes a lot of ways of happening, more than there are people on Earth. Seeing this number could make you turn intuitively towards an attempt to extract that underlying probability from the equation. A legitimate goal, indeed, and yet I want to attract your attention to something else. If I change the proportion between the number of successes and that of failures, so if I take e.g. the incidence of people from FinTech companies mentioning IT security as a challenge when working with incumbent financial institutions, and I have that 58%, it means that out of 100 people interviewed, there are 100!/(58!*(100 – 58)!) = 28 258 808 871 162 600 000 000 000 000 ways of having that happen, which is two orders of magnitude more than in the previous case. Of course, these n!/(k!*(n – k)!) values are pretty abstract, and still they show an important aspect of the situation at hand: different distributions of percentages in the Aristotelian logic of the excluded middle correspond to different degrees of variety in the possible ways those percentages can happen in real life. If I take a question which yields ‘Yes, you were correct in your intuition, dear Interviewer’ in 98 cases out of 100, I have just 100!/(98!*(100 – 98)!) = 4 950 ways of having it happen. If I have something like the electoral results of some politicians I know of, i.e. 99% of support from those voters who are sensible enough not to deny their support (happens in some countries), I have barely 100!/(99!*(100 – 99)!) = 100 ways it can possibly take place.
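Those counts of ways are easy to reproduce; here is a minimal sketch using only Python's standard library:

```python
from math import comb

# Number of distinct ways k successes can occur in n = 100 trials:
# C(n, k) = n! / (k! * (n - k)!)
for k in (33, 58, 98, 99):
    print(f"k = {k:3d}: {comb(100, k):,} distinct ways")

# The binomial probability itself:
# P(k; n, p) = C(n, k) * p**k * (1 - p)**(n - k)
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# e.g. the probability of exactly 33 successes in 100 trials, if the
# underlying probability p happens to be 0.33
print(binom_pmf(33, 100, 0.33))
```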

Do you understand? The closer the split between successes and failures gets to fifty-fifty, the more distinct ways that split can occur. Conversely, the more extreme the percentage, whether it approaches 0% or 100%, the fewer distinct ways it can happen. Middling percentages in table 2, like 58% or 48%, mean that whatever their underlying mechanisms are, those mechanisms can come about through an enormous variety of patterns in human behaviour. Extreme incidences, on the other hand, can happen in only a handful of ways. Counterintuitive? Apparently, yes. Yet, as you think about it, a more robust logic raises its head. If something happens almost always, like atoms happening in the same spot of space-time in the chair I am sitting on, the structure supporting the happening is tightly constrained and stable. My chair is stable: I took it to the carpenter, recently, and he fixed that wobbly leg. If something else happens just sometimes, and its ‘sometimes’ hovers around the fifty-fifty mark, the supporting structure allows for a huge variety of configurations. Here, it becomes logical. Big, stable, central phenomena, which happen in 99 cases out of 100 and make me believe there is any point in all that process called ‘living’, leave very little room for alternative ways of happening. Fringe phenomena, on the other hand, which happen just sometimes, and whose ‘sometimes’ sits in that middle range, can rest on a multitude of ephemeral structures in reality.

This is deep philosophy. How could I have got there, starting from those percentages supplied by PricewaterhouseCoopers? I still wonder.

One more thing. I am working on launching that fully fledged, business-based educational website of my own. If you have been following my blog for some time, you can see I am doing my best to stay consistent in documenting my research in a hopefully interesting form. Right now, I am at the stage of crowdfunding. You can consider going to my Patreon page and becoming my patron. If you decide to, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?